Sunday, October 19, 2014

[tt] WSJ: Why Aristotle Was Wrong

Why Aristotle Was Wrong; The world's first naturalist mixed notes made
from directly observing animals with ludicrous mythical tales. A review of
Armand Marie Leroi's 'The Lagoon.'
Budiansky, Stephen. Wall Street Journal, 17 Oct 2014.

Aristotle, the world's first polymath, was born in 384 B.C., studied at
Plato's Academy in Athens, and wrote with insatiable curiosity and
dazzling originality--interspersed with breathtaking lapses in logic,
facts and common sense--on everything from poetry to the movement of the
planets to the nature of human happiness.

As Armand Leroi observes at the outset of his long, meandering tour
through Aristotle's scientific writings, the study of Aristotle has been
the property of philosophers and classicists. "But the subject he loved
most was biology," suggests Mr. Leroi. He sets out to stake a claim for
the proposition that a modern biologist such as himself (besides being a
host of science shows on British television, Mr. Leroi is a professor of
biology at Imperial College, London) "may just occasionally see in
Aristotle's writings something that the philologists and philosophers have
missed."

The author's excitement at recognizing a scientific kindred spirit in an
ancient Greek who lived two millennia before the founders of modern
biology animates his account. Aristotle may have lacked a microscope or a
knowledge of DNA or evolutionary theory, but he "speaks directly to any
biologist's heart" in wanting above all to know why: Why do fish have
gills instead of lungs? Why do humans walk upright? Why do eagles lay so
few eggs? Why do animals grow old and die?

Mr. Leroi is frequently able to make good on his ambition to pick up on
points that philosophers and historians have missed. He uses Aristotle's
at times remarkably acute biological insights as a springboard for some
beautifully clear explanations of some very difficult concepts in modern
biology. Aristotle seems to have noticed, for instance, the interlocking
reproductive adaptations that permit egg-laying fish to compensate for the
high mortality of embryos: in such species, the females are larger than
the males, they produce vast numbers of small eggs and the fertilized
embryos grow rapidly to minimize their exposure to predators.

The author's enthusiasm carries him away, however. He admits at the outset
that Aristotle was often spectacularly and even ridiculously wrong about a
lot of things. Mr. Leroi sticks, nonetheless, to pressing a sweeping
claim: that (as his subtitle puts it) "Aristotle invented science" and
that, whether we know it or not, modern science "absorbed the very
structure of his thought. . . . His ideas flow like a subterranean river
through the history of our science."

There are two problems with this. One is that it is awfully hard to make a
convincing case for the existence of a subconscious influence. A modern
physicist uses in everything he does the laws of motion described by
Newton more than three centuries ago; an engineer uses the precise mathematical tools
brilliantly conceived by the inventors of calculus, Newton and Leibniz; in
biology, the systematic approach to classification of Linnaeus and the
evolutionary theories of Darwin are the very cornerstones of everything
that has happened since. Try as he might, Mr. Leroi can offer no more than
a vague symbolic or spiritual link between the research of a 21st-century
scientist such as himself and the writings of Aristotle.

The other problem is that Aristotle has a lot to answer for. His ideas
about physics and astronomy (which Mr. Leroi mostly, and conveniently,
ignores) were wrong in every important respect. Aristotle thought
celestial bodies moved in perfect circles, failed to recognize inertia,
asserted that velocity--not acceleration-- was proportional to applied
force, rejected atoms, and argued that the Earth and the heavens were made
of totally different substances. As Bertrand Russell observed: "Throughout
modern times, practically every advance in science, in logic, or in
philosophy has had to be made in the teeth of opposition from Aristotle's
disciples."

Mr. Leroi's determination to defend Aristotle against such charges leads
him to increasingly strained excuses for his hero's lapses. The author
acknowledges that Aristotle's ideas about the development of the embryo
are "quite strange." Among other things, Aristotle thought that eggs arose
from the congealing of semen and menstrual fluids and that semen is
hyper-refined blood, which only males are able to generate because they
are "warmer" than females. But still, Mr. Leroi lamely concludes, "you
have to admire the sheer audacity of his system."

Aristotle indiscriminately mixed direct observations of animals with the
most ludicrous mythical tales (lizards crawl into the noses of donkeys and
stop them from eating). Mr. Leroi excuses this by suggesting, with no
evidence whatsoever, that Aristotle at the same time must have "silently
suppressed vast amounts of dubious data." He pusillanimously sidesteps
Aristotle's unpleasant views about the subservience of women to men and
the natural morality of slavery by suggesting that all Aristotle is saying
is that some people are better suited than others to be managers, a view
that he implies is really no different from modern corporate hiring
practices. And while he indignantly rejects the contention that the legacy
of Aristotle's ideas held back scientific progress for centuries, even he
is finally forced to admit that Aristotle's theories of spontaneous
generation "had a baleful effect on early modern science."

The author's decision to structure the book in part as a personal memoir
would have worked better if the autobiographical anecdotes were not so
humorlessly self-regarding and the bits of Aegean travelogue less clichéd
(there are a lot of picturesque villagers sitting in cafes, invariably
"sipping ouzo"). But I also could not help feeling that the book might
have been better without its central character in places. Mr. Leroi is at
his best explaining the ideas of modern developmental and evolutionary
biology. Aristotle often seemed to keep getting in the way, forever
diverting any attempts at sane discussion with his wild philosophizing,
like some cranky uncle at the Thanksgiving dinner table. It is in fact
Aristotle's disdain for what we now recognize as the true invention of
modern science--the experimental method--that makes him such a maddening
figure, so brilliant, so precociously right at times, so avoidably
muddle-headed at others.

Mr. Budiansky is the author, most recently, of "Mad Music: Charles Ives,
the Nostalgic Rebel."

[tt] NYT: Konika Banerjee and Paul Bloom: Does Everything Happen for a Reason?

Konika Banerjee and Paul Bloom: Does Everything Happen for a Reason?

Konika Banerjee is a graduate student and Paul Bloom is a professor,
both of psychology, at Yale.

ON April 15, 2013, James Costello was cheering on a friend near the
finish line at the Boston Marathon when the bombs exploded, severely
burning his arms and legs and sending shrapnel into his flesh.
During the months of surgery and rehabilitation that followed, Mr.
Costello developed a relationship with one of his nurses, Krista
D'Agostino, and they soon became engaged. Mr. Costello posted a
picture of the ring on Facebook. "I now realize why I was involved
in the tragedy," he wrote. "It was to meet my best friend, and the
love of my life."

Mr. Costello is not alone in finding meaning in life events. People
regularly do so for both terrible incidents, such as being injured
in an explosion, and positive ones, like being cured of a serious
disease. As the phrase goes, everything happens for a reason.

Where does this belief come from? One theory is that it reflects
religious teachings--we think that events have meaning because
we believe in a God that plans for us, sends us messages, rewards
the good and punishes the bad.

But research from the Yale Mind and Development Lab, where we work,
suggests that this can't be the whole story. In one series of
studies, recently published in the journal Cognition, we asked
people to reflect on significant events from their own lives, such
as graduations, the births of children, falling in love, the deaths
of loved ones and serious illnesses. Unsurprisingly, a majority of
religious believers said they thought that these events happened for
a reason and that they had been purposefully designed (presumably by
God). But many atheists did so as well, and a majority of atheists
in a related study also said that they believed in fate--defined
as the view that life events happen for a reason and that there is
an underlying order to life that determines how events turn out.

These atheists' responses weren't just the product of living in
America's highly religious society. Research done at Queen's
University in Belfast by the psychologists Bethany Heywood and Jesse
Bering found that British atheists were just as likely as American
atheists to believe that their life events had underlying purposes,
even though Britain is far less religious than America.

In other studies, scheduled to be published online next week in the
journal Child Development, we found that even young children show a
bias to believe that life events happen for a reason--to "send a
sign" or "to teach a lesson." This belief exists regardless of how
much exposure the children have had to religion at home, and even if
they've had none at all.

This tendency to see meaning in life events seems to reflect a more
general aspect of human nature: our powerful drive to reason in
psychological terms, to make sense of events and situations by
appealing to goals, desires and intentions. This drive serves us
well when we think about the actions of other people, who actually
possess these psychological states, because it helps us figure out
why people behave as they do and to respond appropriately. But it
can lead us into error when we overextend it, causing us to infer
psychological states even when none exist. This fosters the illusion
that the world itself is full of purpose and design.

Some people are more prone to find meaning than others. In
large-scale survey studies also reported in the journal Cognition,
we found that highly paranoid people (who tend to obsess over other
people's hidden motives and intentions) and highly empathetic people
(who think deeply about other people's goals and emotions) are
particularly likely to believe in fate and to believe that there are
hidden messages and signs embedded in their own life events. In
other words, the more likely people are to think about other
people's purposes and intentions, the more likely they are to also
infer purpose and intention in human life itself.

WHATEVER the origin of our belief in life's meaning, it might seem
to be a blessing. Some people find it reassuring to think that there
really are no accidents, that what happens to us--including the
most terrible of events--reflects an unfolding plan. But the
belief also has some ugly consequences. It tilts us toward the view
that the world is a fundamentally fair place, where goodness is
rewarded and badness punished. It can lead us to blame those who
suffer from disease and who are victims of crimes, and it can
motivate a reflexive bias in favor of the status quo--seeing
poverty, inequality and oppression as reflecting the workings of a
deep and meaningful plan.

Not everyone would go as far as the atheist Richard Dawkins, who has
written that the universe exhibits "precisely the properties we
should expect if there is, at bottom, no design, no purpose, no
evil, and no good, nothing but blind, pitiless indifference." But
even those who are devout should agree that, at least here on Earth,
things just don't naturally work out so that people get what they
deserve. If there is such a thing as divine justice or karmic
retribution, the world we live in is not the place to find it.
Instead, the events of human life unfold in a fair and just manner
only when individuals and society work hard to make this happen.

We should resist our natural urge to think otherwise.
tt mailing list

[tt] NYT: Therese Huston: Are Women Better Decision Makers?

Therese Huston: Are Women Better Decision Makers?

Therese Huston is a cognitive psychologist at Seattle University who
is working on a book about women and decision making.

RECENTLY, Senator Kirsten Gillibrand of New York said that if we
want to fix the gridlock in Congress, we need more women. Women are
more focused on finding common ground and collaborating, she argued.
But there's another reason that we'd benefit from more women in
positions of power, and it's not about playing nicely.

Neuroscientists have uncovered evidence suggesting that, when the
pressure is on, women bring unique strengths to decision making.

Mara Mather, a cognitive neuroscientist at the University of
Southern California, and Nichole R. Lighthall, a cognitive
neuroscientist now at Duke University, are two of the many
researchers who have found that under normal circumstances, when
everything is low-key and manageable, men and women make decisions
about risk in similar ways. We gather the best information we can,
we weigh potential costs against potential gains, and then we choose
how to act. But add stress to the situation--replicated in the lab
by having participants submerge their hands in painfully cold,
35-degree Fahrenheit water (about 2 degrees Celsius)--and men and women begin to part ways.

Dr. Mather and her team taught people a simple computer gambling
game, in which they got points for inflating digital balloons. The
more they inflated each balloon, the greater its value, and the risk
of popping it. When they were relaxed, men and women took similar
risks and averaged a similar number of pumps. But after experiencing
the cold water, the stressed women stopped sooner, cashing out their
winnings and going with the more guaranteed win. Stressed men did
just the opposite. They kept pumping--in one study averaging about
50 percent more pumps than the women--and risking more. In this
experiment, the men's risk-taking earned them more points. But that
wasn't always the case.

In another experiment, researchers asked participants to draw cards
from multiple decks, some of which were safe, providing frequent
small rewards, and others risky, with infrequent but bigger rewards.
They found that the most stressed men drew 21 percent more cards
from the risky decks than the most stressed women did, and they
lost more over all.

Across a variety of gambles, the findings were the same: Men took
more risks when they were stressed. They became more focused on big
wins, even when they were costly and less likely.

Levels of the stress hormone cortisol appear to be a major factor,
according to Ruud van den Bos, a neurobiologist at Radboud
University in the Netherlands. He and his colleagues have found that
the tendency to take more risks when under pressure is stronger in
men who experience a larger spike in cortisol. But in women he found
that a slight increase in cortisol seemed actually to improve
decision-making performance.

Are we all aware when our decision making skews under stress?
Unfortunately not. In a 2007 study, Stephanie D. Preston, a
cognitive neuroscientist at the University of Michigan, and her
colleagues told people that after 20 minutes, they would have to
give a talk and would be judged on their speaking abilities. But
first, they had to play a gambling game. Anxious, both men and women
initially had a harder time making good decisions in the game.

But the closer the women got to the stressful event, the better
their decision making became. Stressed women tended to make more
advantageous decisions, looking for smaller, surer successes. Not so
for the stressed men. The closer the timer got to zero, the more
questionable the men's decision making became, risking a lot for the
slim chance of a big achievement.

The men were also less aware that they had used a risky strategy. In
the last few minutes of the game, Dr. Preston interrupted each
person immediately after he or she had just lost money. She asked
people to rate how risky each of their possible choices had been,
including the unsuccessful one they had just made. Women were more
likely to rate their losing strategy as a poor one.

In one interesting study, a team led by Livia Tomova and Claus Lamm,
of the University of Vienna, found through three experiments that
under stressful conditions, women became more attuned to others. In
one, people reached through a curtain and touched something
pleasant, like a feather or a cotton ball, or something unpleasant,
like a slimy mushroom or a plastic slug. Each person could see a
picture of what he or she was touching, and what another person was
touching a few feet away, and had to rate the pleasantness of their
respective experiences. Typically, people merge the other person's
experience with their own--if I'm touching something pleasant,
then I'll rate your slug-touching experience as nicer than I
ordinarily would.

WHEN women were stressed, however, from having to give a public
speech, they actually found it easier than usual to empathize and
take the other person's perspective. Just the opposite happened for
the stressed men--they became more egocentric. If I'm stroking a
piece of silk, that cow tongue you're touching can't be all that
bad.

Of course, just because it works this way in a lab doesn't mean the
same thing happens in the messy real world. Do organizations with
women in charge actually make less risky and more empathetic
decisions in stressful circumstances?

Some evidence suggests they do. Credit Suisse examined almost 2,400
global corporations from 2005 to 2011--including the years
directly preceding and following the financial crisis--and found
that large-cap companies with at least one woman on their boards
outperformed comparable companies with all-male boards by 26
percent.

Some might assume that there was a cost to this as well, that boards
with women must have been excessively cautious before the financial
crisis of 2008, as was the case with the balloon experiment. Not so.

From 2005 to 2007, Credit Suisse also found, the stock performance
of companies with women on their boards essentially matched
performance of companies with all-male boards. Nothing lost, but
much gained.

If we want our organizations to make the best decisions, we need to
notice who is deciding and how tightly they're gritting their teeth.

Unfortunately, what often happens is that women are asked to lead
only during periods of intense stress. It's called the glass cliff,
a phenomenon first observed by the University of Exeter professors
Michelle K. Ryan and Alex Haslam, who is now at the University of
Queensland, in which highly qualified women are asked to lead
organizations only in times of crisis. Think of Mary T. Barra at
General Motors and Marissa Mayer at Yahoo, who were both brought in
only after things had begun to fall apart. If more women were key
decision makers, perhaps organizations could respond effectively to
small stresses, rather than letting them escalate into huge ones.

We can't make the big jobs in government or business any less
stressful. But we can ensure that when the pressure rises, there's a
better balance between taking big risks and making real progress.
Saturday, October 18, 2014

[tt] NYT: David Leonhardt: How Not to Be Fooled by Odds

David Leonhardt: How Not to Be Fooled by Odds

Now that The Upshot puts odds of a Republican takeover of the Senate
at 74 percent, we realize that many people will assume we're
predicting a Republican victory. We're not. There really is a
difference between saying something will almost certainly happen and
saying that it is more likely to happen than not.

We know many people don't buy that, so let me explain. Maybe more
important than any explanation, we have also come up with a list of
outcomes that happen about 26 percent of the time--the flip side
of 74 percent--which gives a sense for how common 26 percent still
is. The list is below.

It's an unavoidable truth that the world is an uncertain place.
Probabilities help us make sense of that uncertainty. When a
situation involves knowable odds and repeated trials, human beings
often feel comfortable with probabilities. If I tell you that the
odds of rolling an eight or lower with two dice are about 74 percent
--72.2 percent to be precise--you get it. You don't think that
I'm predicting you will roll an eight or lower. You won't call me
wrong if you roll an 11. You understand that I am saying you will
roll nine or higher less than 50 percent of the time but more than
zero percent of the time.

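The dice arithmetic here is easy to check for yourself. As a minimal
sketch (mine, not the article's), this enumerates all 36 equally
likely outcomes of two dice and confirms the 72.2 percent figure:

```python
from fractions import Fraction
from itertools import product

# All 36 equally likely (die1, die2) outcomes, summed.
rolls = [a + b for a, b in product(range(1, 7), repeat=2)]

p_eight_or_lower = Fraction(sum(s <= 8 for s in rolls), len(rolls))
p_nine_or_higher = Fraction(sum(s >= 9 for s in rolls), len(rolls))

print(p_eight_or_lower, float(p_eight_or_lower))  # 13/18, about 0.722
print(p_nine_or_higher, float(p_nine_or_higher))  # 5/18, about 0.278
```

The two probabilities sum to 1, and the 26/36 = 72.2 percent value
matches the "about 74 percent" forecast analogy in the text.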
The situation becomes more complicated when the odds are less
knowable and the event in question is not an easily repeatable one,
as is the case with an election. An election brings all kinds of
uncertainties. Is the polling in sparsely populated Alaska accurate?
Will Kansans sour on an independent candidate after an initial
flirtation with him? Will a late-breaking development hurt a
candidate?

[Interactive graphic: Who Will Win the Senate? The Upshot's
state-by-state forecasts are recalculated daily, combining new
polls with other information to estimate the chances for each
party.]

Not only are campaigns uncertain but we also can't rerun them, as we
can roll a pair of dice multiple times and see with our own eyes
that 26 percent doesn't equal 0 and doesn't equal 50 percent. When
an election is over, there is only one outcome, and it can come to
seem inevitable--or at least come to seem as if everyone should
have known the outcome all along. If only a few things had broken
differently in 1960, for example, Richard Nixon might have won and
John F. Kennedy might be remembered as an inexperienced loser.

Our forecast is as much a product of history as it is of math. We've
looked at decades' worth of election results and compared them with
the current polls and other data. The combination indicates that the
Republicans are in a strong position but not an invulnerable one.
When candidates have a two- or three-point lead at this stage, as
some do, history suggests they usually win. Put all that information
together, and the Republicans have roughly a three-in-four chance of
winning enough races to take the Senate.

Though we might wish otherwise, we can't make uncertainty disappear.
Traditionally, a lot of analysis and coverage of elections has made
a version of this mistake. It's been too quick to call a race a
"tossup" even when one candidate had a clear, if still modest,
advantage. To say that one candidate was ahead felt too much like
making a prediction that could later look bad. Calling most
campaigns a tossup or a close call was easier and safer.

But when you do take that approach, you're not adding much useful
information to the discussion. The only way to give useful
probabilities--the kind that scientists, business executives and
successful investors like Warren Buffett use in their worlds--is
to give probabilities that will sometimes seem wrong. To be more
specific, a prediction that puts a 74 percent chance on an outcome
should be "wrong" about 26 percent of the time.
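What "should be wrong about 26 percent of the time" means can be seen
in a quick simulation (my sketch, not from the article): a forecaster
who says "74 percent" and is exactly right about the odds still sees
the event fail roughly a quarter of the time.

```python
import random

random.seed(0)  # fixed seed so the run is reproducible

# Simulate many independent events that each truly occur with
# probability 0.74. A perfectly calibrated "74 percent" forecast
# is "wrong" (the event fails) whenever the draw lands above 0.74.
trials = 100_000
misses = sum(random.random() >= 0.74 for _ in range(trials))

print(misses / trials)  # close to 0.26, not 0 and not 0.5
```

The point is exactly the article's: a miss rate near 26 percent is
evidence of a well-calibrated forecast, not a failed one.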

Here's that list of situations that occur about 26 percent of the
time (specifically, between 25 and 30 percent of the time). Not all
of these are true probabilities; some are just frequencies. But they
all give a sense for what 26 percent means.

* The odds of rolling a 9, 10, 11 or 12 with two dice

* The chances that a blackjack casino dealer busts

* The percentage of calendar years since World War II that the
Standard & Poor's 500 has declined

* The share of days in which it rains in Kansas City (as it did
Monday, postponing a baseball playoff game)

* The frequency with which a 25-year-old woman is shorter than 5
feet 3 inches

* The frequency with which a 25-year-old man is 5-11 or taller

* The odds that a National Football League defense prevents a first
down on third-and-one

* The percentage of full-time graduate students in electrical
engineering in this country who are American citizens

* The percentage of mothers with children under 18 who stay home
with their children

* The share of Americans who live in California, Texas or New York

* The percentage of subway cars on the C line with dirty seats or
floors, according to the Straphangers Campaign

If the odds of a Republican victory--in our Senate model and
others--remain in the neighborhood of 74 percent, it won't be
right if the Republicans win any more than it will be wrong if the
Democrats keep control. These models should instead be judged over
multiple elections and years: When they say 74 percent, have they
made you smarter about the state of a campaign?

[tt] NYT: Feminist Critics of Video Games Facing Threats in 'GamerGate' Campaign

Feminist Critics of Video Games Facing Threats in 'GamerGate' Campaign


Anita Sarkeesian, a feminist cultural critic, has for months
received death and rape threats from opponents of her recent work
challenging the stereotypes of women in video games. Bomb threats
for her public talks are now routine. One detractor created a game
in which players can click their mouse to punch an image of her
face.

Not until Tuesday, though, did Ms. Sarkeesian feel compelled to
cancel a speech, planned at Utah State University. The day before,
members of the university administration received an email warning
that a shooting massacre would be carried out at the event. And
under Utah law, she was told, the campus police could not prevent
people with weapons from entering her talk.

"This will be the deadliest school shooting in American history, and
I'm giving you a chance to stop it," said the email, which bore the
moniker Marc Lépine, the name of a man who killed 14 women in a mass
shooting in Montreal in 1989 before taking his own life.

The threats against Ms. Sarkeesian are the most noxious example of a
weekslong campaign to discredit or intimidate outspoken critics of
the male-dominated gaming industry and its culture. The instigators
of the campaign are allied with a broader movement that has rallied
around the Twitter hashtag #GamerGate, a term adopted by those who
see ethical problems among game journalists and political
correctness in their coverage. The more extreme threats, though,
seem to be the work of a much smaller faction and aimed at women.
Major game companies have so far mostly tried to steer clear of the
vitriol, leading to calls for them to intervene.

While the online attacks on women have intensified in the last few
months, the dynamics behind the harassment go back much further.
They arise from larger changes in the video game business that have
redefined the audience for its products, expanding it well beyond
the traditional young, male demographic. They also reflect the
central role games play in the identity of many fans.

"That sense of being marginalized by the rest of society, and that
sense of triumph when you're recognized," said Raph Koster, a
veteran game developer. "Gamers have had that for quite a while."

Mr. Koster has experienced the fury that has long lurked in parts of
the game community. In the late 1990s, when he was the lead designer
for Ultima Online, a pioneering multiplayer web-based game, he
received anonymous hate messages for making seemingly small changes
in the game.

After an electrical fire at his house, someone posted a note on Mr.
Koster's personal website saying he wished the game designer had
died in the blaze.

The malice directed recently at women, though, is more intense,
invigorated by the anonymity of social media and bulletin boards
where groups go to cheer each other on and hatch plans for action.
The atmosphere has become so toxic, say female game critics and
developers, that they are calling on big companies in the
$70-billion-a-year video game business to break their silence.

"Game studios, developers and major publishers need to vocally speak
up against the harassment of women and say this behavior is
unacceptable," Ms. Sarkeesian said in an interview.

Representatives for several major game publishers--Electronic
Arts, Activision Blizzard and Take-Two Interactive Software--
declined to comment.

"Threats of violence and harassment are wrong," the Entertainment
Software Association, the main lobbying group for big game
companies, said in a statement. "They have to stop. There is no
place in the video game community--or our society--for personal
attacks and threats."

On Wednesday, as word of the latest threat against Ms. Sarkeesian
circulated online, the hashtag #StopGamerGate2014 became a trending
topic on Twitter. The term #GamerGate was popularized on the social
media service over the past two months after an actor, Adam Baldwin,
used it to describe what he and others viewed as corruption among
journalists who cover the game industry. People using the term have
been criticizing popular game sites for running articles and opinion
columns sympathetic to feminist critics of the industry, denouncing
them as "social justice warriors."

In a phone interview, Mr. Baldwin, who said he was not an avid gamer
himself but has done voice work for the popular Halo games and
others, said he did not condone the harassment of Ms. Sarkeesian and
other women.

"GamerGate distances itself by saying, 'This is not what we're
about,' " said Mr. Baldwin. "We're about ethics in journalism."

While harassment of Ms. Sarkeesian and other women in the video game
business has been an issue for years, it intensified in August when
the former boyfriend of an independent game developer, Zoe Quinn,
wrote a rambling online essay, accusing her of having a relationship
with a video game journalist.

That essay, in turn, fueled threats of violence against Ms. Quinn,
who had designed an unconventional game about depression, and gave
fodder to those suspicious of media bias in the industry. The game
review site Kotaku, which employed the journalist named in the
accusation, said he had not written about her game. Ms. Quinn said
that she had left her home and not returned because of harassment.

And last week an independent game developer in Boston, Brianna Wu,
said she was driven from her home by threats of violence after she
poked fun at supporters of #GamerGate on Twitter. "From the top down
in the video game industry," she said, "you have all these signals
that say, 'This is a space for men.' "

Gaming--or at least who plays video games--is quickly changing,
though. According to the Entertainment Software Association, 48 percent
of game players in the United States are women, a figure that has
grown as new opportunities to play games through mobile devices,
social networks and other avenues have proliferated. Game
developers, however, continue to be mostly male: In a survey
conducted earlier this year by the International Game Developers
Association, a nonprofit association for game developers, only 21
percent of respondents said they were female.

Still, game companies have made some progress in their depiction of
women in games, said Kate Edwards, the executive director of the
association, who works with companies to discourage them from
employing racial and sexual stereotypes in their games. A game
character she praises is the new version of Lara Croft, the heroine
of the Tomb Raider series who once epitomized the exaggerated, busty
stereotype of a female game protagonist. The new Lara Croft is more
emotionally complex and modestly proportioned.

Ms. Edwards said changes in games and the audience around them have
been difficult for some gamers to accept.

"The entire world around them has changed," she said. "Whether they
realize it or not, they're no longer special in that way. Everyone
is playing games."

[tt] NYT Daily Book Review of Atul Gawande: Being Mortal

A Prescription for Life's Final Stretch

Medicine and What Matters in the End
By Atul Gawande
282 pages. Metropolitan Books/Henry Holt & Company. $26.

Atul Gawande's "Being Mortal: Medicine and What Matters in the End"
introduces its author as a myopically confident medical school
student whose seminar in doctor-patient interaction spent an hour on
Tolstoy's novella "The Death of Ivan Ilyich." As a young man, he was
not ready to understand the title character's loneliness, suffering
and desire to be pitied. He saw medical compassion as a given and
Ivan Ilyich's condition as something modern medicine could probably
cure. He and his fellow students cared about acquiring knowledge and
competence. They did not see mortality as part of the medical mission.

Now a surgeon (and rightfully popular author) in his 40s, Dr.
Gawande sees why that story was part of his training. "I never
expected that among the most meaningful experiences I'd have as a
doctor--and, really, as a human being--would come from helping
others deal with what medicine cannot do as well as what it can," he

"Being Mortal" uses a clear, illuminating style to describe the
medical facts and cases that have brought him to that understanding.
He begins with an anecdote that illustrates how wrong doctors can be
if they let their hubris and fear of straight talk meld with a
patient's blind determination to fight on, no matter what. "Don't
you give up on me," demands a man with cancer, though the surgery he
wants cannot possibly cure him. "He was pursuing little more than a
fantasy at the risk of a prolonged and terrible death--which was
precisely what he got," Dr. Gawande writes.

Such things happen because modern death-delaying techniques are
relatively new in medicine. Which patients have long-term
life-threatening conditions and which are really at death's door? In
what Dr. Gawande calls "an era in which the relationship between
patient and doctor is increasingly miscast in retail terms," how
easy is it for doctors--trained to solve problems and succeed--
to acknowledge that there's no cure to be had? How many doctors,
used to telling their patients how to live, are ready to talk to
them about how to die?

Dr. Gawande's early description of how the body decays with age is
nothing if not sobering. It's one thing to know that arteries
harden; it's another to learn that he, as a surgeon, has encountered
aortas so calcified that they crunch. And so it goes with this
book's thorough litany of body parts, from the news that an elderly
person's shrinking brain can actually be knocked around inside his
or her skull to the way a tooth can determine a person's age, give
or take five years. Eat and exercise however you want, tell everyone
how old your grandparents lived to be: According to "Being Mortal,"
none of these factors do much to slow the march of time.

So a lot of the book is devoted to subjects generally unmentionable,
like geriatrics. "When the prevailing fantasy is that we can be
ageless, the geriatrician's uncomfortable demand is that we accept
we are not," he writes. And the number of doctors willing to become
geriatricians is shrinking, partly because the field is not as
lucrative as, say, plastic surgery, and partly because it provides
so little instant satisfaction, and requires such work as a
detailed, lengthy examination of callused old feet.

"Mainstream doctors are turned off by geriatrics, and that's because
they do not have the faculties to cope with the Old Crock," says Dr.
Felix Silverstone, a specialist in the field. To summarize: This
hypothetical Old Crock is deaf and forgetful, can't see, has trouble
understanding what the doctor says and has no one chief complaint;
he has 15 of them. He has high blood pressure, diabetes and
arthritis. "There's nothing glamorous about taking care of any of
those things."

But patients who receive good geriatric care stay happier and
healthier, just as old people who can remain at home and aren't
forced into nursing homes are better able to enjoy their lives. This
book makes a thorough inquiry into how the idea of the
assisted-living facility arose as a supposed improvement on
regimented nursing homes but has often become a disheartening place
for independent-minded people to have to go. The all-important
quality-of-life issue that is used to market such places, Dr.
Gawande maintains, is directed more toward the people planning to
leave Mom there than toward Mom herself. But he sees a lot of hope in
the group living concept, if it is overseen with the residents'
happiness in mind.

The toughest stories in the book are, of course, the terminal ones.
And Dr. Gawande gives an agonizing account of how his own father,
also a surgeon, gradually lost control of his body, even while
understanding exactly what was happening to him. He writes of his
family's ordeal in facing the reality of this downhill slide, and of
his own particular helplessness as a doctor. He captures the
inevitable physical intimacy that comes with death, which is perhaps
the strangest shock to a culture that has used hospitals and nursing
facilities to isolate the dying from the healthy in ways that
earlier generations never could.

Last and hardly least, Dr. Gawande describes some of his toughest
cases, including that of a pregnant 34-year-old with terminal cancer
(a tough fighter facing a heartbreaking situation) and a woman whose
abdominal troubles prove far more awful than anything the doctor
anticipated. By then, he has made a subtle but all-important change
in how he answers patients' terrified questions. Asked "Am I going
to die?" his answer could be: "No, no, no. Of course not." But he
learns to say, "I am worried." That's a way of being honest, serious
and empathetic, showing he is wholly on the patient's side. It won't
work miracles. But it's the best a doctor can do.
tt mailing list

[tt] NS 2990: Let science decide the voting age

* 14 October 2014 by Laurence Steinberg

Laurence Steinberg is professor of psychology at Temple University
in Pennsylvania. His new book is Age of Opportunity: Lessons from
the new science of adolescence (Eamon Dolan)

Research on the adolescent brain can help us decide whether
16-year-olds should have the vote

A WAVE of interest in lowering the voting age is sweeping the UK,
catalysed by Scotland's recent independence referendum. For the
first time in the country, the ballot was open to people aged 16 and
17. Other countries watched this political experiment closely and
are looking afresh at their voting age limits - but politics, not
evidence, is generally driving the discussion.

Societies have long had a raft of legal boundaries between
adolescence and adulthood, decreeing at what age we can or cannot do
things such as drive, drink alcohol or vote. In most countries, 18
is the age of majority for most purposes, but there are all sorts of
exceptions.

The US has probably the most diverse and logically inconsistent
array of limits. You can drive at 16 but not drink alcohol until 21.
In between those two extremes fall the ages at which you can see a
movie intended for adults, enter into a legally binding contract, or
buy cigarettes. Moreover, in most US states 12-year-olds who commit
a serious violent crime can be tried as adults, because they are
viewed as "old enough to know better", whereas a 25-year-old with a
clean driving record might not be able to rent a car without paying
an "immaturity premium".

Sentiment, be it political or public, tends to dictate these limits,
with little regard for what we know about the psychological maturity
of young people.

This raises the question: can the age of majority be better
determined by paying heed to science? Advances in the study of brain
development have greatly furthered our understanding of how, why,
and in what ways intellectual capabilities change during the
transition from adolescence to adulthood. Let us apply this to the
issue at hand.

Over about 40 years - not long in the grand scheme of things - the
voting age in Scotland has dropped from 21 to 16, although 16
applied only in the case of the independence referendum. From the
perspective of brain science, is one of these ages a wiser choice
than the other, or should the UK continue to split the difference at
18? And if the voting age were lowered to 16 across the UK, should
other boundaries be eased as well? In the 1970s, for example, the US
lowered the age of alcohol consumption when the voting age fell to
18 (although most states later reverted the drinking age to 21 amid
drink-driving concerns).

Research on adolescent brain development does not point to an
obvious age at which a sharp legal distinction between adolescents
and adults should be drawn for all purposes, but it is very
informative. People reach various kinds of maturity between the ages
of roughly 15 and 22. Adolescents' judgement in situations that
permit unhurried decision-making and consultation with others - what
psychologists call "cold cognition" - is likely to be as mature as
that of adults by 16. In contrast, adolescents' judgement in
situations characterised by heightened emotions, time pressure or
the potential for social coercion - "hot cognition" - is unlikely to
be as mature as that of adults until they are older, certainly no
younger than 18 and perhaps not until they are 21. This distinction
is partly related to our understanding of changes in the brain's
prefrontal cortex, which usually continue for the first 20 years of
life.

If science were an important consideration in establishing the age
of legal adulthood, as I believe it should be, a reasonable starting
point would be to distinguish between regulations for activities
that involve cold cognition and those in which hot cognition is
involved.

Cold cognition is relevant to matters such as voting, granting
informed consent for medical procedures or taking part in a
scientific study, and competence to stand trial in court. In these,
adolescents can gather evidence, consult advisers (such as parents,
physicians or lawyers), and take time before making a decision. Time
pressure and peer pressure aren't usually factors.

I see no reason why a pregnant 16-year-old, given adequate time and
the opportunity to discuss the decision with an adult, shouldn't be
able to get an abortion or contraception without her parents'
involvement, or why we shouldn't let 16-year-olds vote. Indeed, they
can vote in Austria, Argentina, Brazil, Ecuador, and Nicaragua.

I certainly wouldn't recommend changing the age limit to 16 for all
purposes, though. A later threshold is more sensible for matters
that involve hot cognition, such as driving, drinking and criminal
responsibility. Here the circumstances are usually those that bring
out the worst in adolescents' judgement. They frequently pit the
temptation of immediate rewards against the prudent consideration of
long-term costs, occur against a backdrop of high emotion, and are
influenced by other adolescents.

These are the very conditions under which adolescent decision-making
is more impulsive, more risky and more myopic than that of adults.
Given this, we ought to set the minimum driving age and the minimum
age of adult criminal responsibility at 18, and continue to restrict
minors' access to alcohol, tobacco and, where it is legal,
marijuana.

Science cannot be the only consideration in drawing legal
boundaries, to be sure, but it ought to play a role in these
discussions. I don't harbour any delusions about the use of
scientific evidence to inform policy-making, though. If the
political will is absent, no amount of science, no matter how
persuasive, will change the law.

Politicians and advocacy groups use science in the way that drunks
use lampposts - for support, not illumination. That quip,
ironically, originated from the pen of one Andrew Lang, a poet,
scholar and son of Scotland.