Thursday, September 3, 2015

[tt] NS 3036: Leader: Smart machines may discover things we can't, but we still matter

ANYTHING you can do, a machine can do better. Well, not really, but
that's what it has felt like lately. Rapid advances in "deep
learning" have led to computers whose abilities rival or exceed
ours, in tasks ranging from describing pictures to driving (page ...).

Cue a wave of anxiety about the potential for automatons to displace
us not just on production lines, but also in white-collar and
creative work. Some pundits are even warning that employment as we
know it will cease to exist.

Perhaps. Right now, computers are mostly augmenting human smarts -
for example, in medical diagnosis (page 18). They can also solve
problems without being limited by the human mind, allowing them to
invent novel gadgets or even push back the frontiers of mathematics
(page ...).

Will machines one day supplant thinkers and tinkerers altogether?
Even where they outperform us, their discoveries will only be useful
if we can make sense of them and apply them.

We already use tools that do things no human could, and whose
workings no one person understands. In those respects, "thinking"
machines are not so different from, say, the LHC - tools, or perhaps
partners. At least until they start to have questions and desires of
their own...
tt mailing list

[tt] NS 3036: Leader: It's not too late to reclaim our online privacy

26 August 2015

PRIVACY is dead. So opined many last week, after hackers leaked
personal information about some 35 million users of Ashley Madison,
a website designed to facilitate extra-marital affairs. Humiliation
on a massive scale followed. The fallout - including lawsuits,
extortion attempts and, reportedly, suicides - is unlike that of any
previous hack.

Don't be distracted by whatever opinions you may have about the
morality of the site and its users. The real point is that ordinary
people have now become targets of the extreme violations of privacy
usually directed at public figures, fuelled by a toxic mix of
prurience and schadenfreude - and on unprecedented scales.

How did we get here? Data has become currency: we barter it for
services from operating systems to music players, while accepting
promises of personalisation and assurances of security from those to
whom we entrust it.

That trust is misplaced. Silicon Valley is built on data trading,
and its products reflect that. Webmail isn't encrypted; that would
stop lucrative ads. Apps don't tell us what they're doing.
Ad-trackers stalk you as you browse. And instead of real security,
we are exhorted to strengthen our passwords - which is unintuitive
and largely futile.

This isn't how things have to be. Researchers are testing systems
that would return control over our data to us (see "Your data, your
rules"). Hacking would still be a risk, but there would be fewer
enticing targets. Such systems are complex, but so are, say,
spam-filtering and online shopping. Given a will to use them, we
would find a way.

Does that will exist? Not among the social media titans. Start-ups
focused on privacy haven't caught on so far. Perhaps it is more
likely to come from a grass-roots effort. That can be effective:
ad-blocking extensions for web browsers, built by volunteers, are
now used by nearly 200 million people. But if the effort doesn't
start soon, vested interests may become too deeply entrenched to
overturn. And that really might kill off privacy for good.

[tt] NS 3036: Why being bored is stimulating - and useful, too

26 August 2015

AS I sit, trying to concentrate, my toes are being very gently
nibbled. It's my dog, Jango, an intelligent working breed, and he's
telling me that he is bored. I know from experience that if I don't
take him out right now, or at least find him a toy, he will either
pull my socks off and run away with them, or start barking like a
beast possessed.

His cousins in the wild don't seem to suffer the same problem.
Coyotes spend 90 per cent of their time apparently doing nothing,
but never seem to get fed up, according to Marc Bekoff at the
University of Colorado in Boulder, who has studied them for years.
"They might be lying down but their eyes are moving and their heads
are moving and they are constantly vigilant," he says. Trapped
indoors, Jango has little to be vigilant about, and a lot of spare
mental capacity. Bored office workers everywhere will know the feeling.

"Boredom is the dream bird that hatches the egg of experience"
Walter Benjamin

We tend to think of boredom as a price we pay for being intelligent
and self-aware. Clearly we aren't the only species to suffer. Yet,
given how common this emotion is in daily life, it's surprising how
little attention it has received. Now that is changing and, as
interest increases, researchers are addressing some fascinating
questions. What exactly is boredom? Why are some people more prone
to it than others? What is it for? Is it a good or bad thing? And
what can we do to resist it when it strikes? Some of the answers are
hotly contested - boredom, it turns out, is really rather complicated.

Like other emotions, boredom didn't just arise spontaneously when
humans came on the scene. Many creatures, including mammals, birds
and even some reptiles, seem to have a version of it, suggesting
that there is some kind of survival advantage to feeling bored. The
most plausible explanation is that it serves as a motivator. Boredom
could have evolved as a kind of kick up the backside, suggests
animal psychologist Francoise Wemelsfelder at the Scottish
Agricultural College in Edinburgh, UK. "If a wild animal has done
nothing for a while there is a lot of evidence that it will go out
to look for things to do, and there is definitely survival value in
that," she says. It will know, for example, that an escape route is
blocked, because it has explored its territory.

Where boredom stops being useful and starts becoming a problem is
when the desire to explore is thwarted. "All animals want and need
to engage with the environment," says Wemelsfelder. That's why they
get bored if you put them in a plain wire cage and, if left there,
may end up exhibiting strange behaviours such as pacing in a figure
of eight or pulling out their own feathers. "Even if they don't sit
there thinking, 'damn I'm bored', I still think they suffer," she says.

"The cure for boredom is curiosity. There is no cure for curiosity"
Dorothy Parker

Human boredom may be more complex, but there are parallels. In his
book, Boredom: A lively history, Peter Toohey at the University of
Calgary, Canada, compares it to disgust - an emotion that motivates
us to avoid certain situations. "If disgust protects humans from
infection, boredom may protect them from 'infectious' social
situations," he suggests. And, as with other animals, boredom seems
to occur when we feel physically or mentally trapped. One study, for
example, found that people given no choice but to participate in a
dull activity in the lab reported that time dragged, and rated the
task as more boring than those who had chosen to participate.

We all know how it feels - it's impossible to concentrate, time
stretches out, a fog descends and all the things you could do seem
equally unlikely to make you feel better. But defining boredom so
that it can be studied in the lab has proved difficult. For a start,
it isn't simply about aversion or feeling trapped, but can include a
lot of other mental states, such as frustration, apathy, depression,
indifference and surfeit. There isn't even agreement over whether
boredom is always a low-energy, flat kind of emotion or whether
feeling agitated and restless counts as boredom, too.

Thomas Goetz at the University of Konstanz in Germany suspects it
can be all of these things. By asking people about their experiences
of boredom, he and his team have recently identified five different
types: indifferent, calibrating, searching, reactant and apathetic
(see "What's your boredom style?" below). These can be plotted on
two axes - one running left to right, which measures low to high
arousal, and the other from top to bottom, which measures how
positive or negative the feeling is. Intriguingly, Goetz has found
that while people experience all kinds of boredom, and might flit
from one to another in a given situation, they tend to specialise in
one. However, it remains to be seen whether there are any character
traits that predict the kind of boredom each of us might be prone to.

Of the five types, the most damaging is "reactant" boredom with its
explosive combination of high arousal and negative emotion, which
adds up to a restless, angry person in need of an outlet. The most
useful is what Goetz calls indifferent boredom: someone isn't
engaged in anything satisfying, but neither are they particularly
fed up, and they actually feel relaxed and calm. He believes that in the
right circumstance this type of boredom can be a positive
experience. "If you have a hard day and in the evening you go to a
class, it might be boring but it's OK to be bored because you had a
stressful day. Time feels like it's standing still, but it's not too
bad," he says.

Psychologist Sandi Mann at the University of Central Lancashire, UK,
goes further. She believes this positive kind of boredom and, to
some extent all kinds, can be good for us. "All emotions are there
for a reason, including boredom," she says. As well as motivating us
to do something more interesting, Mann has found that being bored
makes us more creative. "We're all afraid of being bored but in
actual fact it can open our minds, it can lead to all kinds of
amazing things," she says.

"The two enemies of human happiness are pain and boredom"
Arthur Schopenhauer

In experiments published last year Mann found that people who had
been made to feel bored by copying numbers out of the phone book for
15 minutes came up with more creative ideas about how to use a
polystyrene cup than a control group who had gone straight to the
cup problem. People who just read the phone book for 15 minutes were
more creative still. Mann concluded that a passive, boring activity
is best for creativity because it allows the mind to wander. In
fact, she goes so far as to suggest that we should seek out more
boredom in our lives.

Psychologist John Eastwood at York University in Toronto, Canada,
isn't convinced. "If you are in a state of mind-wandering you are
not bored," he says. "In my view, by definition boredom is aversive,
it's an undesirable state." That doesn't necessarily mean that it
isn't adaptive, he adds. "Pain is adaptive - if we didn't have
physical pain, bad things would happen to us. Does that mean that we
should actively cause pain? No." In other words, even if boredom has
evolved to help us survive, it can still be toxic if allowed to
fester. "All emotions tell us how we are in the world. Boredom tells
us we have pent-up, unused potential and desire to connect to the
world. Then the question is what do I do to cope with that
situation?" Eastwood says.

Eastwood is interested in what boredom actually is - and his model
highlights why it can be so difficult to cope with. For him, the
central feature is a failure to switch our attention system into
gear. The problem isn't so much a lack of stimulating things to do,
but trouble focusing on anything. With nothing to focus your
attention away from the passage of time, it seems to go painfully
slowly. What's more, your efforts to rectify the situation can end
up making you feel worse. "People try to connect with the world and
if they are not successful there's that frustration and
irritability," he says. "Then they fall back into lethargy and if
that doesn't work they get aroused again, so there's an oscillation
between the under and over-arousal states in an attempt to resolve
the problem." Perhaps most worryingly, says Eastwood, repeatedly
failing to engage attention can lead to a state where we don't know
what to do any more, and no longer care.

Eastwood's group is now exploring why the attention system fails.
It's early days but they think that at least some of it comes down
to personality. Boredom proneness has been linked with a variety of
traits (see "Do boring people get bored?"). People who are motivated
by pleasure - the sensation-seeking stimulation junkies - seem to
suffer particularly badly, as do anxious types. Other personality
traits, such as curiosity and self-control, are associated with a
high boredom threshold. What Eastwood's team would like to know is
why the attention system is prone to fail in some types of people
more than others, what this suggests about the neuroscience of
attention failure, and whether this can tell us anything about why
some people experience boredom more than others.

Bored to death?

Whatever its cause, a failure to focus might help explain why
boredom feels bad. Psychologists Matthew Killingsworth and Daniel
Gilbert at Harvard University used a smartphone app to interrupt
people at random intervals to ask them if they were on-task and how
happy they felt. It turned out that the unhappiest people were those
who were least focused on what they were supposed to be doing.

"Boredom is the root of all evil - the despairing refusal to be
Soren Kierkegaard

More evidence that boredom has detrimental effects comes from
studies of people who are more or less prone to boredom. It seems
those who bore easily face poorer prospects in education, their
career and even life in general. They are also more likely to have
problems with anger and aggression, and to partake in risky
behaviours such as alcohol and drug abuse and gambling. One study
even seemed to suggest that it's possible to be bored to death.
Researchers from University College London looked at self-rated
boredom levels in civil servants in 1985. When they followed them up
in 2009, they found those who had been consistently bored were
significantly more likely to die early.

"Boredom: the desire for desires"
Leo Tolstoy

Of course, boredom itself cannot kill, it's the things we do to deal
with it that may put us in danger. What can we do to alleviate it
before it comes to that? Goetz's group has one suggestion. Working
with teenagers, they found that those who "approach" a boring
situation - in other words, see that it's boring and get stuck in
anyway - report less boredom than those who try to cope by avoiding
it and mucking around. So when boredom strikes, distracting yourself
from the feeling with snacks, TV or social media probably isn't the
best strategy.

In fact our techno-loaded, overstimulated lives might be part of the
problem. Mann believes that with so many distractions we are
neglecting our ability to daydream. "We have this inbuilt mechanism
to cope with boredom, but we're not using it," she says.
Wemelsfelder speculates that our overconnected lifestyles might even
be a new source of boredom. "In modern human society there is a lot
of overstimulation but still a lot of problems finding meaning," she
says. So instead of seeking yet more mental stimulation, perhaps we
should leave our phones alone, listen to the boredom and use it to
motivate us to engage with the world in a more meaningful way.

If that sounds too hard, technology itself might provide an answer
in the future. Sidney D'Mello at Notre Dame University in Indiana is
working on a computer-based tutor for use in schools. By tracking
eye position and body posture, it can tell when a person is getting
bored, and will adjust its instructions accordingly. It's not
difficult to imagine a similar program sitting on every office
desktop, waiting to cajole you back into action. Ironically, it
might turn out to be one kind of techno-distraction that we find
incredibly easy to turn off.

Do boring people get bored?

It is sometimes said that only boring people get bored. That is
almost certainly unfair, but some people clearly suffer more than
others. The standard way to measure a person's propensity to boredom
is the boredom proneness scale (BPS), first published in 1986 by
Richard Farmer and Norman Sundberg of the University of Oregon (you
can take the test below).


So who is easily bored? Studies using the BPS indicate that men get
bored more than women, that extroverts are prone, as are people with
narcissistic personality traits, anxious types and those who lack
self-awareness. Highly competitive sorts who are also
sensation-seekers are particularly prone, which has led some to
suggest a link between boredom and a heightened desire for
stimulation caused by low levels of the "pleasure" neurotransmitter
dopamine. On the other hand, creative people and those with a higher
need for mental stimulation seem to be protected from boredom to
some extent, perhaps because they do better at finding some interest
or meaning in whatever they have to do.

But it's not just about your boredom threshold; how intensely you
experience boredom also matters. "You might not score high on
boredom proneness but in the moment, you might still be really
bored," says John Eastwood at York University in Toronto, Canada. He
prefers to see boredom as a state, rather than a trait, and has
developed his own test - the multidimensional state boredom scale -
to measure how it feels in the here and now.

Nothing evokes this state quite as well as feeling trapped in a
situation where you have no control over your choices. So it's not
about being boring. "Only captive people get bored" might be a more
accurate statement.

You might be interested to know...

6 hours: how long the average Briton is bored each week
online survey

73%: People who believe boredom can be positive
Journal of Applied Social Psychology, vol 30, p 576

1 in 10: People who claim to never be bored
Journal of Applied Social Psychology, vol 30, p 576


[tt] NS 3036: How creative computers will dream up things we'd never imagine

26 August 2015

ON A summer's day in 1899, a bicycle mechanic in Dayton, Ohio, slid
a new inner tube out of its box and handed it to a customer. The
pair chatted and the mechanic toyed idly with the empty box,
twisting it back and forth. As he did so, he noticed the way the top
of the box distorted in a smooth, spiral curve. It was a trivial
observation - but one that would change the world.

The shape of the box just happened to remind the mechanic of a
pigeon's wing in flight. Watching that box flex in his hands, Wilbur
Wright saw how simply twisting the frame supporting a biplane's
wings would give him a way to control an aircraft in the air.

Serendipity and invention go hand in hand. The Wright brothers'
plane is just one of many examples. Take velcro: George de Mestral
invented the material after he noticed the hook-covered seeds of the
burdock plant sticking to his dog. And Harry Coover's liquid plastic
concoction failed miserably as a material for cockpit canopies, as
it stuck to everything. But it had a better use: superglue.

It may be romantic, but it is an achingly slow way to advance
technology. Relying on happenstance means inventions that could be
made today might not appear for years. "The way inventions are
created is hugely archaic and inefficient," says Julian Nolan, CEO
of Iprova, a company based in Lausanne, Switzerland, which
specialises in generating inventions. Nothing has changed for
hundreds of years, he says. "That's totally out of sync with most
other industries."

But we are starting to make our own luck. Those eureka moments could
soon be dialled up on demand as leaps of imagination are replaced by
the steady steps of software. From algorithms that mimic nature's
way of producing the best designs to systems that look for gaps
between existing patented technologies that new designs might fill,
computer-assisted invention is here.

The impact could be huge. Some claim automated invention will speed
up technological progress. It could also level the playing field,
making inventors of us all. But what happens if the currency of
ideas is devalued? To qualify for a patent, for example, an idea
can't be "obvious". How does that apply when ideas are found by
brute force?

The first group to mimic evolution in patent design - pioneering the
use of so-called genetic algorithms (see "As nature intended") - was
led by John Koza at Stanford University in California in the 1990s.
The team tested their algorithms by seeing if they could reinvent
some of the staples of electronic design: the early filters,
amplifiers and feedback control systems developed at Bell Labs in
the 1920s and 1930s. They succeeded. "We were able to reinvent all
the classic Bell Labs circuits," says Koza. "Had these techniques
been around at the time, the circuits could have been created by
genetic algorithms."

In case that was a fluke, the team tried the same trick with six
patented eyepiece lens arrangements used in various optical devices.
The algorithm not only reproduced all the optical systems, but in
some cases improved on the originals in ways that could be patented.

The versatility of this type of algorithm is clear from the showcase
of evolved inventions at the annual Genetic and Evolutionary
Computation Conference (GECCO). Innovations at this year's event
included efficient swimming gaits for a four-tentacled, octopus-like
underwater drone - evolved by a team at the BioRobotics Institute in
Pisa, Italy - and the most fuel-efficient route for a future space
probe to clean up low-Earth orbits. Engineers at the European Space
Agency's advanced concepts lab in Noordwijk, the Netherlands,
treated the task like a cosmic version of the famous travelling
salesman problem - but instead of cities, their probe visits
derelict satellites and dead rocket bodies to nudge them out of orbit.

However, the big prize at GECCO is the human competitiveness award,
or "Humie", for inventions deemed to compete with human ingenuity.
The first Humie, in 2004, was awarded for an odd-shaped antenna,
evolved for a NASA-funded project. It worked brilliantly even though
it looked like a weedy sapling, with a handful of awkwardly angled
branches, rather than a regular stick-like antenna. It certainly
wasn't something a human designer would produce.

That is often the point. "When computers are used to automate the
process of inventing, they aren't blinded by the preconceived
notions of human inventors," says Robert Plotkin, a patent lawyer in
Burlington, Massachusetts. "So they can produce designs that a human
would never dream of."

This year's Humie winner was a way to improve the accuracy of
super-low-power computers. So-called approximate computers are built
from simple logic circuitry that consumes very little power but can
make a lot of mistakes. By evolving smart software routines for such
computers, Zdenek Vasicek at Brno University of Technology in the
Czech Republic was able to correct many of the errors introduced by
the simple design. The result is a greener chip for use in
applications where computational exactness doesn't matter, like
streaming music or video.

There's just one problem with using genetic algorithms: you need to
know in advance what you want to invent so that your algorithm can
modify it in fruitful ways. "Genetic algorithms work well when you
already know all the relevant features and can vary them until you
get a solution that satisfies all your fitness constraints," says
Tony McCaffrey, chief technology officer of Innovation Accelerator
based in Natick, Massachusetts. Nolan agrees: "Genetic algorithms
tend to be good at optimising pre-existing inventions but typically
not ones of great commercial value." That's because they don't take
big, inventive steps, he says, and so have less chance of making a
commercially valuable hit.

Innovation Accelerator's approach is to use software to help
inventors notice easily missed features of a problem that, if
addressed, could lead to a novel invention. "An invention is
something new that was not invented before because people overlooked
at least one thing that the inventor noticed," says McCaffrey. "If
we can get people to notice more obscure features of a problem, we
raise the chances that they will notice the key features needed to
solve the problem."

To do that, the firm has written software that lets you describe a
problem in human language. It then "explodes" the problem into a
large number of related phrases and uses these to search the US
Patent and Trademark Office database for inventions that solve
similar problems. But similar is the operative word, says McCaffrey.
The system is designed to look for analogues to the problem in other
domains. In other words, the software does your lateral thinking for you.

In one example, McCaffrey asked the system to come up with a way to
reduce concussion among American football players. The software
exploded the description of the problem and searched for ways to
reduce energy, absorb energy, exchange forces, lessen momentum,
oppose force, alter direction and repel energy. Results for how to
repel energy led the firm to invent a helmet that contained strong
magnets to repel other players' helmets, lessening the impact of
head clashes. Unfortunately, someone else beat them to the patent
office by a few weeks. But it proved the principle, says McCaffrey.
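Neither firm has published its algorithms, but the "explode and search" idea McCaffrey describes can be caricatured in a few lines of Python. The phrase table, the toy patent corpus and the overlap scoring below are all invented for illustration:

```python
# A hand-made table of related phrases, standing in for whatever
# linguistic expansion the real system performs.
RELATED = {
    "reduce impact": ["absorb energy", "repel energy", "lessen momentum",
                      "oppose force", "alter direction"],
}

# A toy stand-in for the US patent database.
CORPUS = {
    "US-1": "helmet liner to absorb energy during collisions",
    "US-2": "magnetic bumper arranged to repel energy from impacts",
    "US-3": "violin damper to reduce vibration in stringed instruments",
}

def explode(problem):
    """Expand a problem statement into a list of related phrases."""
    phrases = [problem]
    for key, expansions in RELATED.items():
        if key in problem:
            phrases.extend(expansions)
    return phrases

def search(problem):
    """Rank patents by how many exploded phrases they mention."""
    phrases = explode(problem)
    hits = {}
    for pat_id, text in CORPUS.items():
        score = sum(1 for p in phrases if p in text)
        if score:
            hits[pat_id] = score
    return sorted(hits, key=hits.get, reverse=True)

matches = search("reduce impact")
```

A real system would search a full patent database and use semantic matching rather than literal substring overlap, but the shape of the process - expand, search analogous domains, rank - is the same.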

In another case, the software duplicated a ski-maker's recent
innovation. The problem was to find a way to stop skis vibrating so
skiers could go faster and turn more safely. The manufacturer
eventually stumbled upon an answer, but Innovation Accelerator's
software was able to find it quickly. "A violin builder had a method
to produce purer music by reducing vibrations in the instrument,"
says McCaffrey. "The method was applied to the skis and made them
vibrate less."

"Ninety per cent of problems have already been solved in some other
field," says McCaffrey. "You just have to find them." He now plans
to use IBM's supercomputer Watson, which draws inferences from
millions of documents, to help his system understand patents and
technical papers far more deeply.

The technology at Nolan's firm, Iprova, also helps inventors to
think laterally - but with ideas derived from sources far beyond
patent documents. The company is unwilling to reveal exactly how its
Computer Accelerated Invention technique works, but in a 2013
patent, Iprova says it provides clients with "suggested innovation
opportunities" by interrogating not only patent databases and
technical journals, but also blogs, online news sites and social media.

Of particular interest is the fact that it alters its suggestions as
tech trends on the internet change. The result seems to be extremely
productive. "We use our technology to create hundreds of
high-quality inventions per month, which we then communicate to our
customers," says Nolan. "They can then choose to patent them." If
their wide range of customers in the healthcare, automotive and
telecommunications industries is anything to go by, Iprova appears
to have hit paydirt. One of its clients is Philips, a major
technology multinational. Such firms don't add outside expertise to
their R&D teams lightly.

All this means that algorithm-led discovery is likely to be the most
productive inventing process of the future. "Human inventors who
learn to leverage computer-automated innovation will leapfrog peers
who continue to invent the old-fashioned way," says Plotkin.

"Human inventors who use automated innovation will leapfrog those
who don't"

But where do we draw the line between the two? "I don't think there
is a clear separation between human and algorithm," says Eric
Bonabeau, founder of Icosystem, a company based in Cambridge,
Massachusetts. "The key is to find the right division of labour."
Icosystem uses genetic algorithms to optimise everything from
inventions to business processes - an approach Bonabeau calls
"enhanced serendipity".

However, if the division of labour is too much on the computer's
side, it could undermine the patent system itself. Currently a
"person having ordinary skill in the art" must believe that an
invention isn't obvious if it is to be granted a patent. But if
inventors are only tending a computer, the inventions that arise
could be deemed an obvious output of that computer, like hot water
from a kettle.

These concerns have already been raised with drug discovery, says
Gregory Aharonian, a consultant based in San Francisco, who
specialises in patents. "If drug discovery tools become so powerful
that a researcher is just overseeing the tools' activity, does that
make the whole process obvious and so not patentable? Industry could
be shooting itself in the foot by developing such technology."

Another concern is that broad access to smart invention tools could
speed up human technological development. Making the resulting
gadgets may consume Earth's resources all the quicker. McCaffrey is
more optimistic. "I am really impressed with engineers who are
creating ways to improve housing, food storage, crop growth, water
purification and transportation in the developing world," he says.
"I sincerely hope we use this emerging invention assistance
technology to address the really important problems faced by humanity."

Chance favours the prepared mind. If Wilbur Wright hadn't been
thinking about his problem, he may never have had his eureka moment.
"Automating that amounts to making accidental encounters orders of
magnitude more efficient," says Bonabeau. "In other words, outsource
serendipity to the algorithm."

As nature intended

Genetic algorithms tackle the problem of design by mimicking natural
selection. Desired characteristics are described as if they were a
genome, where genes represent parameters such as voltages, focal
lengths, or material densities, say.

The process starts with a more or less random sample of such
genomes, each a possible, albeit suboptimal, design. By combining
parent genomes from this initial gene pool - and introducing
"mutations" - offspring are created with features of each parent
plus potentially beneficial new traits. The fitness of the offspring
for a given task is tested in a simulation. The best are selected
and become the gene pool for the next round of breeding. This
process is repeated again and again until, as with natural
selection, the fittest design survives (see diagram below).
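The loop just described can be sketched in a few lines of Python. Here the "design" is simply a short list of numeric parameters, and the target values are invented for the example; a real system would score each genome in a physical simulation instead:

```python
import random

random.seed(42)  # for reproducibility of this sketch

TARGET = [5.0, 2.5, 1.2]  # hypothetical ideal parameters (e.g. voltages)

def fitness(genome):
    # Higher is better: negative squared error against the target design.
    return -sum((g - t) ** 2 for g, t in zip(genome, TARGET))

def crossover(a, b):
    # Each "gene" of the offspring comes from one parent at random.
    return [random.choice(pair) for pair in zip(a, b)]

def mutate(genome, rate=0.1, scale=0.5):
    # Occasionally perturb a gene to introduce new traits.
    return [g + random.gauss(0, scale) if random.random() < rate else g
            for g in genome]

def evolve(pop_size=50, generations=200):
    # Start from a random gene pool of suboptimal designs.
    pop = [[random.uniform(0, 10) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]          # selection
        children = [mutate(crossover(random.choice(survivors),
                                     random.choice(survivors)))
                    for _ in range(pop_size - len(survivors))]
        pop = survivors + children                # next generation
    return max(pop, key=fitness)

best = evolve()
```

After a couple of hundred generations the best genome sits close to the target, having "discovered" the right parameters without ever being told them directly.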

As well as evolving new designs, evolutionary algorithms can be used
to evolve "parasites" that inflict maximal damage to test safety or
security features. "Nature has been very good and very creative at
finding loopholes in every possible complex system," says Eric
Bonabeau of Icosystem of Cambridge, Massachusetts, who has used this
technique to improve the design of ships for the US navy.

By Paul Marks
Paul Marks is a freelance journalist based in London

[tt] NS 3036: Our number's up: Machines will do maths we'll never understand

26 August 2015

AFTER three years, Shinichi Mochizuki is still waiting. In 2012, the
highly respected mathematician at Kyoto University in Japan
published more than 500 pages of dense maths on his website. It was
the culmination of years of work. Mochizuki's inter-universal
Teichmüller theory described previously uncharted areas of the
mathematical realm and let him prove a long-standing conundrum about
the true nature of numbers, known as the ABC conjecture. Other
mathematicians hailed the result, but warned it would take a lot of
effort to check. Months passed, then years, with no conclusion.

Ask a mathematician what a proof is and they're likely to tell you
it must be absolute - an exhaustive sequence of logical steps
leading from an established starting point to an undeniable
conclusion. But that's not the whole story. You can't just publish
something you believe is true and move on; you have to convince
others that you haven't made any mistakes. For a truly
groundbreaking proof, this can be a frustrating experience.

It turns out that very few mathematicians are willing to put aside
their own work and dedicate the months or even years it would take
to understand a proof like Mochizuki's. And as maths becomes
increasingly fractured into subfields within subfields, the problem
is set to get worse. Some think maths is reaching a limit. Real
breakthroughs can be too complicated for others to check, so many
mathematicians occupy themselves with more attainable but arguably
less significant problems. What's to be done?

For some, the solution lies in employing digital help. A lot of
mathematicians already work alongside computers - they can help
check proofs and free up time for more creative work. But it might
mean changing how maths is done. What's more, computers may one day
make genuine breakthroughs on their own. Will we be able to keep up?
And what does it mean for maths if we can't?

The first major computer-assisted proof was published 40 years ago
and it immediately sparked a row. It was a solution to the
four-colour theorem, a puzzle dating back to the mid-19th century.
The theorem states that all maps need only four colours to make sure
no adjacent regions are coloured the same. You can try it as many
times as you like and find it to be true (print out our puzzle to
have a go). But to prove it, you need to rule out the very
possibility of there being a bizarre map that bucks the trend.

In 1976, Kenneth Appel and Wolfgang Haken did just that. They showed
you could narrow the problem down to 1,936 sub-arrangements that
might require five colours. They then used a computer to check each
of these potential counterexamples, and found that all could indeed
be coloured with just four colours.
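
The checking step is essentially a colouring search over each candidate map. A minimal sketch in Python of that kind of test, using backtracking on a toy adjacency list (an invented five-region map, not one of Appel and Haken's 1,936 configurations):

```python
def colourable(adjacency, colours=4):
    """Backtracking test: can every region get one of `colours` colours
    with no two adjacent regions sharing a colour?"""
    regions = list(adjacency)
    assignment = {}

    def extend(i):
        if i == len(regions):
            return True                      # every region coloured
        r = regions[i]
        for c in range(colours):
            # Try colour c if no already-coloured neighbour uses it.
            if all(assignment.get(n) != c for n in adjacency[r]):
                assignment[r] = c
                if extend(i + 1):
                    return True
                del assignment[r]            # backtrack
        return False

    return extend(0)

# A wheel-like map: a central "hub" region touching a ring of four others.
demo = {
    "hub": ["a", "b", "c", "d"],
    "a": ["hub", "b", "d"],
    "b": ["hub", "a", "c"],
    "c": ["hub", "b", "d"],
    "d": ["hub", "c", "a"],
}
```

Here `colourable(demo)` succeeds with four colours but fails with two, since the hub clashes with every neighbour.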

Job done, or so you'd think. "Mathematicians were reluctant to
accept this as a proof," says Xavier Leroy at the Institute for
Research in Computer Science and Automation in Paris, France. What
if there was an error in the code? "They said: 'We're not going to
recheck your thousand particular cases by hand, we don't trust your
program, and that's not a real proof'."

They had a point. Checking software that tests a mathematical
conjecture can be harder than proving it the traditional way, and a
coding mistake can make the results totally unreliable. "It's very
difficult to check whether a given program does the proper
calculation just by inspection," says Georges Gonthier at Microsoft
Research Cambridge, UK. "The computer goes over the code many times,
so it can amplify even the smallest error."

The trick is to use software to check software. Working with a type
of program known as a proof assistant, mathematicians can verify
that each step of a proof is valid. "It's a fairly interactive
process, you type commands into the tool and then the tool will
spellcheck it, if you like," says Leroy. And what if the proof
assistant has a bug? It's always possible, but these programs tend
to be small and relatively easy to check by hand. "More importantly,
this is code that is run over and over again," says Gonthier. "You
have massive experimental data to show that it is computing
correctly."

However, using proof assistants means embracing a different way of
working. When mathematicians write out proofs, they skip a lot of
the boring details. There is no point in laying out the foundations
of calculus every time, for example. But such shortcuts don't fly
with computers. To work with a proof, they must account for every
logical step, even apparent no-brainers such as why 2 + 2 = 4.
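
In a modern proof assistant such as Lean, even that no-brainer must rest on first principles: numbers are built from zero and a successor operation, and addition is defined by recursion, so both sides of the equation compute down to the same value (a sketch; the one-line proof works precisely because the machine can unfold every step itself):

```lean
-- 2 is succ (succ zero), 4 is succ (succ (succ (succ zero))),
-- and + is defined by recursion on the second argument,
-- so both sides reduce to the same numeral.
example : 2 + 2 = 4 := rfl
```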

Translating human-written proofs into computer-speak is still an
active area of research. A single proof can take years. One early
breakthrough came in 2005, when Gonthier and his colleagues updated
the proof of the four-colour theorem, making every part of it
computer-readable. Previous versions, ever since Appel and Haken's
work in 1976, relied on an area of maths called graph theory, which
draws on our spatial intuition. Thinking about regions on a map
comes naturally to humans, but not computers. The whole thing needed
to be recast in algebraic terms.

"You have to turn everything into algebra, and that forces you to be
more precise," says Gonthier. "That precision ends up paying off."
Gonthier discovered that a part of the proof - widely assumed to be
true because it seemed so obvious - had in fact never been proved at
all because it was deemed not worth the effort. The assumption
turned out to be correct, but it illustrates an added benefit of
extra precision.

Tackling the four-colour theorem was just a warm-up, however. "It
has relatively few uses in the rest of mathematics," says Gonthier.
"It was a brain-teaser." So he turned to the Feit-Thompson theorem,
a large and foundational proof in group theory from the 1960s. For
many years the proof had been built upon and rewritten and it was
eventually published in two books. By formalising it, Gonthier hoped
to demonstrate the computer's capacity to digest a meatier proof
that touched many different branches of mathematics. "The perfect
test case," he says.

It was a success. "In the process they found a couple of minor
mistakes in the books," says Leroy. "They were easily fixable, but
still things that every human mathematician missed." People took
notice, says Gonthier. "I got letters saying how wonderful it was."

In both cases, the result was never in doubt. Gonthier was taking
well-established maths and translating it for computers. But others
have been forced to redo their work in this way just to get their
proofs accepted.

In 1998, Thomas Hales at the University of Pittsburgh, Pennsylvania,
found himself in a similar position to Mochizuki's today. He had
just published a 300-page proof of the Kepler conjecture, a
400-year-old problem that concerns the most efficient ways to stack
a collection of spheres. As with the four-colour theorem, the
possibilities boiled down to variations on a few thousand
arrangements. Hales and his student Samuel Ferguson used a computer
to check them all.
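
The conjecture itself says no packing of spheres beats the grocer's "cannonball" arrangement, whose density has the exact value pi divided by the square root of 18, about 74 per cent. A quick numerical check of that constant (just the headline figure, nothing like Hales's actual proof):

```python
import math

# Density of the face-centred cubic ("cannonball") sphere packing,
# the arrangement the Kepler conjecture says is optimal.
density = math.pi / math.sqrt(18)   # ~0.7405, i.e. ~74% of space filled
```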

Hales submitted his result to the journal Annals of Mathematics.
Five years later, reviewers for the journal announced they were 99
per cent certain that the proof was correct. "Referees in
mathematics generally do not want to check computer code. They don't
see that as part of their job," says Hales.

Convinced he was right, Hales started to rework his proof in 2003,
so that it could be checked with a proof assistant. It essentially
meant starting all over again, he says. He finally completed the
project last year.

Uncharted terrain

Gonthier's and Hales's research has shown that the approach can be
applied to important mathematics. "The big theorems in maths that
we're proving now seemed a distant dream 10 years ago," says Hales.
But despite advances like the proof assistant, proving things with a
computer is still a laborious process. Most mathematicians don't
bother.

That's why some are working in the opposite direction. Rather than
making proof assistants easier to use, Vladimir Voevodsky at the
Institute for Advanced Study in Princeton, New Jersey, wants to make
mathematics more amenable to computers. To do this, he is redefining
its very foundations.

True to type

This is deep stuff. Maths is currently defined in terms of set
theory, essentially the study of collections of objects. For
example, the number zero is defined as the empty set, the collection
of no objects. One is defined as the set containing the empty set.

From there you can build an infinity of numbers. Most mathematicians
don't worry about this on a day-to-day basis. "People are expected
to understand each other without going down to that much detail,"
says Voevodsky.
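
The construction the article alludes to is the standard von Neumann encoding: zero is the empty set, and each successor is the set of all the numbers before it. A small Python sketch of it, using frozensets to stand in for sets:

```python
def von_neumann(n):
    """Encode the natural number n as a pure set, as set theory does:
    0 is the empty set, and the successor of k is k together with {k}."""
    num = frozenset()                  # 0 = {}
    for _ in range(n):
        num = num | frozenset([num])   # k + 1 = k ∪ {k}
    return num
```

So `von_neumann(1)` is the set containing the empty set, and the number 3 literally contains 0, 1 and 2 as elements, which is how "an infinity of numbers" grows out of nothing.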

Not so for computers, and that's a problem. There are multiple ways
to define certain mathematical objects in terms of sets. For us,
that doesn't matter, but if two computer proofs use different
definitions for the same thing, they will be incompatible. "We
cannot compare the results, because at the core they are based on
two different things," says Voevodsky. "The existing foundations of
maths don't work very well if you want to get everything in a very
precise form."

Voevodsky's alternative approach swaps sets for types - a stricter
way of defining mathematical objects in which every concept has
exactly one definition. Proofs built with types can also form types
themselves, which isn't the case with sets. This lets mathematicians
formulate their ideas with a proof assistant directly, rather than
having to translate them later. In 2013 Voevodsky and colleagues
published a book explaining the principles behind the new
foundations. In a reversal of the norm, they wrote the book with a
proof assistant and then "unformalised" it to produce something more
readable.

This backwards working changes the way mathematicians think, says
Gonthier. "The book is entirely written in non-formalised prose, but
if you have any kind of experience with using the computer system,
you quickly realise that the prose closely reflects what is going on
in the formal system."

It also allows much closer collaboration between large groups of
mathematicians, because they don't have to constantly check each
other's work. "They've really started to popularise the idea that
proof assistants can be good for the working mathematician," says
Leroy. "That's a really exciting development."

And it may be just the beginning. By making maths easier for
computers to understand, Voevodsky's redefinition might take us into
new territory. As he sees it, mathematics is split into four
quadrants (see chart). Applied maths - modelling the airflow over a
wing, for example - involves high complexity but low abstraction.
Pure maths, the kind of pen and paper maths that is far removed from
our everyday lives, involves low complexity but high abstraction.
And school-level maths is neither complex nor abstract. But what
lies in that fourth quadrant?

"It is very difficult at the present to go into the high levels of
complexity and abstraction, because it just doesn't fit into our
heads very well," says Voevodsky. "It somehow requires abilities
that we don't possess." By working with computers, perhaps humans
could access this fourth mathematical realm. We could prove bigger,
bolder and more abstract problems than ever before, pushing our
mastery of maths to ultimate heights.

Or perhaps we'll be left behind. Last year Alexei Lisitsa and Boris
Konev at the University of Liverpool, UK, published a
computer-assisted proof so long that it totalled 13 gigabytes,
roughly the size of Wikipedia. Each line of the proof is readable,
but for anyone to go through the entire result would take several
tedious lifetimes.

The pair have since optimised their code and reduced the proof to
800 megabytes - a big improvement, but still impossible to digest.
"From a human viewpoint, there's not much difference," says Lisitsa.
Even if you did devote your life to reading something like this, it
would be like studying a photograph pixel-by-pixel, never seeing the
larger picture. "You cannot grasp the idea behind it."

Although it is on a far grander scale, the situation is similar to
the original proof of the four-colour theorem, where mathematicians
could not be sure an exhaustive computer search was correct. "We
still don't know why the result holds true," says Lisitsa. "It could
be a limit of human understanding, because the objects are so huge."

Doron Zeilberger of Rutgers University in New Jersey thinks
there will even come a time when human mathematicians will no longer
be able to contribute. "For the next hundred years humans will still
be needed as coaches to guide computers," he says. But after that?
"They could still do it as an intellectual sport, and play each
other like human chess players still do today, even though they are
much inferior to machines."

Zeilberger is an extreme case. He has listed his computer, nicknamed
Shalosh B. Ekhad, as a co-author for decades and thinks humans
should put pen and paper aside to focus on educating our machines.
"The most optimal use of a mathematician's time is knowledge
transfer," he says. "Teach computers all their tricks and let
computers take it from there."

Spiritual discipline

But most mathematicians bristle at the idea of software that churns
out proofs beyond human comprehension. "The idea that computers are
going to replace mathematicians is misplaced," says Gonthier.

Besides, computer mathematicians would risk churning out an
accelerating stream of unread papers. As it stands, scientific
results often fail to garner the recognition they deserve, but the
problem is particularly marked for maths. In 2014 there were more
than 2000 maths papers posted to the online repository arXiv
each month, more than in any other discipline, and the rate is
increasing. "If you have too many new results that keep appearing,
many just go unnoticed," says Leroy. Maybe we could at least create
software to read everything and help humans keep up with the
important bits, he says.

Gonthier feels this is missing the point: "Mathematics is not as
much about finding proofs as it is about finding concepts." The
nature of maths itself is under scrutiny. If humans do not
understand a proof, then it doesn't count as maths, says Voevodsky.
"The future of mathematics is more a spiritual discipline than an
applied art. One of the important functions of mathematics is the
development of the human mind."

"To make mathematical proof easier for computers, we must redefine
maths itself"

All of this may be too late for Shinichi Mochizuki, however. His
work is so advanced, so far removed from mainstream maths, that
having a computer check it would be far more difficult than coming
up with the original proof. "I don't even know if it would be
possible to formalise what he's done," says Hales. For now, humans
remain the ultimate judge - even if we don't always trust ourselves.

Leader: "Smart machines may discover things we can't, but we still
matter" [added at the end]

By Jacob Aron
Jacob Aron is a reporter for New Scientist

Tuesday, September 1, 2015

[tt] NYT: Eric Betzig's Life Over the Microscope

Eric Betzig's Life Over the Microscope
by Claudia Dreifus

In October 2014, the Nobel Prize in Chemistry went to three
scientists for their work developing a new class of microscopes that
may well transform biological research by permitting researchers to
observe cellular processes as they happen.

One of the winners was Eric Betzig, 55, a group leader at the
Janelia Research Campus of the Howard Hughes Medical Institute. On
Thursday, the journal Science published a paper by him and his
colleagues describing a microscope powerful enough to observe living
cells with unprecedented detail--a goal he and others have spent
decades pursuing.

I spoke with Dr. Betzig recently for three hours at his laboratory
and office in Ashburn, Va., and again later by telephone. A
condensed and edited version of the conversations follows.

Q. What makes these microscopes different from those most
researchers use in their laboratories today?

A. The big problem with the standard optical microscope--that's the
one in most biology labs--is that it doesn't magnify enough to see
individual molecules inside a living cell. You can see a lot of
detail, but it's 100 times too coarse for single molecules. With the
more sophisticated electron microscope, you can get down to the
molecular level. But to do that, you have to bombard your sample
with so many electrons that you essentially fry it. This means you
can't see a living process in real time.

What I and others have been trying to do is create microscopes that
can image the building blocks within a cell. The goal is to link the
fields of molecular and cellular biology, and thus unravel the
mystery of how inanimate molecules come together to create life.

Are you a biologist by trade?

You know, I'm not comfortable with labels. I'm trained in physics
but don't think of myself as a physicist. I have a Nobel Prize in
Chemistry, but I certainly don't know any chemistry. I work all the
time with biologists, but any biology I have is skin-deep. If there
is one way I characterize myself, it's as an inventor. My father is
that, too. He spent his life inventing and making tools for the
automotive industry. I grew up around inventors.

When did the quest to build this microscope begin for you?

I started working on this in 1982 as a grad student at Cornell. By
1992, I had my own lab at Bell Laboratories. There, I built what's
called a near-field microscope that worked, to a degree. But this
instrument was still too difficult, slow and damaging to samples to
be useful for biological research on live tissue. I became
frustrated and quit both it and Bell Labs in 1994.

Shortly after that, two experiments I'd done at Bell with this
machine sparked the idea that I published in 1995 and that would
eventually lead to photoactivated localization microscopy--PALM--
10 years later.

Ten years? Why did it take that long?

Well, for one thing, I went through a very depressing period after
leaving Bell. My then-wife and I had just had a baby. I stayed home
as a house husband, trying to figure out what to do next. Should I
go to med school? Become a gourmet chef? I didn't have any plan
except to stop making microscopes.

Astonishingly, a couple of months after quitting, an insight came to
me about how to make the microscope finally work. It came while I
was pushing my child's stroller. The idea involved isolating
individual molecules and measuring their distance. I wrote this up
in a three-page paper, which would later be noted by the Nobel
Committee as one reason for giving me the prize.

Funny thing about that paper: It wasn't much cited, probably only a
hundred times in 20 years. That tells you something about the value
of citations as a metric of impact.

For the next eight years, I worked in private industry, and I
discovered it was even harder to succeed there than in science.

By 2004, I was in another personal crisis, and I looked up my best
friend from the old days at Bell, Harald Hess. Harald had quit Bell
a few years after I did. He was now working for a company that made
equipment to test disk drives and was feeling unsatisfied. So we
began trying to figure things out by taking trips to Yosemite and
Joshua Tree and talking about what the hell we wanted to do with our
lives.

I started reading up on all the developments in microscopy of the
past decade. And that led us to build, in Harald's living room, the
microscope I'd envisioned while pushing my baby's stroller.

Were you pleased with what you built?

To a point. PALM had the limitations I mentioned earlier. By 2008, I
became bored and frustrated with it, and started working on other
types of microscopes. By then, I was at Howard Hughes and could work
on anything that interested me. Here I developed the lattice light
sheet microscope, which can image living cells at unprecedented
speeds and often with no damage. But its resolution level wasn't any
better than that of conventional microscopes.

I also worked on a highly advanced SIM microscope, begun by my
Janelia colleague Mats Gustafsson, which would allow us to
look at a sample in high resolution and at high speed. Mats occupied
the office next to mine, and he was--I don't say this lightly--
one of the most brilliant people I've met. Unfortunately, he was
diagnosed with a brain tumor after falling off his bike on the way
to work in 2009, and died in 2011.

When Mats died, there was still much work to be done to make his
high-resolution form of SIM compatible with live imaging. After his
death, I inherited much of his instrumentation and a few of his
people. Since then, we've been working to make his higher-resolution
instrument fast and noninvasive enough for live cells.

We believe we've done it. The result is a paper published in
Science. We finally have the tool to understand the cell and the
dynamics of its full complexity.

How has the Nobel affected your life?

It's disrupted my happy life quite a lot. I hate traveling, and
you're constantly asked to give talks. I'm in a second marriage. I
have young children, ages 2 and 5. The emails, the travel have kept
me away from the two things I love the most: my family and my work.
However, this is a problem of my own making. I'm learning to say
"no" more often.

I mean, I'm a guy who's always been insecure, O.K.? You do feel more
confident. On the other hand, insecurity always made me productive.
These days, I sometimes want to slap myself and say: "You gotta keep
pushing. This isn't the end. This is a chapter."


Hooray for this!


Lisa Feldman Barrett, a professor of psychology at Northeastern
University, is the author of the forthcoming book "How Emotions Are
Made: The New Science of the Mind and Brain."

Boston--IS psychology in the midst of a research crisis?

An initiative called the Reproducibility Project at the University
of Virginia recently reran 100 psychology experiments and found that
over 60 percent of them failed to replicate--that is, their
findings did not hold up the second time around. The results,
published last week in Science, have generated alarm (and in some
cases, confirmed suspicions) that the field of psychology is in poor
health.

But the failure to replicate is not a cause for alarm; in fact, it
is a normal part of how science works.

Suppose you have two well-designed, carefully run studies, A and B,
that investigate the same phenomenon. They perform what appear to be
identical experiments, and yet they reach opposite conclusions.
Study A produces the predicted phenomenon, whereas Study B does not.
We have a failure to replicate.

Does this mean that the phenomenon in question is necessarily
illusory? Absolutely not. If the studies were well designed and
executed, it is more likely that the phenomenon from Study A is true
only under certain conditions. The scientist's job now is to figure
out what those conditions are, in order to form new and better
hypotheses to test.

A number of years ago, for example, scientists conducted an
experiment on fruit flies that appeared to identify the gene
responsible for curly wings. The results looked solid in the tidy
confines of the lab, but out in the messy reality of nature, where
temperatures and humidity varied widely, the gene turned out not to
reliably have this effect. In a simplistic sense, the experiment
"failed to replicate." But in a grander sense, as the evolutionary
biologist Richard Lewontin has noted, "failures" like this helped
teach biologists that a single gene produces different
characteristics and behaviors, depending on the context.

Similarly, when physicists discovered that subatomic particles
didn't obey Newton's laws of motion, they didn't cry out that
Newton's laws had "failed to replicate." Instead, they realized that
Newton's laws were valid only in certain contexts, rather than being
universal, and thus the science of quantum mechanics was born.

In psychology, we find many phenomena that fail to replicate if we
change the context. One of the most famous is called "fear
learning," which has been used to explain anxiety disorders like
post-traumatic stress. Scientists place a rat into a small box with
an electrical grid on the floor. They play a loud tone and then, a
moment later, give the rat an electrical shock. The shock causes the
rat to freeze and its heart rate and blood pressure to rise. The
scientists repeat this process many times, pairing the tone and the
shock, with the same results. Eventually, they play the tone without
the shock, and the rat responds in the same way, as if expecting the
shock.

Originally this "fear learning" was assumed to be a universal law,
but then other scientists slightly varied the context and the rats
stopped freezing. For example, if you restrain the rat during the
tone (which shouldn't matter if the rat is going to freeze anyway),
its heart rate goes down instead of up. And if the cage design
permits, the rat will run away rather than freeze.

These failures to replicate did not mean that the original
experiments were worthless. Indeed, they led scientists to the
crucial understanding that a freezing rat was actually responding to
the uncertainty of threat, which happened to be engendered by
particular combinations of tone, cage and shock.

Much of science still assumes that phenomena can be explained with
universal laws and therefore context should not matter. But this is
not how the world works. Even a simple statement like "the sky is
blue" is true only at particular times of day, depending on the mix
of molecules in the air as they reflect and scatter light, and on
the viewer's experience of color.

Psychologists are usually well attuned to the importance of context.
In our experiments, we take great pains to avoid any irregularities
or distractions that might affect the results. But when it comes to
replication, psychologists and their critics often seem to forget
the powerful and subtle effects of context. They ask simply, "Did
the experiment work or not?" rather than considering a failure to
replicate as a valuable scientific clue.

As with any scientific field, psychology has some published studies
that were conducted sloppily, and a few bad eggs who have falsified
their data. But contrary to the implication of the Reproducibility
Project, there is no replication crisis in psychology. The "crisis"
may simply be the result of a misunderstanding of what science is.

Science is not a body of facts that emerge, like an orderly string
of light bulbs, to illuminate a linear path to universal truth.
Rather, science (to paraphrase Henry Gee, an editor at Nature) is a
method to quantify doubt about a hypothesis, and to find the
contexts in which a phenomenon is likely. Failure to replicate is
not a bug; it is a feature. It is what leads us along the path--
the wonderfully twisty path--of scientific discovery.