Wednesday, February 27, 2013

Why everyone can be creative

How To Be Creative
The image of the 'creative type' is a myth. Jonah Lehrer on why anyone can innovate—and why a hot shower, a cold beer or a trip to your colleague's desk might be the key to your next big idea.
Creativity can seem like magic. We look at people like Steve Jobs and Bob Dylan, and we conclude that they must possess supernatural powers denied to mere mortals like us, gifts that allow them to imagine what has never existed before. They're "creative types." We're not.
The myth of the "creative type" is just that--a myth, argues Jonah Lehrer. In an interview with WSJ's Gary Rosen he explains the evidence suggesting everyone has the potential to be the next Milton Glaser or Yo-Yo Ma.
But creativity is not magic, and there's no such thing as a creative type. Creativity is not a trait that we inherit in our genes or a blessing bestowed by the angels. It's a skill. Anyone can learn to be creative and to get better at it. New research is shedding light on what allows people to develop world-changing products and to solve the toughest problems. A surprisingly concrete set of lessons has emerged about what creativity is and how to spark it in ourselves and our work.
The science of creativity is relatively new. Until the Enlightenment, acts of imagination were always equated with higher powers. Being creative meant channeling the muses, giving voice to the gods. ("Inspiration" literally means "breathed upon.") Even in modern times, scientists have paid little attention to the sources of creativity.
But over the past decade, that has begun to change. Imagination was once thought to be a single thing, separate from other kinds of cognition. The latest research suggests that this assumption is false. It turns out that we use "creativity" as a catchall term for a variety of cognitive tools, each of which applies to particular sorts of problems and is coaxed to action in a particular way.
It isn't a trait that we inherit in our genes or a blessing bestowed on us by the angels. It's a skill that anyone can learn and work to improve.
Does the challenge that we're facing require a moment of insight, a sudden leap in consciousness? Or can it be solved gradually, one piece at a time? The answer often determines whether we should drink a beer to relax or hop ourselves up on Red Bull, whether we take a long shower or stay late at the office.
The new research also suggests how best to approach the thorniest problems. We tend to assume that experts are the creative geniuses in their own fields. But big breakthroughs often depend on the naive daring of outsiders. For prompting creativity, few things are as important as time devoted to cross-pollination with fields outside our areas of expertise.
Let's start with the hardest problems, those challenges that at first blush seem impossible. Such problems are typically solved (if they are solved at all) in a moment of insight.
Consider the case of Arthur Fry, an engineer at 3M in the paper products division. In the winter of 1974, Mr. Fry attended a presentation by Spencer Silver, a chemist working on adhesives. Mr. Silver had developed an extremely weak glue, a paste so feeble it could barely hold two pieces of paper together. Like everyone else in the room, Mr. Fry patiently listened to the presentation and then failed to come up with any practical applications for the compound. What good, after all, is a glue that doesn't stick?
On a frigid Sunday morning, however, the paste would re-enter Mr. Fry's thoughts, albeit in a rather unlikely context. He sang in the church choir and liked to put little pieces of paper in the hymnal to mark the songs he was supposed to sing. Unfortunately, the little pieces of paper often fell out, forcing Mr. Fry to spend the service frantically thumbing through the book, looking for the right page. It seemed like an unfixable problem, one of those ordinary hassles that we're forced to live with.
But then, during a particularly tedious sermon, Mr. Fry had an epiphany. He suddenly realized how he might make use of that weak glue: It could be applied to paper to create a reusable bookmark! Because the adhesive was barely sticky, it would adhere to the page but wouldn't tear it when removed. That revelation in the church would eventually result in one of the most widely used office products in the world: the Post-it Note.
Mr. Fry's invention was a classic moment of insight. Though such events seem to spring from nowhere, as if the cortex is surprising us with a breakthrough, scientists have begun studying how they occur. They do this by giving people "insight" puzzles, like the one that follows, and watching what happens in the brain:
A man has married 20 women in a small town. All of the women are still alive, and none of them is divorced. The man has broken no laws. Who is the man?
If you solved the question, the solution probably came to you in an incandescent flash: The man is a priest. Research led by Mark Beeman and John Kounios has identified where that flash probably came from. In the seconds before the insight appears, a brain area called the anterior superior temporal gyrus (aSTG) exhibits a sharp spike in activity. This region, located on the surface of the right hemisphere, excels at drawing together distantly related information, which is precisely what's needed when working on a hard creative problem.
Interestingly, Mr. Beeman and his colleagues have found that certain factors make people much more likely to have an insight, better able to detect the answers generated by the aSTG. For instance, exposing subjects to a short, humorous video—the scientists use a clip of Robin Williams doing stand-up—boosts the average success rate by about 20%.
Alcohol also works. Earlier this year, researchers at the University of Illinois at Chicago compared performance on insight puzzles between sober and intoxicated students. The scientists gave the subjects a battery of word problems known as remote associates, in which people have to find one additional word that goes with a triad of words. Here's a sample problem:
Pine Crab Sauce
In this case, the answer is "apple." (The compound words are pineapple, crab apple and apple sauce.) Drunk students solved nearly 30% more of these word problems than their sober peers.
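To make the structure of those remote-associate puzzles concrete, here is a small illustrative sketch, not taken from the study itself; the word list and function names are invented for illustration. It checks whether a candidate word forms a familiar compound or two-word phrase with every cue in the triad:

    # Illustrative sketch of the remote-associates puzzle structure.
    # The compound list is a tiny, made-up sample, not the actual test materials.

    KNOWN_COMPOUNDS = {
        "pineapple", "crab apple", "apple sauce",
        "pine tree", "crab cake", "hot sauce",
    }

    def forms_compound(cue: str, candidate: str) -> bool:
        """True if cue and candidate combine (either order, joined or spaced)
        into something on the compound list."""
        pairs = {
            cue + candidate, candidate + cue,
            f"{cue} {candidate}", f"{candidate} {cue}",
        }
        return any(p in KNOWN_COMPOUNDS for p in pairs)

    def solves_triad(triad, candidate):
        """A candidate solves the puzzle only if it pairs with all three cues."""
        return all(forms_compound(cue, candidate) for cue in triad)

    if __name__ == "__main__":
        print(solves_triad(["pine", "crab", "sauce"], "apple"))  # True
        print(solves_triad(["pine", "crab", "sauce"], "cake"))   # False

Human solvers, of course, do this pairing implicitly against a lifetime of vocabulary rather than a hand-built list, which is presumably what makes the puzzles a useful probe of loose association.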
What explains the creative benefits of relaxation and booze? The answer involves the surprising advantage of not paying attention. Although we live in an age that worships focus—we are always forcing ourselves to concentrate, chugging caffeine—this approach can inhibit the imagination. We might be focused, but we're probably focused on the wrong answer.
And this is why relaxation helps: It isn't until we're soothed in the shower or distracted by the stand-up comic that we're able to turn the spotlight of attention inward, eavesdropping on all those random associations unfolding in the far reaches of the brain's right hemisphere. When we need an insight, those associations are often the source of the answer.
This research also explains why so many major breakthroughs happen in the unlikeliest of places, whether it's Archimedes in the bathtub or the physicist Richard Feynman scribbling equations in a strip club, as he was known to do. It reveals the wisdom of Google putting ping-pong tables in the lobby and confirms the practical benefits of daydreaming. As Einstein once declared, "Creativity is the residue of time wasted."
Of course, not every creative challenge requires an epiphany; a relaxing shower won't solve every problem. Sometimes, we just need to keep on working, resisting the temptation of a beer-fueled nap.
There is nothing fun about this kind of creativity, which consists mostly of sweat and failure. It's the red pen on the page and the discarded sketch, the trashed prototype and the failed first draft. Nietzsche referred to this as the "rejecting process," noting that while creators like to brag about their big epiphanies, their everyday reality was much less romantic. "All great artists and thinkers are great workers," he wrote.
This relentless form of creativity is nicely exemplified by the legendary graphic designer Milton Glaser, who engraved the slogan "Art is Work" above his office door. Mr. Glaser's most famous design is a tribute to this work ethic. In 1975, he accepted an intimidating assignment: to create a new ad campaign that would rehabilitate the image of New York City, which at the time was falling apart.
Mr. Glaser began by experimenting with fonts, laying out the tourist slogan in a variety of friendly typefaces. After a few weeks of work, he settled on a charming design, with "I Love New York" in cursive, set against a plain white background. His proposal was quickly approved. "Everybody liked it," Mr. Glaser says. "And if I were a normal person, I'd stop thinking about the project. But I can't. Something about it just doesn't feel right."
So Mr. Glaser continued to ruminate on the design, devoting hours to a project that was supposedly finished. And then, after another few days of work, he was sitting in a taxi, stuck in midtown traffic. "I often carry spare pieces of paper in my pocket, and so I get the paper out and I start to draw," he remembers. "And I'm thinking and drawing and then I get it. I see the whole design in my head. I see the typeface and the big round red heart smack dab in the middle. I know that this is how it should go."
The logo that Mr. Glaser imagined in traffic has since become one of the most widely imitated works of graphic art in the world. And he only discovered the design because he refused to stop thinking about it.
But this raises an obvious question: If different kinds of creative problems benefit from different kinds of creative thinking, how can we ensure that we're thinking in the right way at the right time? When should we daydream and go for a relaxing stroll, and when should we keep on sketching and toying with possibilities?
The good news is that the human mind has a surprising natural ability to assess the kind of creativity we need. Researchers call these intuitions "feelings of knowing," and they occur when we suspect that we can find the answer, if only we keep on thinking. Numerous studies have demonstrated that, when it comes to problems that don't require insights, the mind is remarkably adept at assessing the likelihood that a problem can be solved—knowing whether we're getting "warmer" or not, without knowing the solution.
This ability to calculate progress is an important part of the creative process. When we don't feel that we're getting closer to the answer—we've hit the wall, so to speak—we probably need an insight. If there is no feeling of knowing, the most productive thing we can do is forget about work for a while. But when those feelings of knowing are telling us that we're getting close, we need to keep on struggling.
Of course, both moment-of-insight problems and nose-to-the-grindstone problems assume that we have the answers to the creative problems we're trying to solve somewhere in our heads. They're both just a matter of getting those answers out. Another kind of creative problem, though, is when you don't have the right kind of raw material kicking around in your head. If you're trying to be more creative, one of the most important things you can do is increase the volume and diversity of the information to which you are exposed.
Steve Jobs famously declared that "creativity is just connecting things." Although we think of inventors as dreaming up breakthroughs out of thin air, Mr. Jobs was pointing out that even the most far-fetched concepts are usually just new combinations of stuff that already exists. Under Mr. Jobs's leadership, for instance, Apple didn't invent MP3 players or tablet computers—the company just made them better, adding design features that were new to the product category.
And it isn't just Apple. The history of innovation bears out Mr. Jobs's theory. The Wright Brothers transferred their background as bicycle manufacturers to the invention of the airplane; their first flying craft was, in many respects, just a bicycle with wings. Johannes Gutenberg transformed his knowledge of wine presses into a printing machine capable of mass-producing words. Or look at Google: Larry Page and Sergey Brin came up with their famous search algorithm by applying the ranking method used for academic articles (more citations equals more influence) to the sprawl of the Internet.
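As a rough illustration of that "more citations equals more influence" idea, here is a minimal sketch of an iterative link-ranking computation in the same spirit. The three-page link graph and the function name are invented for illustration; this is a toy, not Google's actual algorithm.

    # Minimal sketch of citation-style ranking: each page's score flows to the
    # pages it links to, so pages cited by other well-cited pages rank highest.

    def rank(links, damping=0.85, iterations=50):
        """links maps each page to the list of pages it points to."""
        pages = list(links)
        score = {p: 1.0 / len(pages) for p in pages}
        for _ in range(iterations):
            new = {p: (1.0 - damping) / len(pages) for p in pages}
            for page, outgoing in links.items():
                if not outgoing:
                    continue  # pages with no outgoing links pass nothing on (simplification)
                share = damping * score[page] / len(outgoing)
                for target in outgoing:
                    new[target] += share
            score = new
        return score

    if __name__ == "__main__":
        toy_web = {
            "A": ["B", "C"],   # A cites B and C
            "B": ["C"],        # B cites C
            "C": ["A"],        # C cites A
        }
        for page, s in sorted(rank(toy_web).items(), key=lambda kv: -kv[1]):
            print(page, round(s, 3))

In this toy graph the most heavily cited page ends up with the highest score, which is the academic-citation logic transplanted to the web.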
How can people get better at making these kinds of connections? Mr. Jobs argued that the best inventors seek out "diverse experiences," collecting lots of dots that they later link together. Instead of developing a narrow specialization, they study, say, calligraphy (as Mr. Jobs famously did) or hang out with friends in different fields. Because they don't know where the answer will come from, they are willing to look for the answer everywhere.
Recent research confirms Mr. Jobs's wisdom. The sociologist Martin Ruef, for instance, analyzed the social and business relationships of 766 graduates of the Stanford Business School, all of whom had gone on to start their own companies. He found that those entrepreneurs with the most diverse friendships scored three times higher on a metric of innovation. Instead of getting stuck in the rut of conformity, they were able to translate their expansive social circle into profitable new concepts.
Many of the most innovative companies encourage their employees to develop these sorts of diverse networks, interacting with colleagues in totally unrelated fields. Google hosts an internal conference called Crazy Search Ideas—a sort of grown-up science fair with hundreds of posters from every conceivable field. At 3M, engineers are typically rotated to a new division every few years. Sometimes, these rotations bring big payoffs, such as when 3M realized that the problem of laptop battery life was really a problem of energy used up too quickly for illuminating the screen. 3M researchers applied their knowledge of see-through adhesives to create an optical film that focuses light outward, producing a screen that was 40% more efficient.
Such solutions are known as "mental restructurings," since the problem is only solved after someone asks a completely new kind of question. What's interesting is that expertise can inhibit such restructurings, making it harder to find the breakthrough. That's why it's important not just to bring new ideas back to your own field, but to actually try to solve problems in other fields—where your status as an outsider, and ability to ask naive questions, can be a tremendous advantage.
This principle is at work daily on InnoCentive, a crowdsourcing website for difficult scientific questions. The structure of the site is simple: Companies post their hardest R&D problems, attaching a monetary reward to each "challenge." The site features problems from hundreds of organizations in eight different scientific categories, from agricultural science to mathematics. The challenges on the site are incredibly varied and include everything from a multinational food company looking for a "Reduced Fat Chocolate-Flavored Compound Coating" to an electronics firm trying to design a solar-powered computer.
The most impressive thing about InnoCentive, however, is its effectiveness. In 2007, Karim Lakhani, a professor at the Harvard Business School, began analyzing hundreds of challenges posted on the site. According to Mr. Lakhani's data, nearly 30% of the difficult problems posted on InnoCentive were solved within six months. Sometimes, the problems were solved within days of being posted online. The secret was outsider thinking: The problem solvers on InnoCentive were most effective at the margins of their own fields. Chemists didn't solve chemistry problems; they solved molecular biology problems. And vice versa. While these people were close enough to understand the challenge, they weren't so close that their knowledge held them back, causing them to run into the same stumbling blocks that held back their more expert peers.
It's this ability to attack problems as a beginner, to let go of all preconceptions and fear of failure, that's the key to creativity.
The composer Bruce Adolphe first met Yo-Yo Ma at the Juilliard School in New York City in 1970. Mr. Ma was just 15 years old at the time (though he'd already played for J.F.K. at the White House). Mr. Adolphe had just written his first cello piece. "Unfortunately, I had no idea what I was doing," Mr. Adolphe remembers. "I'd never written for the instrument before."
Mr. Adolphe had shown a draft of his composition to a Juilliard instructor, who informed him that the piece featured a chord that was impossible to play. Before Mr. Adolphe could correct the music, however, Mr. Ma decided to rehearse the composition in his dorm room. "Yo-Yo played through my piece, sight-reading the whole thing," Mr. Adolphe says. "And when that impossible chord came, he somehow found a way to play it."
Mr. Adolphe told Mr. Ma what the professor had said and asked how he had managed to play the impossible chord. They went through the piece again, and when Mr. Ma came to the impossible chord, Mr. Adolphe yelled "Stop!" They looked at Mr. Ma's left hand—it was contorted on the fingerboard, in a position that was nearly impossible to hold. "You're right," said Mr. Ma, "you really can't play that!" Yet, somehow, he did.
When Mr. Ma plays today, he still strives for that state of the beginner. "One needs to constantly remind oneself to play with the abandon of the child who is just learning the cello," Mr. Ma says. "Because why is that kid playing? He is playing for pleasure."
Creativity is a spark. It can be excruciating when we're rubbing two rocks together and getting nothing. And it can be intensely satisfying when the flame catches and a new idea sweeps around the world.
For the first time in human history, it's becoming possible to see how to throw off more sparks and how to make sure that more of them catch fire. And yet, we must also be honest: The creative process will never be easy, no matter how much we learn about it. Our inventions will always be shadowed by uncertainty, by the serendipity of brain cells making a new connection.
Every creative story is different. And yet every creative story is the same: There was nothing, now there is something. It's almost like magic.
—Adapted from "Imagine: How Creativity Works" by Jonah Lehrer, to be published by Houghton Mifflin Harcourt on March 19. Copyright © 2012 by Jonah Lehrer.
10 Quick Creativity Hacks
1. Color Me Blue
A 2009 study found that subjects solved twice as many insight puzzles when surrounded by the color blue, since it leads to more relaxed and associative thinking. Red, on the other hand, makes people more alert and aware, so it is a better backdrop for solving analytic problems.
2. Get Groggy
According to a study published last month, people at their least alert time of day—think of a night person early in the morning—performed far better on various creative puzzles, sometimes improving their success rate by 50%. Grogginess has creative perks.
3. Daydream Away
Research led by Jonathan Schooler at the University of California, Santa Barbara, has found that people who daydream more score higher on various tests of creativity.
4. Think Like A Child
When subjects are told to imagine themselves as 7-year-olds, they score significantly higher on tests of divergent thinking, such as trying to invent alternative uses for an old car tire.
5. Laugh It Up
When people are exposed to a short video of stand-up comedy, they solve about 20% more insight puzzles.
6. Imagine That You Are Far Away
Research conducted at Indiana University found that people were much better at solving insight puzzles when they were told that the puzzles came from Greece or California, and not from a local lab.
7. Keep It Generic
One way to increase problem-solving ability is to change the verbs used to describe the problem. When the verbs are extremely specific, people think in narrow terms. In contrast, the use of more generic verbs—say, "moving" instead of "driving"—can lead to dramatic increases in the number of problems solved.
8. Work Outside the Box
According to a new study, volunteers performed significantly better on a standard test of creativity when they were seated outside a 5-foot-square workspace, perhaps because they internalized the metaphor of thinking outside the box. The lesson? Your cubicle is holding you back.
9. See the World
According to research led by Adam Galinsky, students who have lived abroad were much more likely to solve a classic insight puzzle. Their experience of another culture endowed them with a valuable open-mindedness. This effect also applies to professionals: Fashion-house directors who have lived in many countries produce clothing that their peers rate as far more creative.
10. Move to a Metropolis
Physicists at the Santa Fe Institute have found that moving from a small city to one that is twice as large leads inventors to produce, on average, about 15% more patents.
—Jonah Lehrer
A version of this article appeared Mar. 10, 2012, on page C1 in some U.S. editions of The Wall Street Journal, with the headline: How To Be Creative.

Does biology make us liars?

Biologists since E. O. Wilson are finding ever more evidence that what we used to call philosophy or culture is simply a set of biological processes. http://www.aldaily.com has a link to an article about the biological foundations of self-deception. The whole article (from The New Republic) is interesting, but here are the three paragraphs that I liked best.
--ww--

"Can we at least measure the price of our own self-deceit? Trivers offers the interesting suggestion that the strain associated with lying, even unconsciously, takes a toll on the immune system. The reason for this is that immunity is expensive, requiring the burning of energy and consumption of much protein. For the same reason, the immune system has a reservoir that can be drawn upon for other purposes—often, Trivers claims, at the mere “flick of a molecular switch.” A decision has to be made: attack another male for the chance of sex with a female, or invest internally to fight a parasite? Quickly the body apportions its resources, diverting from the immune reservoirs to the fighting mode. No surprise, then, that high testosterone levels are associated with lower immune response, or that disease is associated with lower testosterone levels (the body is shifting investment to the immune system), or that marriage, which lowers testosterone levels in men, is associated with increased lifespan. Monogamy, in other words, can be seen as a disease that improves our health.

"The salient point is that choices involving psychology, of greater or lesser degrees of self-deception, in turn affect our immune systems. Trivers cites studies that show that people who write about their trauma can improve their immune function; indeed, emotional disclosure is associated with consistent immune benefits—this is one of the reasons that going to a shrink might make you feel better. The converse holds as well: HIV-positive patients who deny that they are infected show lower immune function than those who admit it, and tend to suffer from more rapid progression of their disease. Truth seems to be healthy for us. On those grounds alone, Trivers writes, “don’t ask, don’t tell” should be considered an immunological disaster.

...

"Two groups were randomly assigned, and members of the first group were asked to write for five minutes about a situation in which they felt powerful while candy was being distributed among them; at the same time members of the other group were asked to write about a situation of powerlessness and were only allowed to request candy but not to be given any. When all the subjects were asked to snap the fingers of their right hand five times and quickly write the letter E on their forehead, those who had been primed to feel powerless were three times more likely to write the E so that others could read it rather than backward, from their own perspective. Further study showed that the power-primed group was significantly less able to discriminate among human facial expressions associated with fear, anger, sadness, and happiness. It would appear that the ability to apprehend the world correctly, as well as the ability to empathize, is compromised by the feeling of power.

“The ultimate effect of shielding people from the effects of their folly,” Herbert Spencer once observed, “is to fill the world with fools.”
# # #

The Economist on driverless cars

The driverless road ahead

Carmakers are starting to take autonomous vehicles seriously. Other businesses should too


THE arrival of the mass-produced car, just over a century ago, caused an explosion of business creation. First came the makers of cars and all the parts that go into them. Then came the garages, filling stations and showrooms. Then all sorts of other car-dependent businesses: car parks, motels, out-of-town shopping centres. Commuting by car allowed suburbs to spread, making fortunes for prescient housebuilders and landowners. Roadbuilding became a far bigger business, whereas blacksmiths, farriers and buggy-whip makers faded away as America’s horse and mule population fell from 26m in 1915 to 3m in 1960.
Now another revolution on wheels is on the horizon: the driverless car. Nobody is sure when it will arrive. Google, which is testing a fleet of autonomous cars, thinks in maybe a decade, others reckon longer. A report from KPMG and the Centre for Automotive Research in Michigan concludes that it will come “sooner than you think”. And, when it does, the self-driving car, like the ordinary kind, could bring profound change.
Just imagine. It could, for a start, save the motor industry from stagnation. Carmakers are fretting at signs that smartphone-obsessed teenagers these days do not rush to get a driving licence and buy their first car, as their parents did. Their fear is that the long love affair with the car is fading. But once they are spared the trouble and expense of taking lessons and passing a test, young adults might rediscover the joys of the open road. Another worry for the motor industry is that car use seems to be peaking in the most congested cities. Yet automated cars would drive nose-to-tail, increasing the capacity of existing roads; and since they would be able to drop off their passengers and drive away, the lack of parking spaces in town might not matter so much.
Cars have always been about status as well as mobility; many people would still want to own a trophy car. These might not clock up much mileage, so carmakers would have to become more like fashion houses, constantly creating new designs to get people to swap their motors long before they have worn out. But cars that are driverless may not need steering wheels, pedals and other manual controls; and, being virtually crashless (most road accidents are due to human error), their bodies could be made much lighter. So makers would be able to turn out new models quicker and at lower cost. Fresh entrants to carmaking could prove nimbler than incumbents at adapting to this new world.
All these trends will affect the car business. But when mass-produced cars appeared, they had an impact on the whole of society. What might be the equivalent social implications of driverless cars? And who might go the same way as the buggy-whip makers? Electronics and software firms will be among the winners: besides providing all the sensors and computing power that self-driving cars will need, they will enjoy strong demand for in-car entertainment systems, since cars’ occupants will no longer need to keep their eyes on the road. Bus companies might run convoys of self-piloting coaches down the motorways, providing competition for intercity railways. Travelling salesmen might prefer to journey from city to city overnight in driverless Winnebagos packed with creature comforts. So, indeed, might some tourists. If so, they will need fewer hotel rooms.
Cabbies, lorry drivers and all others whose job is to steer a vehicle will have to find other work. The taxi and car-rental businesses might merge into one automated pick-up and drop-off service: GM has already shown a prototype of a two-seater, battery-powered pod that would scuttle about town, with passengers summoning it by smartphone. Supermarkets, department stores and shopping centres might provide these free, to attract customers. Driverless cars will be programmed to obey the law, which means, sadly, the demise of the traffic cop and the parking warden. And since automated cars will reduce the need for parking spaces in town, that will mean less revenue for local authorities and car-park operators.
When people are no longer in control of their cars they will not need driver insurance—so goodbye to motor insurers and brokers. Traffic accidents now cause about 2m hospital visits a year in America alone, so autonomous vehicles will mean much less work for emergency rooms and orthopaedic wards. Roads will need fewer signs, signals, guard rails and other features designed for the human driver; their makers will lose business too. When commuters can work, rest or play while the car steers itself, longer commutes will become more bearable, the suburbs will spread even farther and house prices in the sticks will rise. When self-driving cars can ferry children to and from school, more mothers may be freed to re-enter the workforce. The popularity of the country pub, which has been undermined by strict drink-driving laws, may be revived. And so on.
Getting there from here
All this may sound far-fetched. But the self-driving car is already arriving in dribs and drabs. Cars are on sale that cruise on autopilot, slot themselves into awkward parking spaces and brake automatically to avert collisions. Motorists seem ready to pay for such features, encouraging carmakers to keep working on them. The armed forces are also sponsoring research on autonomous vehicles. Some insurers offer discounts to drivers who put a black box in their cars to measure how safely they drive: as cars’ computers get better than humans at avoiding accidents, self-drive mode may become the norm, and manual driving uninsurable.
The first airline to operate a regular international schedule began in 1919, only 16 years after the Wright Brothers showed that people really could fly in heavier-than-air planes. For those businesses that stand to gain and lose from the driverless car, the future may arrive even quicker.
# # #

Fast Brain, Slow Brain

During the past year, I read a book that taught me more about how I think and why I do the things I do than nearly all of the other books that I have ever read. The bad news is that we are all hard-wired to be irrational. The good news is that by knowing this, we can try to notice when we are letting our hard-wiring take over, and then interrupt ourselves and perhaps choose something else that is rational. Nancy reached many of Kahneman's conclusions intuitively...I was always in awe of her insights...so I know that she would have recommended "Thinking, Fast and Slow" to you.
I tried to write my own summary of it, but long before I had a version that I liked, I went to Amazon.com and found these excerpts from various reviews of it.

"Drawing on decades of research in psychology that resulted in a Nobel Prize in Economic Sciences, Daniel Kahneman takes readers on an exploration of what influences thought example by example, sometimes with unlikely word pairs like "vomit and banana." System 1 and System 2, the fast and slow types of thinking, become characters that illustrate the psychology behind things we think we understand but really don't, such as intuition. Kahneman's transparent and careful treatment of his subject has the potential to change how we think, not just about thinking, but about how we live our lives. Thinking, Fast and Slow gives deep--and sometimes frightening--insight about what goes on inside our heads: the psychological basis for reactions, judgments, recognition, choices, conclusions, and much more." --JoVon Sotak

“Brilliant . . . It is impossible to exaggerate the importance of Daniel Kahneman’s contribution to the understanding of the way we think and choose. He stands among the giants, a weaver of the threads of Charles Darwin, Adam Smith and Sigmund Freud. Arguably the most important psychologist in history, Kahneman has reshaped cognitive psychology, the analysis of rationality and reason, the understanding of risk and the study of happiness and well-being . . . A magisterial work, stunning in its ambition, infused with knowledge, laced with wisdom, informed by modesty and deeply humane. If you can read only one book this year, read this one.”— Janice Gross Stein, The Globe and Mail

“A sweeping, compelling tale of just how easily our brains are bamboozled, bringing in both his own research and that of numerous psychologists, economists, and other experts...Kahneman has a remarkable ability to take decades worth of research and distill from it what would be important and interesting for a lay audience...Thinking, Fast and Slow is an immensely important book. Many science books are uneven, with a useful or interesting chapter too often followed by a dull one. Not so here. With rare exceptions, the entire span of this weighty book is fascinating and applicable to day-to-day life. Everyone should read Thinking, Fast and Slow.” —Jesse Singal, Boston Globe

“Absorbingly articulate and infinitely intelligent . . . What's most enjoyable and compelling about Thinking, Fast and Slow is that it's so utterly, refreshingly anti-Gladwellian. There is nothing pop about Kahneman's psychology, no formulaic story arc, no beating you over the head with an artificial, buzzword-encrusted Big Idea. It's just the wisdom that comes from five decades of honest, rigorous scientific work, delivered humbly yet brilliantly, in a way that will forever change the way you think about thinking.”—Maria Popova, The Atlantic

“Profound . . . As Copernicus removed the Earth from the centre of the universe and Darwin knocked humans off their biological perch, Mr. Kahneman has shown that we are not the paragons of reason we assume ourselves to be.” —The Economist

“[Kahneman’s] disarmingly simple experiments have profoundly changed the way that we think about thinking . . . We like to see ourselves as a Promethean species, uniquely endowed with the gift of reason. But Mr. Kahneman’s simple experiments reveal a very different mind, stuffed full of habits that, in most situations, lead us astray.” —Jonah Lehrer, The Wall Street Journal

# # #

Why it is so hard to distinguish science from pseudo-science

Fifteen hundred years before the birth of Christ, a chunk of stuff blew off the planet Jupiter. That chunk soon became an enormous comet, approaching Earth several times around the period of the exodus of the Jews from Egypt and Joshua’s siege of Jericho. The ensuing havoc included the momentary stopping and restarting of the Earth’s rotation; the introduction into its crust of organic chemicals (including a portion of the world’s petroleum reserves); the parting of the Red Sea, induced by a massive electrical discharge from the comet to Earth; showers of iron dust and edible carbohydrates falling from the comet’s tail, the first turning the waters red and the second nourishing the Israelites in the desert; and plagues of vermin, either infecting Earth from organisms carried in the comet’s tail or caused by the rapid multiplication of earthly toads and bugs induced by the scorching heat of cometary gases. Eventually, the comet settled down to a quieter life as the planet Venus, which, unlike the other planets, is an ingénue at just 3500 years old. Disturbed by the new girl in the neighbourhood, Mars too began behaving badly, closely encountering Earth several times between the eighth and seventh centuries BCE; triggering massive earthquakes, lava flows, tsunamis and atmospheric fire storms; causing the sudden extinction of many species (including the mammoth); shifting Earth’s spin axis and relocating the North Pole from Baffin Island to its present position; and abruptly changing the length of the terrestrial year from 360 to its present 365¼ days. There were also further shenanigans involving Saturn and Mercury.
If this story makes you feel even the slightest stab of recognition, you’re probably at least fifty years old, because it’s a summary of the key ideas in Immanuel Velikovsky’s Worlds in Collision. Published in New York in 1950, the book is now almost forgotten, but it was one of the greatest cultural sensations of the Cold War era. Before it was printed, it was trailed in magazines, and immediately shot onto the American bestseller lists, where it stayed for months, grabbing the attention and occupying the energies of both enthusiasts and enraged critics. The brouhaha subsided after a few years, but the so-called Velikovsky affair erupted with greater violence in the late 1960s and early 1970s, when the author gathered a gaggle of disciples and lectured charismatically (and at times incomprehensibly) to large and enraptured campus audiences. Velikovsky’s story was chewed over by philosophers and sociologists convinced of its absurdity, some trying to find standards through which one could securely establish the grounds of its obvious wrong-headedness, others edgily exploring the radical possibility that no such standards existed and reflecting on what that meant for so-called demarcation criteria between science and other forms of knowledge.
Worlds in Collision was Velikovsky’s blockbuster – I haven’t found exact sales figures, though there is an estimate from the late 1970s that millions of copies had been sold, with translations into many major languages – but there were follow-up volumes through the 1970s, fleshing out the basic astronomical-historical picture and offering ingeniously reflexive accounts of the developing controversies over his theories. By the late 1960s and 1970s, Velikovsky’s books must have been in most American college dorm rooms. Other countries were not nearly as besotted, but neither were they immune: in 1972, both the BBC and the Canadian Broadcasting Corporation produced respectful documentaries on the man and his views. Velikovskianism had gained so much traction in America that in 1974 there was a huge set-piece debate over his views at the annual meeting of the American Association for the Advancement of Science. His scientific opponents reckoned he was ‘quite out of his tree’, while some of his acolytes – and these included an assortment of scientists with appropriate credentials – were of the opinion that Velikovsky was ‘perhaps the greatest brain that our race has produced’.
Velikovsky appeared in American culture pretty much as a man from Mars: he was almost unknown to the intellectual communities whose expertise his book most directly engaged. Born in Vitebsk (now in Belarus) in 1895 to well-off Jewish parents, Velikovsky studied a wide range of subjects at Montpellier and Edinburgh before taking his medical degree in Moscow in 1921. Emigrating to Berlin, then to Vienna and Palestine, he learned psychoanalysis under Freud’s pupil Wilhelm Stekel and practised as a psychiatrist before escaping the Nazis in 1939 and living first on the Upper West Side of Manhattan and later in Princeton. From that point on, he never had an academic appointment or regular salaried employment, apparently supporting himself with money inherited from his father, a bit of practice as a shrink and, later, book royalties and fees for speaking engagements. When you look up Velikovsky online, he’s most often described as a psychiatrist or psychoanalyst. Despite the celebrity of his astronomical-historical stories, and despite the fact that he almost entirely gave up the couch when he became a celebrity, that’s quite right.
Between the wars, Velikovsky turned himself into one of those then common Central European scholars of enormous intellectual range, always seeking the Big Unifying Idea. His interest in planetary astronomy was a late development: it was psychoanalysis and Jewish history that were the keys to the story in Worlds in Collision. A Zionist, though not notably religious, Velikovsky was infuriated by Freud’s last book, Moses and Monotheism (1937), which claimed that Moses wasn’t actually Jewish but a runaway Egyptian priest from Pharaoh Akhenaten’s monotheistic sun religion, later murdered by the Israelites, who ended up fabricating a syncretic deity from an Egyptian sun god and a Midianite volcano god called Jehovah. Subsequently, the idea of a Messiah was concocted as an expression of guilt for father-murder, a sense of guilt which has been handed down to the Jews as a common psychological inheritance. The historical account Judaism offered of itself was therefore, according to Freud, a form of dream-work, a collective repressed memory needing skilled decoding by modern interpreters. To Velikovsky, all this was yet another manifestation of Freud’s Jewish self-hatred. How dare he impugn the Old Testament story about who the Jews were and how they came to be Chosen? But Freud’s methods in Moses and Monotheism nevertheless signalled a productive new way of interpreting human history, one in which psychoanalytic techniques could effectively expose the true meaning of the world’s dream-myths.
At the same time, Velikovsky was convinced that the Old Testament, decoded in this way, was an overwhelmingly reliable historical account, that the Jewish records could be used as a standard to calibrate archives of dream-myths – from the Egyptian and Greek to the Chinese and Choctaw – and that, once this radical reinterpretation of world religions was achieved, we would have an accurate account of the physical events that had occurred in historical times and were encrypted in the dream-myths.
Although Worlds in Collision was a pastiche of comparative mythology and planetary astronomy, its major purpose was a radical reconstruction of history. Velikovsky had worked through the annals of myth and ancient history, which substantially supported each other and told the same historical stories; the Jewish story and its chronologies could be used reliably to gauge all the others. The apparent datings of events did differ, but a wholesale recalibration of ancient chronology was both possible and necessary. The ancient historians had got their dates badly wrong, and so too had the astronomers, biologists and geologists, who now needed to understand that spectacular cosmic catastrophes had happened and that historical methods of interpreting ancient texts could be used to establish radically unorthodox scientific stories. Properly understood, Jewish history not only laid bare the inaccuracy of scientific accounts, it securely established the reality of natural events and processes which scientists assumed could not possibly have happened.
It was American scientists who went ballistic over Velikovsky, not historians, and one purpose of Michael Gordin’s probing and intelligent The Pseudoscience Wars is to ask why they responded to Velikovsky as they did. Putting that sort of question is a sign of changed times. Passions have cooled; circumstances have altered. Almost all previous books about Velikovsky and the affair have been for or against, celebratory or accusatory, justifying the way the scientific community handled the business or criticising them for handling it badly. There’s no evidence that Gordin considers Velikovsky’s theories anything but nutty, yet affirming and identifying their nuttiness is a non-barking dog here. Gordin is a disengaged and dispassionate historian of science – much of his work has been about Russian science and the science and politics of nuclear weapons in the postwar period – and the questions he poses about Velikovsky are meant to illuminate the condition of American science in ‘the postwar public sphere’ and to figure out what has been meant by the notion of ‘pseudoscience’. The Velikovsky affair was at once a long-running episode of surpassing strangeness and, Gordin says, ‘ground zero’ in a series of Cold War era ‘pseudoscience wars’. Understanding the pathological is here meant to encourage a new perspective on the normal.
Scientists in the years after World War Two were upset by Velikovsky because, Gordin argues, they felt insecure, uncertain of the new authority and influence they had apparently gained by building the bomb and winning the war. Enormous amounts of government money had been dumped on them and government agencies designed to ensure the support of even basic research had been established, with unprecedented arrangements allowing the recipients of government largesse to determine its distribution. Yet there were reasons to be fearful, and in Cold War American culture there was more than enough fear to go around. Some forms of fear specially afflicted scientists. First, there was concern that political support might translate into political control. There were the Marxists – not all that many, of course, in America – who had actively worked for the organised planning and direction of scientific research, and there was the cautionary tale of genetics in the Soviet Union, especially after 1948, when Stalin had decreed, against the canons of ‘Western bourgeois’ Mendelian genetics, that the ideas of the charlatan Ukrainian agronomist Trofim Lysenko about the inheritance of acquired characteristics should count as dogma. Lysenkoism seemed to show how vulnerable orthodox science might be to the fantasies and ideologies of those who weren’t scientists at all or whose scientific credentials had been burnished by the political powers. And there were the McCarthyite witch-hunts, some of which targeted distinguished scientists. How much autonomy did American scientists actually have? How vulnerable was that autonomy to the dictates of politicians and to the delusions of popular culture? No one could be sure. In 1964, Richard Hofstadter brilliantly described the ‘paranoid style’ of American politics: your opponents weren’t simply wrong, they were conspiring against you, mobilising dark forces to suppress free and rational thought. The joining up of psychiatry and history was in the air, like the UN’s Black Helicopters over the US – or perhaps in the cultural water, like fluoride dumped in reservoirs by alien agents.
Velikovskianism belonged to the intellectual genre known as catastrophism, the notion that sudden and massive changes, not just gradual ones, have occurred in the natural world and that the more or less uniform natural processes now observable do not constitute all the modes of change that have historically shaped the world. Darwin was a notable uniformitarian, and Velikovsky opposed Darwinism for that reason, but there is nothing inherently unscientific about catastrophism, nor did Velikovsky’s catastrophism invoke divine intervention. It was bizarre, but it was offered as a scientific (not a religious) theory about natural objects, natural events and natural powers. At a theoretical level, the objections orthodox scientists had about Velikovskianism mostly had to do with celestial mechanisms: his assertions about the insufficiency of gravitation and inertial motion to account for planetary behaviour and related claims about the significance of electromagnetic forces. The problem at a factual level was that these spectacular catastrophes were supposed to have happened quite recently, while orthodox science recognised no evidence that they had.
The greatest ingenuity of Velikovsky’s thought lay in its merging of naturalistic catastrophism and psychoanalytic theory. This allowed him to account at once for the annals of comparative myth and religion and for scientists’ resistance to his scheme, and that is the reason Worlds in Collision was offered, in Gordin’s phrase, as ‘a dream journal for humanity’. The key was what Velikovsky called ‘collective amnesia’. The catastrophes let loose on Earth by the Venus comet had so scarred the human mind that memories of them had either been erased or, more consequentially, encoded in allegory. Just as with Oedipal father-killing and mother-mating, amnesia and suppressed memory were coping mechanisms, and so a proper interpretation of ancient myth would decode the allegorical forms into which traumatic memories had been cast. At the same time, what was the violence of scientists’ opposition to Velikovsky’s ideas but a persistence of that same tendency to deny the catastrophic truth of what had happened to the human race, how very close it had come to obliteration? The fact that the scientists were leagued against him was precisely what Velikovsky’s theories predicted. It was further evidence that he was right. What the scientists needed, indeed what the culture as a whole needed, was therapy, a cure for collective amnesia.
Here are the reasons for the enormous appeal of Velikovsky’s theories to Cold War America, and, specifically, to the young, the angry and the anxious. Lecturing to campus audiences, Velikovsky told the students what they already knew: the world was not an orderly or a safe place; Armageddon had happened and could happen again:
The belief that we are living in an orderly universe, that nothing happened to this Earth and the other planets since the beginning, that nothing will happen till the end, is a wishful thinking that fills the textbooks … And so it is only wishful thinking that we are living in a safe, never perturbed, solar system and a safe, never perturbed past.
Alfred Kazin, writing in the New Yorker, understood that this was part of Velikovsky’s appeal, and tellingly linked the great pseudoscientist with the Doomsday warnings of orthodox atomic scientists: Velikovsky’s work ‘plays right into the small talk about universal destruction that is all around us now’, he said, ‘and it emphasises the growing tendency in this country to believe that the physicists’ irresponsible scare warnings must be sound.’
The counterculture emerging in 1960s and 1970s America was born from fear and bred to hope. It feared nuclear catastrophe; it was disposed to think that the military-industrial-academic complex had scant regard for preventing catastrophe or even that it was conspiring to bring it about. (In 1962, the war-gamer Herman Kahn suggested that we should begin to ‘think the unthinkable’ and work out how to fight and win a nuclear war, and in 1964 Stanley Kubrick’s Dr Strangelove made Kahn’s vision box-office.) The counterculture expressed whatever optimism it had about the future in a characteristically American psychotherapeutic idiom. So did Velikovsky. Humankind could save itself if it confronted its irrationality and the collective amnesia that was responsible for all forms of racist, social and military violence: ‘Nothing is more important for the human race than to know our past and to face it.’ Velikovsky offered both diagnosis and treatment. And if his theories were not, in themselves, religious, they so clearly pointed to political and moral consequences that one disciple cited his Velikovskianism to the draft board as a way of getting out of the Vietnam War: pacifism flowed from planetary astronomy. (The reluctant soldier happily failed his physical, not his metaphysical.)
When Velikovsky’s bizarre story about planetary hi-jinks was so energetically puffed up in 1950, the American scientific establishment was presented with a choice, a choice endemically faced by orthodoxy confronted by intellectual challenges from alien sources: do you ignore the heterodox? Do you invite it to sit down with you and have a calm and rational debate? Do you crush it? There were scientific voices counselling Olympian disdain but they were in general overruled. Still, pretending to take no notice of Velikovsky might have been the plan had Worlds in Collision not been published by Macmillan, a leading producer of scientific textbooks, and packaged not as an offering to, say, comparative mythology or as popular entertainment, but as a contribution to science. Elite scientists, notably at Harvard, reckoned that they might be able to control what Macmillan published when it was represented as science. A letter-writing campaign was organised to get Macmillan to withdraw from its agreement to publish the book; credible threats were made to boycott Macmillan textbooks; hostile reviews were arranged; questions were raised about whether the book had been peer-reviewed (it had); and, when Worlds in Collision was published anyway, further (successful) pressure was exerted to make Macmillan wash its hands of the thing and shift copyright to another publisher. The editor who had handled the book was let go, and a scientist who provided a blurb and planned a New York planetarium show based on Velikovsky’s theories – admittedly not the sharpest knife in the scientific drawer – was forced out of his museum position and never had a scientific job again.
From an uncharitable point of view, this looked like a conspiracy, a conspiracy contrived by dark forces bent on the suppression of free thought and different perspectives – and the Velikovskians took just that view. An establishment conspiracy centred on Harvard had sought to control scientific thought; the conspirators had closed minds and wanted to close others’ minds; they refused to engage with Velikovsky’s ideas at the level of evidence, to show exactly where he was wrong. When Velikovsky made specific predictions of what further observation and experiment would show, his enemies declined to undertake those observations and experiments. This was the way the Commies behaved, Velikovsky’s allies suggested. Analogies were drawn from the history of science seen as the history of martyrs to dogma. Velikovsky figured himself as Galileo and his opponents as Galileo’s critics, who wouldn’t even look through the telescope to see the moons of Jupiter with their own eyes. ‘Perhaps in the entire history of science,’ Velikovsky said, ‘there was not a case of a similar violent reaction on the part of the scientific world towards a published work.’ Newsweek wrote about the spectacle of scientific ‘Professors as Suppressors’ and the Saturday Evening Post made sport of the establishment reaction as ‘one of the signal events of this year’s “silly season”’. Some scientists who were utterly convinced that Velikovsky’s views were loopy had qualms about how the scientific community had treated him. Einstein, in whose Princeton house Velikovsky was a frequent visitor, was one of them. Interviewed just before his death by the Harvard historian of science I.B. Cohen, Einstein said that Worlds in Collision ‘really isn’t a bad book. The only trouble with it is, it is crazy.’ Yet he thought, as Cohen put it, that ‘bringing pressure to bear on a publisher to suppress a book was an evil thing to do.’
The Velikovsky affair made clear that there were radically differing conceptions of the political and intellectual constitution of a legitimate scientific community, of what it was to make and evaluate scientific knowledge. One appealing notion was that science is and ought to be a democracy, willing to consider all factual and theoretical claims, regardless of who makes them and of how they stand with respect to canons of existing belief. Challenges to orthodoxy ought to be welcomed: after all, hadn’t science been born historically through such challenges and hadn’t it progressed by means of the continual creative destruction of dogma? This, of course, was Velikovsky’s view, and it was not an easy matter for scientists in the liberal West to deny the legitimacy of that picture of scientific life. (Wasn’t this the lesson that ought to be learned from the experience of science in Nazi Germany and Stalinist Russia?) Yet living according to such ideals was impossible – nothing could be accomplished if every apparently crazy idea were to be given careful consideration – and in 1962 Thomas Kuhn’s immensely influential Structure of Scientific Revolutions commended a general picture of science in which ‘dogma’ (daringly given that name) had an essential role in science and in which ‘normal science’ rightly proceeded not through its permeability to all sorts of ideas but through a socially enforced ‘narrowing of perception’. Scientists judged new ideas to be beyond the pale not because they didn’t conform to abstract ideas about scientific values or formal notions of scientific method, but because such claims, given what scientists securely knew about the world, were implausible. Planets just didn’t behave the way Velikovsky said they did; his celestial mechanics required electromagnetic forces which just didn’t exist; the tails of comets were just not the sorts of body that could dump oil and manna on Middle Eastern deserts. A Harvard astronomer blandly noted that ‘if Dr Velikovsky is right, the rest of us are crazy.’
By 1964, some of Velikovsky’s scientific critics were drawing a different lesson from the affair: the nuclear chemist Harold Urey was concerned ‘about the lack of control in scientific publication … Today anyone can publish anything,’ and it was impossible to tell the signal of truth from the noise of imposters. We must return to the past, Urey urged, when there was a proper intellectual class system and a proper system of quality control: ‘Science has always been aristocratic.’ In a society insisting on its democratic character, that was not a wildly popular position, though doubtless it had appealed to the scientists who tried to prevent the original publication of Velikovsky’s book and who sought to block his later efforts to publish in mainstream scientific journals.
Then there was the tactic of labelling Velikovskianism ‘pseudoscience’. One of the strengths of Gordin’s book is its careful historical unpicking of what scientists had in mind, and what they were doing, when they called something pseudoscientific. Pseudoscience isn’t bad science – incompetent, shallow, containing egregious errors of fact or reasoning. (In those senses, there’s a lot of bad science around which is almost never identified as pseudoscience.) Rather, what postwar scientists meant when they called Velikovskianism pseudoscience (along with contemporary parapsychology, resurgent eugenics, Wilhelm Reich’s orgone energy theory, creationism and the fantastical world ice theory) was that these were bodies of thought that pretended to be scientific, dressing themselves up in the costumes of science, but which were not the thing they pretended to be. Pseudoscientific thought might indeed contain errors of fact and theory, but the orthodox regarded it as fundamentally misconceived.
There were attempts to spell out in exactly what ways Velikovsky had transgressed the rules of scientific method, and, while some critics satisfied themselves that they had identified those errors, there was little if any agreement about what this transgressed method was. For example, Velikovsky did make a series of specific predictions (about the temperature and chemical composition of Venus and about Jupiter as a radio source) which would have permitted his system to be empirically tested, and some of these predictions were eventually advertised as confirmed (even in major scientific journals), but it proved notoriously difficult to disentangle those specific observations – whether supposedly confirming or refuting – from a complex network of claims and assumptions. This ‘network’ character of confirmation and disconfirmation is now generally recognised as endemic to science. Einstein spoke with his usual wisdom when asked how scientists might tell by inspection whether unorthodox ideas were brilliant or barmy. He replied, with Velikovsky clearly in mind: ‘There is no objective test.’ The term ‘pseudoscientist’ is a bit like ‘heretic’. To be a pseudoscientist is to be accused; you don’t describe yourself as a pseudoscientist. (Velikovsky, indeed, was exquisitely cautious about joining a salon des refusĂ©s, disinclined to associate his cause with that of the parapsychologists and members of the other pseudoscientific tribes who identified themselves as martyrs to orthodoxy.) So there was a lot of pseudoscience about in the Cold War decades, but the category – not the content – was manufactured by orthodox scientists concerned about maintaining the boundaries of legitimacy but unable to find a stable and coherent way of defining what the category consisted of, other than its violation of valued structures of plausibility.
If pseudosciences are not scientific, neither are they anti-scientific. They flatter science by elaborate rituals of imitation, rejecting many of the facts, theories and presumptions of orthodoxy while embracing what are celebrated as the essential characteristics of science. That is at once a basis for the wide cultural appeal of pseudoscience and an extreme difficulty for those wanting to show what’s wrong with it. Velikovsky advertised his work as, so to speak, more royalist than the king. Did authentic science have masses of references and citations? There they were in Worlds in Collision. Was science meant to aim at the greatest possible explanatory scope, trawling as many disciplines as necessary in search of unified understanding? What in orthodoxy could rival Velikovsky’s integrative vision? Authentic science made specific predictions of what further observation and experiment would show. Velikovsky did too. Was science ideally open to all claimants, subjecting itself to all factual criticisms and entertaining the possibility of radically new theoretical interpretations? Who behaved more scientifically – Velikovsky or the Harvard ‘suppressors’?
Gordin sides with those – like Einstein and a number of modern sociologists and philosophers – who doubt that universal and context-independent criteria can be found reliably to distinguish the scientific from the pseudoscientific. But here is a suggestion about how one might do something, however imperfectly, however vulnerable to counter-instances and however apparently paradoxical, to get a practical grip on the difference between the genuine article and the fake. Whenever the accusation of pseudoscience is made, or wherever it is anticipated, its targets commonly respond by making elaborate displays of how scientific they really are. Pushing the weird and the implausible, they bang on about scientific method, about intellectual openness and egalitarianism, about the vital importance of seriously inspecting all counter-instances and anomalies, about the value of continual scepticism, about the necessity of replicating absolutely every claim, about the lurking subjectivity of everybody else. Call this hyperscience, a claim to scientific status that conflates the PR of science with its rather more messy, complicated and less than ideal everyday realities and that takes the PR far more seriously than do its stuck-in-the-mud orthodox opponents. Beware of hyperscience. It can be a sign that something isn’t kosher. A rule of thumb for sound inference has always been that if it looks like a duck, swims like a duck and quacks like a duck, then it probably is a duck. But there’s a corollary: if it struts around the barnyard loudly protesting that it’s a duck, that it possesses the very essence of duckness, that it’s more authentically a duck than all those other orange-billed, web-footed, swimming fowl, then you’ve got a right to be suspicious: this duck may be a quack.

Fairness is an F-word

There is an opinion piece in the Chronicle Review about fairness. The author suggests that modern-day Americans treat selfishness as the opposite of fairness, when the real opposite is favoritism. The whole article is at http://chronicle.com/article/In-Defense-of-Favoritism/135610/. Here are some core paragraphs. I put one key sentence in bold.
--ww--

Children and parents were taught something very different about envy in the 19th century. Parents taught their children to accommodate negative feelings like envy using stoic resolve. When the educational philosopher Felix Adler analyzed the biblical Cain and Abel parable, in his 1892 The Moral Instruction of Children, he exhorted young people to master and suppress their feelings of envy, or else they would end up like murderous Cain (recall that envy led Cain to kill his brother after God preferentially favored Abel's animal sacrifice). Envy was to be treated with self-discipline. There will always be people better off than you, and the sooner you accept and conquer your envy, the better off you'll be.
The social historian Susan J. Matt argues that all this changed in the 20th century, and by the 1930s a whole new childhood education regarding envy was in full swing. Social workers "praised parents who bought extra gifts for their children. If a son or daughter needed a hat, adults should buy it, but they should also purchase hats for their other offspring, whether or not they needed them. This would prevent children from envying one another."
The phenomenon of sibling rivalry made its way into the textbooks as a potentially damaging pattern of envy—one that is best addressed by giving all the kids an equal fair share of everything. Subduing or restraining one's feelings of deprivation and envy was considered old school, and new parents (living in a more prosperous nation) sought to stave off those feelings in their children by giving them more stuff.
This trend—of assuaging feelings of deprivation by distributing equal goods to children—grew even stronger in the baby-boomer era and beyond. It has also dovetailed nicely with the rise of an American consumer culture that defines the good life in part by material acquisition. "In a consumer society," Ivan Illich says, "there are inevitably two kinds of slaves: the prisoners of addiction and the prisoners of envy." Today's culture tries to spare kids the pains of sibling and peer rivalry, but does so by teaching them to channel their envy into the language and expectation of fairness—and a reallocation of goods that promises to redress their emotional wounds.
If our high-minded notions of retributive justice have roots in the lower emotions of revenge, then why should we be surprised if fairness has roots in envy? I have no illusions and feel entirely comfortable with the idea that fairness has origins in baser emotions like envy. But most egalitarians will find this repugnant, and damaging to their saintly and selfless version of fairness.
The merit-based critique of fairness is well known. Plato spends much of The Republic railing against democracy on the grounds that know-nothing dolts should never have equal political voice with experts (aristoi). Elitism is a dirty word in our culture, but not for the ancients.
American hostility to elitism is especially manifest during election seasons, when politicians work hard to downplay their own intelligence and intellectual accomplishments so they might seem less threatening (less eggheadish) to the public. I am in agreement with many of the merit-based critiques of egalitarian fairness. I don't want my political leaders to be "regular guys." I want them to be elite in knowledge and wisdom. I want them to be exceptional.
Our contemporary hunger for equality can border on the comical. When my son came home from school with a fancy ribbon, I was filled with pride to discover that he had won a footrace. While I was heaping praise on him, he interrupted to correct me. "No, it wasn't just me," he explained. "We all won the race!" He impatiently educated me. He wasn't first or second or third—he couldn't even remember what place he took. Everyone who ran the race was told that they had won and were all given the same ribbon. "Well, you can't all win a race," I explained to him, ever-supportive father that I am. "That doesn't even make sense." He simply held up his purple ribbon and raised his eyebrows at me, as if to say, "You are thus refuted."
I don't want my son and every other kid in his class to be told they'd "won" the footrace at school just because we think their self-esteem can't handle the truth. Equal rewards for unequal achievements foster the dogma of fairness, but they don't improve my son or the other students.
The contrast of our fairness system with merit-based Chinese preschool is astounding. Imagine your 4-year-old preschooler getting up the nerve to stand in front of her class to tell a story. It's a sweet rite of passage that many children enjoy around the world, and it builds self-esteem and confidence. Now imagine that when your preschooler is finished spinning her yarn, the other children tell her that her story was way too boring. One kid points out that he couldn't understand it, another kid says her voice was much too quiet, another says she paused too many times, and another tells her that her story had a terrible ending. In most schools around the world, this scenario would produce a traumatic and tearful episode, but not so in China, where collective criticism is par for the course—even in preschool.
At Daguan Elementary School, in Kunming, China, this daily gantlet is called the "Story Teller King." American teachers who saw this exercise were horrified by it. But it is indicative of Chinese merit-based culture.
# # #

Matternet

Look! Up in the sky! It's a bird, it's a plane, no, it's matternet.

From this week's Economist:

The spread of mobile phones in developing countries in the past decade has delivered enormous social and economic benefits. By providing a substitute for travel, phones can make up for bad roads and poor transport infrastructure, helping traders find better prices and boosting entrepreneurship. But although information can be delivered by phone—and, in a growing number of countries, money transferred as well—there are some things that must be delivered physically. For small items that are needed urgently, such as medicines, why not use drone helicopters to deliver them, bypassing the need for roads altogether?
That, at least, was the idea cooked up last year at Singularity University, a Silicon Valley summer school where eager entrepreneurs gather in the hope of solving humanity’s grandest challenges with new technologies. The plan is to build a network of autonomously controlled, multi-rotor unmanned aerial vehicles (UAVs) to carry small packages of a standardised size. Rather than having a drone carry each package directly from sender to recipient, which could involve a long journey beyond the drone’s flying range, the idea is to build a network of base stations, each no more than 10km (6 miles) from the next, with drones carrying packages between them.
After arrival at a station, a drone would swap its depleted battery pack for a fully charged one before proceeding to the next station. The routing of drones and the allocation of specific packages to specific drones would all be handled automatically, and deliveries would thus be possible over a wide area using a series of hops. It is, in short, a physical implementation of the “packet switching” model that directs data across the internet, which is why its creators call their scheme the “matternet”.
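To make the "packet switching" analogy concrete, here is a minimal sketch in Python of the hop-by-hop routing the article describes. It is not Matternet's or Aria's actual software; the station names, coordinates, and the breadth-first search are illustrative assumptions, with only the roughly 10km single-hop spacing taken from the article.

```python
# A minimal sketch (not Matternet's actual software) of the "physical packet
# switching" idea: base stations are graph nodes, an edge links any two
# stations within one battery charge of each other (~10 km per the article),
# and a package is routed station-to-station along the fewest hops.
from collections import deque
from math import dist

MAX_HOP_KM = 10.0  # per-hop limit; the drone swaps batteries at each station

# Hypothetical station coordinates, in km on a local grid (invented for illustration).
stations = {
    "clinic_A": (0.0, 0.0),
    "village_B": (8.0, 3.0),
    "depot_C": (15.0, 9.0),
    "hospital_D": (22.0, 14.0),
}

# Adjacency list: which stations are reachable in a single hop.
links = {
    s: [t for t in stations if t != s and dist(stations[s], stations[t]) <= MAX_HOP_KM]
    for s in stations
}

def route(origin, destination):
    """Breadth-first search for the fewest-hops path between two stations."""
    frontier = deque([[origin]])
    seen = {origin}
    while frontier:
        path = frontier.popleft()
        if path[-1] == destination:
            return path
        for nxt in links[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(path + [nxt])
    return None  # no chain of stations connects the two points

print(route("clinic_A", "hospital_D"))
# -> ['clinic_A', 'village_B', 'depot_C', 'hospital_D']
```

In a real deployment the routing would also have to weigh battery state, wind, and payload allocation across many drones, but the basic design choice is the same one the internet made: short, standardised hops between relay points rather than long point-to-point journeys.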
Over the matternet, so the vision goes, hospitals could send urgent medicines to remote clinics more quickly than they could via roads, and blood samples could be sent and returned within hours. A farmer could place an order for a new tractor part by text message and pay for it via mobile money-transfer. A supplier many miles away would then take the part to the local matternet station for airborne dispatch via drone.
Mind over matter
Andreas Raptopoulos, the entrepreneur who led the academic team, reckons that the scheme would be competitive with building all-weather roads. A case study of the Maseru district of Lesotho put the cost of a network of 50 base-stations and 150 drones at $900,000, compared with $1m for a 2km, one-lane road. The advantage of roads, however, is that they can carry heavy goods and people, whereas matternet drones would be limited to payloads of 2kg in a standard 10-litre container. But the scheme is potentially lifesaving in remote areas, and might also have commercial potential to deliver small packages in the rich world.
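As a rough back-of-the-envelope check on that comparison, using only the figures quoted above plus the assumption that the 50 stations are strung in a chain at the maximum 10km spacing (a best case, not a claim about the Lesotho study), the network works out to far less per kilometre of reach than the road, at the price of the 2kg payload cap:

```python
# Back-of-envelope comparison of the figures quoted above; the chain layout at
# maximum spacing is an illustrative best case.
network_cost = 900_000             # 50 base stations + 150 drones (Maseru case study)
stations, drones = 50, 150
road_cost_per_km = 1_000_000 / 2   # $1m for a 2 km one-lane road

cost_per_station = network_cost / stations    # $18,000 per station, incl. 3 drones each
max_corridor_km = (stations - 1) * 10         # stations at most 10 km apart, in a chain
network_cost_per_km = network_cost / max_corridor_km

print(f"${cost_per_station:,.0f} per station, "
      f"${network_cost_per_km:,.0f}/km of corridor vs ${road_cost_per_km:,.0f}/km of road")
# -> $18,000 per station, $1,837/km of corridor vs $500,000/km of road
```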
Since the original proposal, however, an ideological disagreement has emerged over how best to implement this drone-powered internet for objects. Two separate groups are now taking rather different approaches. The first, led by Mr Raptopoulos, has formed a company, called Matternet, to develop the drone and base-station hardware, and the software that will co-ordinate them. The company then hopes to sell the technology to government health departments and non-profit groups. Just as mobile phones have spurred development in poor countries, Mr Raptopoulos hopes drone delivery will do something similar.
The second group is called Aria (“autonomous roadless intelligent array”). It believes the matternet should be free, open and based on standardised protocols, just like the internet. It is developing these protocols and building prototypes that adhere to them, and inviting others to follow suit. Aria is not promoting any particular use of the technology, and will not necessarily build or run networks itself. “We understand there will be hundreds of applications, but we are not interested in running such applications,” says Arturo Pelayo, Aria’s co-founder. “We won’t aim for understanding every single geographical and cultural context where the system might be used.”
Both groups have recently started testing their first prototypes. Matternet ran a series of successful field tests of its prototype UAVs in the Dominican Republic and Haiti in September, and met local groups to sell the idea. Meanwhile, Aria also spent the summer testing, and showcased its ideas, such as the use of retrofitted shipping containers for base stations, at the Burning Man festival held in the Nevada desert in August. Flying drones in high winds without crashing into anyone presented quite a challenge.
For the delivery of drugs in developing countries, a rider on a motorbike may be a much simpler and more rugged solution. Maintaining a network of drones—a complex, immature technology—is unlikely to be easy, particularly in the remote areas that Matternet intends to target. It may be that congested city centres in rich countries will prove a more promising market.
And whether in the rich or poor world, any widespread deployment of delivery-drone fleets is bound to raise concerns about safety and regulation. It is undoubtedly a clever idea. But moving packets of data around inside the predictable environment of a computer network is one thing; moving objects around in the real world is, you might say, a very different matter.
# # #

Game over, Tech wins

I ran across an interesting article in Eurozine (http://www.eurozine.com/articles/2012-11-16-vargasllosa-en.html) in which the sociologist Gilles Lipovetsky debates the Nobel laureate Mario Vargas Llosa on points from Vargas Llosa's new book "Civilization of the Spectacle". Ever since C. P. Snow, people have been debating his "two cultures": science versus the arts. The article declares that the debate has to be recast, because it has entered a phase in which technology is dominant. I'm not advocating that you read the whole article; you've heard all the arguments before. The bottom line is: technology has replaced the fine arts as the force that elevates mankind. Something for technologists to ponder.
--ww--

Lipovetsky: "What was noble culture, high culture, for the Moderns? Culture represented the new absolute. As the Moderns began to develop scientific and democratic society, the German Romantics created a form of religion through art, whose mission was to contribute what neither religion nor science were providing, because science simply describes things. Art became something sacred. In the seventeenth and eighteenth centuries the poet – and artists in general – were those who showed the way, who said what religion was saying earlier.

When we observe what culture is in the world of consumption, in the world of the spectacle – what you aptly call the "civilization of the spectacle" – what we see is precisely the collapse of that Romantic model. Culture becomes a unit of consumption. We're no longer waiting for culture to change life, change the world, as Rimbaud thought. That was the task of the poets, such as Baudelaire, who rejected the world of the utilitarian. They believed that high culture was what could change man, change life. Today, nobody can possibly believe that high culture is going to change the world. In fact, on that score it's the society of entertainment, of the spectacle, that's won. What we expect from culture is entertainment, a slightly elevated form of amusement; but what changes life today is basically capitalism, technology. And culture turns out to be the crowning glory of all this."

Paradigm shifts: how scientists really work

The Structure of Scientific Revolutions at Fifty

Fifty years ago, Thomas Kuhn, then a professor at the University of California, Berkeley, released a thin volume entitled The Structure of Scientific Revolutions. Kuhn challenged the traditional view of science as an accumulation of objective facts toward an ever more truthful understanding of nature. Instead, he argued, what scientists discover depends to a large extent on the sorts of questions they ask, which in turn depend in part on scientists’ philosophical commitments. Sometimes, the dominant scientific way of looking at the world becomes obviously riddled with problems; this can provoke radical and irreversible scientific revolutions that Kuhn dubbed “paradigm shifts” — introducing a term that has been much used and abused. Paradigm shifts interrupt the linear progression of knowledge by changing how scientists view the world, the questions they ask of it, and the tools they use to understand it. Since scientists’ worldview after a paradigm shift is so radically different from the one that came before, the two cannot be compared according to a mutual conception of reality. Kuhn concluded that the path of science through these revolutions is not necessarily toward truth but merely away from previous error.
Kuhn’s thesis has been hotly debated among historians and philosophers of science since it first appeared. The book and its disparate interpretations have given rise to ongoing disagreements over the nature of science, the possibility of progress, and the availability of truth. For some, Kuhn was a relativist, a prophet of postmodernism who considered truth a social construct built on the outlook of a community at a specific point in history. For others, Kuhn was an authoritarian whose work legitimized science as an elitist power structure. Still others considered him neither a relativist nor an authoritarian, but simply misunderstood. Kuhn’s work was ultimately an examination of the borders between the scientific and the metaphysical, and between the scientific community and society at large. As he discovered, these boundaries are not always clear. It behooves us to bear this in mind as we take the occasion of the fiftieth anniversary to revisit his book and the controversies surrounding it.
Thomas Samuel Kuhn was born in Cincinnati in 1922. He attended Harvard — where his father, a hydraulic engineer, had also studied — and earned a bachelor’s degree in physics in 1943. After graduating, he became a junior researcher on radar, first at Harvard and then in Europe at the U.S. Office of Scientific Research and Development (OSRD). It was in these jobs that he became close with James B. Conant, who served as both president of Harvard and the head of OSRD. After the war, Kuhn returned to academic life at Harvard, receiving a Ph.D. in physics in 1949, and continuing on to teach the history of science. But the Harvard faculty denied him tenure in 1956, after which he left for Berkeley, where he was eventually made a full professor of the history of science in 1961. He never returned to physics professionally. By 1964, he had made his way to Princeton, and ended his career at M.I.T. as a professor of philosophy, where he retired in 1991. But it was at Berkeley, in 1962, that Kuhn published the work that was to mark his career, and the course of inquiry in the philosophy of science, from that point on: The Structure of Scientific Revolutions.
The earliest seeds that would grow into Kuhn’s famous book were planted when he was a doctoral student in 1947. Conant tasked Kuhn with giving a series of lectures on seventeenth-century theories of mechanics. It was during the preparation of these lectures that Kuhn first began to develop his ideas. He sought to grasp exactly why Newton had discovered the laws of motion, and why it had taken mankind so long to do that, considering that Aristotle’s theories about motion had been so manifestly wrong. Moreover, Kuhn was confused about why Aristotle had been so wrong, when he had gotten much of biology and social science so right.
One summer day, it occurred to Kuhn rather suddenly that Aristotle had been operating from within a completely different framework of physics than the modern understanding. For Aristotle, the growing of a child into an adult was a similar process to that of a rock falling to the ground: each is moving toward its natural end, the place and state where it belongs. Contrary to Newtonian physics, Kuhn later explained in the preface to his 1977 collection The Essential Tension, “position itself was ... a quality in Aristotle’s physics, and a body that changed its position therefore remained the same body only in the problematic sense that the child is the individual it becomes. In a universe where qualities were primary, motion was necessarily a change-of-state rather than a state.” This idea germinated in Kuhn’s mind as he continued his doctoral work, and later formed part of the basis for The Structure of Scientific Revolutions.
The argument of Structure is not especially complicated. Kuhn held that the historical process of science is divided into three stages: a “normal” stage, followed by “crisis” and then “revolutionary” stages. The normal stage is characterized by a strong agreement among scientists on what is and is not scientific practice. In this stage, scientists largely agree on what are the questions that need answers. Indeed, only problems that are recognized as potentially having solutions are considered scientific. So it is in the normal stage that we see science progress not toward better questions but better answers. The beginning of this period is usually marked by a solution that serves as an example, a paradigm, for further research. (This is just one of many ways in which Kuhn uses the word “paradigm” in Structure.)
A crisis occurs when an existing theory involves so many unsolved puzzles, or “anomalies,” that its explanatory ability becomes questionable. Scientists begin to consider entirely new ways of examining the data, and there is a lack of consensus on which questions are important scientifically. Problems that had previously been left to other, non-scientific fields may now come into view as potentially scientific.
Eventually, a new exemplary solution emerges. This new solution will be “incommensurable” — another key term in Kuhn’s thesis — with the former paradigm, meaning not only that the two paradigms are mutually conflicting, but that they are asking different questions, and to some extent speaking different scientific languages. Such a revolution inaugurates a new period of normal science. Thus normal science can be understood as a period of “puzzle-solving” or “mopping-up” after the discovery or elucidation of a paradigm-shifting theory. The theory is applied in different contexts, using different variables, to fully flesh out its implications. But since every paradigm has its flaws, progress in normal science is always toward the point of another crisis.
Kuhn relies heavily on a “particularly famous case of paradigm change”: the sixteenth- and seventeenth-century debate over whether the sun goes around the earth or the earth around the sun. (This had been the subject of Kuhn’s previous book, The Copernican Revolution [1957].) Before Copernicus, Ptolemy conceived of a universe with the earth at its center. The celestial spheres wrapped around the earth like the layers of an onion, although how exactly they rested on each other so smoothly — the theory was that their natural motion in the ether was rotation — remained unknown. Ptolemy and his followers saw that the stars, the planets, the moon, and the sun all appeared to revolve in one direction around the earth in a regular order, and the exceptions — like the occasions when some planets seemed to move backwards in the sky — could be explained away. For over a thousand years, this was the dominant European conception of the universe. The model worked well for most of the questions that were asked of it; it could be used to predict future celestial movements, and as a practical matter, there was little reason to doubt it. In this “normal” stage of science, the mopping-up process was one of refining the data for more accurate predictions in the future.
But there will always be facts and circumstances any given theory cannot explain. “By the early sixteenth century,” Kuhn writes in Structure, “an increasing number of Europe’s best astronomers were recognizing that the astronomical paradigm was failing in application to its own traditional problems” — not to mention outside pressures related to calendar reform and growing medieval criticism of Aristotle. As the unexplainables began to mount, the Ptolemaic paradigm moved into a state of crisis. The Copernican Revolution was the result — a new theoretical framework that could incorporate the contradictory data into a coherent structure by putting the sun at the center of the cosmos. In Kuhn’s view, Copernicus and Galileo were on the tail end of the mopping-up era of Ptolemaic astronomy; Copernicus was not intentionally overthrowing the existing model, but the way he interpreted the data was simply inconsistent with an earth-centered universe. In spite of subsequent efforts by others, such as Tycho Brahe, to synthesize the two theories, they were incompatible.
If a paradigm is “destined to win its fight, the number and strength of the persuasive arguments in its favor will increase.” After a new theory is established, it attracts new supporters, often including younger scientists and perhaps the originating theorist’s students. Meanwhile, Kuhn writes, “those unwilling or unable to accommodate their work” to the new theory “have often simply stayed in the departments of philosophy from which so many of the special sciences have been spawned.” Older scientists have trouble adjusting to the new paradigm, in part because it puts their own work in doubt. Eventually, they are ignored. Kuhn quotes Max Planck, who famously wrote that “a new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it.”
Over time, there again comes to be almost unanimous agreement on the validity of the predominant theory — it achieves paradigmatic status. Scientists tacitly assume agreement on the meanings of technical terms, and develop a shared and specialized technical vocabulary to facilitate data accumulation and organization. They establish journals dedicated to their scientific field, begin to cross-reference one another, and scrutinize each other’s work according to whether or not it conforms to the theory. Their students, likewise, learn to approach problems in the same way they do, much as an apprentice learns from a master. Normal science has resumed and the cycle begins anew.
It was important for Kuhn that his conception of the history and process of science was not the same as that of scientific progress. He maintained that the process of science was similar to biological evolution — not necessarily evolution toward anything, only away from previous error. In this way, Kuhn was rather skeptical about the idea of progress at all. This was the most controversial aspect of his thesis, the one that most concerned the contemporary critics of Structure, on the basis of which they accused — or celebrated — Kuhn as a champion of relativism. As University of Toronto philosophy professor Ian Hacking notes in an introductory essay prepended to the new fiftieth-anniversary edition of Structure, Kuhn’s notion that science moves away from previous error
seems to call in question the overarching notion of science as aiming at the truth about the universe. The thought that there is one and only one complete true account of everything is deep in the Western tradition.... In popular versions of Jewish, Christian, and Muslim cosmology, there is one true and complete account of everything, namely what God knows. (He knows about the death of the least sparrow.)
This image gets transposed to fundamental physics, many of whose practitioners, who might proudly proclaim themselves to be atheists, take for granted that there just is, waiting to be discovered, one full and complete account of nature. If you think that makes sense, then it offers itself as an ideal towards which the sciences are progressing. Hence Kuhn’s progress away from will seem totally misguided.
For Kuhn, a paradigm shift is fundamentally not a scientific but a philosophical change, because the incommensurability of paradigms means that there is no external stance from which one can be shown to be superior to another. Kuhn explains, “The men who called Copernicus mad because he proclaimed that the earth moved ... were not either just wrong or quite wrong. Part of what they meant by ‘earth’ was fixed position. Their earth, at least, could not be moved.” To say that the heliocentric model is true and that the geocentric model is false is to ignore the fact that the two models mean quite different things by the term “earth.”
But science has long been understood as a progressive accumulation of knowledge, not a mere shift from one worldview to another, like the gestalt shift between perceiving a duck or a rabbit in the famous diagram that Kuhn liked to use for illustration. And so Structure was received by many as a denial of the existence of absolute truth. If competing paradigms are both comprehensible, yet are incommensurable, can they not both be true? And if they are both true, who is to be the final arbiter of truth?
Many took Kuhn’s thesis to be a reduction of science to power struggles between competing views. Kuhn himself rejected this interpretation — although his attempts to do so sometimes ended up lending support in form to what they rejected in words: The physicist Freeman Dyson recounts in his 2006 book The Scientist as Rebel that he once attended a conference at which Kuhn’s disciples were repeating these exaggerated interpretations of his thesis, and “Kuhn interrupted them by shouting from the back of the hall with overwhelming volume, ‘One thing you people need to understand: I am not a Kuhnian.’”
Structure had taken on a life of its own. As Kuhn stated in a 1991 interview with science journalist John Horgan, “For Christ’s sake, if I had my choice of having written the book or not having written it, I would choose to have written it. But there have certainly been aspects involving considerable upset about the response to it.” As Hacking notes, a number of critics argued that the first edition was terribly vague. One reviewer in 1966 criticized Kuhn for using the word “paradigm” in twenty-one different senses in the book. Hacking also notes the strikingly ambivalent language that Kuhn often employs, using phrases like “we may want to say” and “[this] may make us wish to say” instead of offering assertions outright, leaving him open to criticism that he was unclear or hedging his argument.
Kuhn was also criticized for building a wall between basic science (that is, science conducted for its own sake) and applied science (that is, science aimed at achieving specific, often socially important, goals). Against Bacon’s dictum that the proper aim of science is “the relief of man’s estate,” Kuhn argued that scientists in the “normal” stage must ignore “socially important problems” and should instead just focus on solving puzzles within the paradigm. In other words, problems that must be solved to improve human life but cannot be solved by the methods of a given paradigm are a distraction from the work necessary during the “normal” phase of science. This suggests that scientists must cloister themselves, at least to an extent, in order to make progress within the confines of their paradigm. Moreover, as Steve Fuller, professor of sociology at the University of Warwick, notes in Thomas Kuhn: A Philosophical History for Our Times (2000), Kuhn felt that a paradigm should be “sheltered from relentless criticism in its early stages.” So not only can a paradigm “insulate the community” of scientists from the demands of society, in Kuhn’s words, but scientists must in turn insulate the paradigm from harsh criticism.
Kuhn was left having to do some “mopping up” of his own, which he attempted in the years after Structure was published. For example, in a 1973 lecture (collected in The Essential Tension), Kuhn sought to counter the charge that he was a relativist. He argued that some theories and paradigms are better than others, based on five rational criteria: accuracy, consistency, scope, simplicity, and fruitfulness. Much later, in the 1991 interview with Horgan, Kuhn insisted
that he did not mean to be condescending by using terms such as “mopping up” or “puzzle-solving” to describe what most scientists do. “It was meant to be descriptive.” He ruminated a bit. “Maybe I should have said more about the glories that result from puzzle solving, but I thought I was doing that.”
Continuity in a paradigm is not necessarily a bad thing, Kuhn explained in his later years; indeed, it enables scientists to organize the greater and greater amounts of knowledge that grow through the cumulative process of scientific inquiry.
Criticisms aside, whether Kuhn even deserves full credit for the ideas put forth in his seminal work has rightly been questioned. As early as the mid-1940s, the Hungarian-British scientist-philosopher Michael Polanyi had published very similar ideas about the significance of scientists’ personal commitments to a framework of beliefs and the role of learning by example in scientific training. As Kuhn later admitted, he became familiar with those works during his studies under Conant, and through a talk that Polanyi delivered and Kuhn attended in 1958. Polanyi’s most extensive work on the subject, Personal Knowledge, was published the same year. In the early 1960s, Kuhn explicitly described his own thought as closely aligned with that of Polanyi, but he did not mention his name in Structure, except for a brief footnote in the first edition and an additional mention in the 1970 second edition. When Polanyi struggled to receive recognition for his thoughts independently of Kuhn’s, Kuhn admitted in private correspondence that he might owe “a major debt” to the older scholar. But shortly before Kuhn’s death (and long after Polanyi’s), he revised those concessions and claimed that Polanyi had not in fact had a great influence on him, and that he had delayed reading Personal Knowledge until after finishing Structure out of a fear that he “would have to go back to first principles and start over again, and I wasn’t going to do that.”
Despite the fact that Polanyi’s work preceded Kuhn’s and was more philosophically rigorous, it was Kuhn whose book became a bestseller and whose terminology entered contemporary parlance. Steve Fuller notes “many Kuhn-like ideas were ‘in the air’ both before and during the time Structure was written,” often from better-known philosophers. Perhaps Kuhn simply hit not only on the right ideas, but more importantly on the right distillation of them, and the right terminology, at the right time.
The reader of Kuhn’s work is struck by his extensive focus on the physical sciences, and the dearth of attention to biology and the social sciences. To some extent, this is hardly surprising, given Kuhn’s background as a theoretical physicist. But it is also true that the public prominence of the physical sciences in the first half of the twentieth century and the early periods of the Cold War provided a unique window into the community of scientists and the patterns by which scientific theory develops.
What Kuhn noticed was that competing paradigms in physics never coexist for very long, and that progress in normal science occurs precisely when scientists work within only one paradigm. But the social sciences are a special kind of science, because they cannot set aside fundamental philosophical concerns as easily as the physical sciences. Moreover, the social sciences are defined by multiple paradigms that are sometimes mutually contradictory. Kuhn pointed out that some social sciences may never be able to enter the paradigmatic stage of normal science for that reason. Unlike physical scientists, social scientists generally cannot in the face of a disagreement revert to an agreed-upon exemplary solution to a problem; their controversies are precisely about what the exemplar ought to be. The social sciences are grounded on competing views of what the world is and should be: certain basic concepts, such as “the state,” “institutions,” or “identity,” cannot be defined by consensus. Competing paradigms — such as those of Marxist, Keynesian, and Hayekian economists — will continue to coexist. So there necessarily will be limits to what the social sciences can achieve, since the lack of unanimity inevitably means that arguments turn on questions of theory, rather than on the application of theory. In addition, since it is more difficult in the social sciences to carry out true experiments and test counterfactuals, the social sciences are inhibited from closely following the model of the physical sciences. And the passage of time is a relevant factor. As social scientist Wolfgang Streeck explains, “What has historically happened cannot be undone — which also means that there can never be an exact return to a past condition, as the memory of what happened in between will always be present. A military dictatorship that has returned after having overthrown a democracy is not the same as a military dictatorship following, say, a foreign occupation.”
Despite these criticisms, many social scientists embraced — or perhaps appropriated — Kuhn’s thesis. It enabled them to elevate the status of their work. The social sciences could never hope to meet the high standards of empirical experimentation and verifiability that the influential school of thought called positivism demanded of the sciences. But Kuhn proposed a different standard, by which science is actually defined by a shared commitment among scientists to a paradigm wherein they refine and apply their theories. Although Kuhn himself denied the social sciences the status of paradigmatic science because of their lack of consensus on a dominant paradigm, social scientists argued that his thesis could still apply to each of those competing paradigms individually. This allowed social scientists to claim that their work was scientific in much the way Kuhn described physics to be.
Disagreements over what counts as science, and how society can hold scientists in any field accountable to a standard of truth, became most heated in the aftermath of a debate between Kuhn and the philosopher Karl Popper. The now-famous debate between Kuhn and the older and far more seasoned Popper took place in London on July 13, 1965. Although no particularly significant exchange between the two took place either before or after this encounter, their disagreement is commonly featured in textbooks and college courses as a major event in the development of the philosophy of science in the twentieth century. The popular view of the conflict, advanced primarily by supporters of Kuhn — the supposed winner of the debate — is that Kuhn was a revolutionary in his field who championed free inquiry, in opposition to the strict empirical and logical standards of the positivists. Popper, on the other hand, is often taken to be a quasi-positivist defender of the authority of science. But, as Steve Fuller argues in his 2003 book Kuhn vs. Popper: The Struggle for the Soul of Science, this popular conception is not only a caricature but an inversion of the truth about these two thinkers.
Popper held science to a higher standard than did Kuhn. Popper’s famous proposition was that a seemingly scientific claim, in order to be actually scientific, must be falsifiable, meaning that it is possible to devise an experiment under which the claim could be disproved. A classic example of a falsifiable science is Einsteinian physics, which made specific, well-defined predictions that could be tested through observation — as opposed to, say, Freudian psychology, which did not make well-defined predictions and proved adept at reformulating its explanations to fit observations, changing the details so as to salvage the theory.
By defining science in terms of rational criteria of empirical observation, Popper seemed to place scientific tools equally in the hands of philosophers of science, skeptics, and common persons who needed some means to question scientists who tried to back their claims by appealing to their own scientific authority. For Popper, novel scientific theories should be greeted with skepticism from the outset. But for Kuhn, one of the key characteristics of the healthy functioning of the community of scientists is its practice of singling out a successful theory from its competitors — without concern for its social implications, and in isolation from public scrutiny.
In a sense, Popper and Kuhn each saw himself as a defender of free inquiry — but their notions of free inquiry were fundamentally opposed. Kuhn’s thesis reserved free inquiry specifically for scientists, by considering legitimate whatever paradigm scientists happened to agree upon at a given time. But Popper, given his longstanding concern for the open society, thought that this idea marginalized the role of skepticism, only regarding it as important at the point of crisis, and that it thus undermined free inquiry as a methodological commitment to truth.
Popper particularly targeted the tendency among some influential social scientists to advance their political and social theories without revealing their philosophical underpinnings. Some of the great catastrophes of the twentieth century resulted from the widespread acceptance of theories that reduced society to a machine that could be steered by competent authorities. Popper’s falsification principle was meant in part to moderate the authority of social science, which — to the extent that it attempted to predict and regulate society — could lead to a passive public and technocratic governance at best, or modern serfdom and totalitarianism at worst. Kuhn himself was hardly a great booster of the social sciences. But the application of Kuhn’s ideas to social science seemed to imply that a theory, however false, should be allowed to dominate the opinion of scientists and the public until it buckles under the weight of its own flaws.
For their part, Kuhn and his followers argued that Popperian falsifiability was an impossible and historically unrealistic standard for science, and noted that any paradigm has at least a few anomalies. In fact, these anomalies are critical for determining which puzzles normal science seeks to solve. Popper’s standard, on the other hand, would seem to require scientists to be forever preoccupied with metaphysical, pre-paradigmatic arguments. But in a sense, this was the point: Popper’s insistence on falsification was precisely meant to sustain the need of the social sciences to focus on questions of first principle, so as to avoid the rise of any new dangerous philosophies falsely carrying the banner of science.
While the physical sciences were the most prominent in the public mind when Kuhn was writing Structure in the early 1960s, today biology is in ascendance. It is striking, as Hacking notes in his introductory essay, that Kuhn does not explore whether Darwin’s revolution fits within his thesis. It is far from clear that Kuhn’s thesis can adequately account for not only Darwin’s revolution but also cell theory, Mendelian or molecular genetics, or many of the other major developments in the history of biology.
The differences between physics and biology — their varying methods and metaphors — matter immensely for the way we understand ourselves and our world. Beginning in the mid-nineteenth century, the assumptions of modern science began to play a much more prominent role in political philosophy. A scientific way of thinking permeated the writings of Auguste Comte and Karl Marx, and by the end of the century, with the work of Max Weber and Émile Durkheim, the era of social science had begun in earnest. Many of the early social scientists came to view society in terms of contemporary physics; they adopted the Enlightenment belief in science as the source of progress, and considered physics the archetypical science. They understood society as a mechanism that could be engineered and adjusted. These early social scientists began to deem philosophical questions irrelevant or even inappropriate to their work, which instead became about how the mechanism of society operated and how it could be fixed. The preeminence of physics and mechanistic thinking was passed down through generations of social scientists, with qualitative characterization considered far less valuable and less “scientific” than quantitative investigations. Major social scientific theories, from behaviorism to functionalism to constructivism and beyond, tacitly think of man and society as machines and systems.
Given the dominance of physics and mechanism in social scientific thinking, the fact that Kuhn based his thesis almost exclusively on physics gave social scientists reason to consider their philosophical commitments legitimate. They saw Structure as a confirmation of their entire approach.
But in the half century since Kuhn wrote his book, biology has taken the place of physics as the dominant science — and so in the social sciences, the conception of society as a machine has gone out of vogue. Social scientists have increasingly turned to biology and ecology for possible analogies on which to build their social theories; organisms are supplanting machines as the guiding metaphor for social life. In 1991, the Journal of Evolutionary Economics was launched with an eye toward advancing a Darwinian understanding of economics, complete with genotypes and phenotypes. The justification for this kind of model is straightforward: one of the biggest difficulties for economists is the dynamism of any given economy. As Joseph Schumpeter rightly pointed out, economies change; they evolve, rather than staying fixed like a Newtonian machine with merely moving parts. Since machines do not change, whereas societies do, it is reasonable to move the study of economics away from the metaphor of systems and toward that of organisms.
A recent paper in the journal Theory in Biosciences perfectly encapsulates the desire for a more biological perspective in the social sciences, arguing for “Taking Evolution Seriously in Political Science.” The paper outlines the deterministic dangers in the view of social systems as Newtonian machines, as well as the problems posed by the reductionist belief that elements of social systems can be catalogued and analyzed. By contrast, the paper argues that approaching social sciences from an evolutionary perspective is more appropriate philosophically, as well as more effective for scientific explanation. This approach allows us to examine the dynamic nature of social changes and to explain more consistently which phenomena last, which disappear, and which are modified, while still confronting persistent questions, such as why particular institutions change.
This shift from a mechanistic to an evolutionary model seems like a step in the right direction. The new model aims less at predicting the future and derives its strength instead from its apparent ability to explain a wide array of phenomena. It may be better equipped than its predecessor to account for the frequent changes in the stability of modern economies. Furthermore, a biological model can correctly recognize humans as purposeful and creative beings, whereas mechanistic models reduce people to objects that merely react to outside stimuli.
Nevertheless, a biological approach to the social sciences is reductionistic in its own way, and limited in what it can explain. Biological sciences, much like physical sciences, have been stripped of philosophical concerns, of questions regarding the soul or the meaning of life, which have been pushed off to the separate disciplines of philosophy and theology. Much of modern biology seeks to emulate physics by reducing the human organism to a complex machine: thinking becomes merely chemical potentials and electric bursts, interest and motivation become mere drives to perpetuate the genome, and love becomes little more than an illusion. Such accounts can become problematic if we consider them the only ways to understand human nature — and not least because our answers to these non-scientific questions are at the foundation of how we view the world, and so also of how we interpret scientific findings.
Every model that social scientists use, whether it is derived from physics, biology, or ecology, embodies certain philosophical assumptions about human nature and about the optimal functioning of a society. Viewing social relations as movements of a clock implies a set of beliefs quite unlike those of perceiving the same relations as functions of a cell. Since the work of social scientists is so closely tied to these philosophical concerns on which we tend to disagree, we usually see a number of models compete for acceptance at the same time. And because these metaphysical assumptions are usually unspoken, they set the stage for the competition between models to take the place of what was once an explicit competition between differing philosophical accounts of the world — only now while largely denying that any philosophical debate is taking place.
Perhaps the greatest limitation in the social sciences is that, however good a theory’s explanatory abilities, it can say very little about whether or not a particular action ought to be performed in order to bring about social change. Since human relations are the object of the social sciences, questions of ethics — about whether or not a change should be induced, who should be responsible for it, and how it should occur — must always be at the forefront. It may be desirable, for instance, to reduce alcoholism; but it does not follow that all actors, such as churches, governments, businesses, public and private mental-health experts, and the pressure of social norms, are equally responsible for undertaking the task, or can equally do so without altering society in other ways. Decisions of this sort inevitably depend on our views of the proper function of institutions and on what constitutes the well-being of society.
Regardless of whether we view society as akin to a physical machine, or a biosphere, or an organism, it remains crucial that we recognize the limitations of each model. But what we learn from Kuhn is that any science that separates itself from its philosophical bases renders itself incapable of addressing such questions even within its own limited scope.
The political philosopher Eric Voegelin, in his 1952 book The New Science of Politics, provides a helpful treatment on this point in his assessment of the fifteenth-century English judge Sir John Fortescue. Long before the current trend toward the biological sciences, Fortescue used a biological metaphor, arguing, as Voegelin writes, “that a realm must have a ruler like a body a head,” and that a political community grows into an articulate, defined body as though out of an embryo. Rulers were necessary because otherwise the community would be, in Voegelin’s words, “acephalus, headless, the trunk of a body without a head.” Yet Fortescue recognized that the analogy between an organic body and a political realm was limited: by itself, it would have provided an incomplete view of both the individual and society. He therefore introduced into his political theory the Christian notion of a corpus mysticum: society is held together not only by a head but also by an inner spiritual bond, a heart that nourishes the head as well as the rest of the body. As Voegelin puts it, however, this heart “does not serve as the identification of some member of a society with a corresponding organ of the body, but, on the contrary, it strives to show that the animating center of a social body is not to be found in any of its human members ... but is the intangible living center of the realm as a whole.”
By extending the analogy in this way, Fortescue went beyond what we now recognize as the limits of biology, and even of political science as such, in the attempt to capture a fuller sense of human nature and of a political body. Neither biology nor political science by itself would have been capable of producing any such holistic image of society. Most significantly, Fortescue understood that his borrowing from biology was merely metaphorical — and so avoided the mistake that plagues the social sciences today, of treating what is really political theory as straightforward scientific truth.
Value judgments are always at the core of the social sciences. “In the end,” wrote Irving Kristol, “the only authentic criterion for judging any economic or political system, or any set of social institutions, is this: what kind of people emerge from them?” And precisely because we differ on what kind of people should emerge from our institutions, our scientific judgments about them are inevitably tied to our value commitments.
But this is not to say that those values, or the scientific work that rests on them, cannot be publicly debated according to recognized standards. Thomas Kuhn’s thesis has often been taken to mean that choices between competing theories or paradigms are arbitrary, merely a matter of subjective taste. As noted earlier, Kuhn challenged the claim that he was a relativist in a 1973 lecture, offering a list of five standards by which we may defend the superiority of one theory over another: accuracy, consistency, scope, simplicity, and fruitfulness. What these criteria precisely mean, how they apply to a given theory, and how they rank in priority are themselves questions subject to dispute by scientists committed to opposing theories. But it is the existence of recognized standards, even if the standards are open to debate, that makes any judgment available for public discussion. And we may add that even if social scientists recognize the same standards, debates over their meaning, application, and priority are harder to settle than in physics, because the social sciences are intertwined with philosophical questions that are themselves concerned with what our standards of rationality ought to be.
The lasting value of Kuhn’s thesis in The Structure of Scientific Revolutions is that it reminds us that any science, however apparently purified of the taint of philosophical speculation, is nevertheless embedded in a philosophical framework, and that the great success of physics and biology is due not to their actual independence from philosophy but rather to how thoroughly physicists and biologists have been able to dismiss it in their everyday work. Those who are inclined to take this dismissal as meaning that philosophy is dead altogether, or has been replaced by science, would do well to recognize how forcefully Kuhn’s thesis opposes that stance: history has repeatedly demonstrated that periods of progress in normal science, when philosophy seems to be moot, may be long and steady, but they lead to a time when non-scientific, philosophical questions again become paramount.
One persistent trouble with Kuhn’s classic work is that its narrow focus left too many questions unanswered, including the question not just of what science is but of what science should be. Here many other philosophers of science, including Popper, offered not just descriptions of science but powerful prescriptions for it. Kuhn’s work is largely silent on the value of science and the well-being of society, and entirely silent on the wrongheadedness of blindly accepting scientific authority while discarding the philosophical questions that must always come first, even when we pretend otherwise.
Although Kuhn, who died in 1996, was sometimes stung by the criticism he received, he understood the importance of all the poking and prodding. In his 1973 lecture, he argued that “scientists may always be asked to explain their choices, to exhibit the bases of their judgments. Such judgments are eminently discussable, and the man who refuses to discuss his own cannot expect to be taken seriously.” Even the great Einstein, who failed to give a full defense of his skepticism about the fundamental randomness posited by quantum theory, became somewhat marginalized later in his career. Kuhn deserves the respect of the rigorous criticism that has come his way. It is fitting that his provocative thesis has faced blistering scrutiny, and remarkable that it has survived to instruct and vex us five decades later.


Matthew C. Rees is a graduate student in International Relations and European Studies at Masaryk University in Brno, Czech Republic.