Saturday, November 30, 2013

How I get all my students to be good at math

Quartz.com published a nice article about how to get everyone to be good at math. One important observation is that test-taking can undermine math education.

"As a mathematics educator for the last seven years, I can attest that most folks believe they either are or are not “math people.” And that idea of innate math ability is very harmful to both those who believe they possess it and to those who believe they don’t. Furthermore, our new era of accountability (read test-taking) perpetuates this fallacy and clouds the message we want our students to receive in math class.
Not all the same: Algebra is not the same as geometry
There is most certainly no such ability that allows some students to pass algebra and others to fail. This argument is made forcefully and articulately in Noah Smith and Miles Kimball’s recent article in Quartz, so I won’t rehash the point except to say that math draws on a huge range of cognitive processes. Those who are weak at arithmetic can be very good at abstract mathematics. Many students who hate algebra love geometry. Each student I have taught, including those designated as gifted and those diagnosed with severe learning difficulties, has had strengths that helped him or her learn some topics in mathematics easily, and has had weaknesses that made learning other topics in math incredibly difficult. I tell all students alike that math requires perseverance and a willingness to take risks and make mistakes. These qualities are much more predictive of mathematical success than innate ability (if such a thing exists). Students often place themselves in the “not math people” category before they even know that each math teacher teaches it differently, or that different topics in math draw on different skills.
Overconfidence is just as harmful
How else is the belief harmful? For those who believe they are not “math people,” it makes them feel helpless. Math requires effort, patience and time. You have to believe that eventually, you will be able to understand. You have to sort through what you understand and what you don’t. You then have to formulate a good question and be courageous enough to ask the teacher to answer the question in front of a classroom, admitting that you don’t understand something in front of your peers—some of whom groan and say, “But it’s sooo easy. How come you don’t get it!?” It’s so much easier just to say, “Hey, I wasn’t born with this mystical mathematics ability, so it’s not my fault.” But you would be wrong.
Math can make even the smartest people feel dumb and believing in an intrinsic mathematics ability isolates the stupidity. It’s not me who’s stupid, it’s just one part of my brain that’s stupid—and there’s nothing I can do about that part of my brain so I don’t need to humiliate myself in class tomorrow by asking the question I want to ask. People with this mindset keep getting further and further behind in math because if they don’t feel themselves capable of learning something, they’ve let themselves off the hook and invest their mental energy in other realms.
Further, those who think of themselves as “math people” can suffer from overconfidence. Recently I’ve been seeing a lot of students (often male) claiming they know and understand everything so they don’t need to practice or take notes. Their homework and test grades are still abysmal. I believe some of this overconfidence comes from the testing they’ve had to endure. These kids are bright and have learned test-taking skills easily, so their standardized test scores are consistently high. But truly understanding math requires effort and perseverance, and to these students that effort isn’t worth the time if they can still perform well on tests. I have four students in one of my classes who frequently cite their high test scores as an excuse for not doing homework, not taking notes, or performing poorly on free-response tests. They believe themselves “math people” because they’ve been successful on the one measure the state cares about, and they see no need to put more effort in.
Do we judge? Or do we educate?
Standardized testing helps perpetuate this fallacy on multiple fronts. There are two diametrically opposed goals we as a society have for public education, and because we can’t choose which goal is more worthy of attainment, we’ve been bogged down in ideological battles. Schools can be about sorting, or about educating. I think right now we want them to be about both, but this is impossible. Schools focused on sorting are obsessed with fairness. Only students who produce “A” work (as arbitrarily decided by the teacher or the state) get “A’s.” Honor roll highlights the best students, standardized tests rank schools and designate children as “below basic,” “basic,” “proficient,” or “advanced.” These rankings make it easier for colleges and employers to sort out the students they want from those they don’t. In a world where schools are designed to sort out those who are going to college, those who are going into vocations, those who are going into unskilled labor and those who are going to prison, belief in innate math ability is appropriate. Those who have it are sorted into more difficult math classes; those who don’t have it stop taking math as soon as possible. We test, judge and sort. Theoretically, the fear that they will be sorted into an undesirable category is what keeps some students motivated.
Our educational goal should be to help all students learn as much and as deeply as they possibly can, and to instill in them a love of learning. In her new book, Reign of Error, Diane Ravitch cites a study conducted in the 1980s by Richard J. Murnane and David K. Cohen. Teachers given poor evaluations performed more poorly, while those given positive evaluations, even if undeserved, worked harder and were more willing to seek help for their problems. Though this study was about teachers, I don’t see why it can’t apply to students as well. When I give students a lot of credit for persevering, showing them early in the term that their efforts are valuable, I get higher quality work from them as the term progresses because they don’t give up when things get hard. Using grades to encourage perseverance rather than to sort or judge means that I don’t need to inflate their grades at the end, because the accumulated effort they’ve put in has allowed them to truly master the material.
There’s a lot of research (see Daniel Pink’s TED talk on the puzzle of motivation) that says people perform poorly on difficult cognitive tasks when there are extrinsic rewards for the successful accomplishment of those tasks. Students tend to have more difficulty thinking in math when they’re under time constraints, extrinsic pressure or are fearful of being judged. My goal is to remove these pressures to help students perform better.
Here’s an example demonstrating that rewarding students based on effort encourages them to continue to work hard. I taught a student for three years, through pre-algebra, algebra and geometry. She struggled in pre-algebra and algebra 1, almost always failing tests the first time and only passing thanks to credit for consistently doing homework, coming in for tutoring, and doing test corrections. She slowly increased in confidence and became a very capable math student in geometry (the best in the class at proofs). Here’s an e-mail she wrote me after I left:
I wanted to thank you. Even though you are not my teacher anymore, you still help me all the time. You wrote in my yearbook to remember that I am good at math, and I always go back to that and it actually helps me when I am stressed about algebra 2. Whenever I think about it, I feel as though I can push through and actually do it. I am doing pretty well in it so far and I owe part of that to you.
She continues to receive low standardized test scores because she performs poorly under pressure.
In math especially, it’s easier to judge than to coax and reassure. Because there are always right and wrong answers, it’s easy for standardized tests to sort students into those who can get the right answers and those who can’t. Standardized tests disregard the effort students have exerted and deemphasize the processes of math. Students are left feeling helpless if they can’t achieve. These tests judge students based on an arbitrary benchmark set by state politicians who have little understanding of what developmentally appropriate skills truly are. Like novice chess players, students learn the rules of math and combine and manipulate them to learn how to play the game. Like a novice chess player, a math student will learn just as much if not more from her failures as from her successes. Focusing on the process of math helps both low-achieving and high-achieving students learn true mathematical logic and not get discouraged because they can’t reach a right answer, or bored because reaching the right answer is too easy. Many students know how to get the right answers on standardized tests but don’t know how to think about math.
Parents, don’t be a part of the problem
What can we do about helping students focus more on the process of math and on persistence, and less on labeling themselves as “math people” or “non-math people”? First of all, we adults need to model different behavior than I’ve seen demonstrated in my years as a teacher. The myth of innate math ability is perpetuated from generation to generation. When I tell adults I’m a math teacher, 90% of the time I get the comment, “I’m not a math person,” accompanied by a look of sheer terror and either tentative stories of math humiliation or an abrupt change of subject. Once, a visiting school guidance counselor told my students, to my horror, that he hated math and that they shouldn’t listen to me when I told them math was important. They would never need it in the “real world.” I’ve been writing this article in coffee shops, occasionally making comments to my husband. On three successive visits, strangers overheard my comments to him, asked me what I was doing, then told me that there definitely are “non-math people” and that I was looking at one. Once I elaborated my position, though, all three changed their minds and admitted the importance of math in their own lives (one was an industrial architect).
My students’ parents also believe in this fallacy and sometimes perpetuate an anti-math attitude. They don’t use math at work, can’t help their children with their math homework, and are themselves convinced that they’re “not math people.” Furthermore, because these adults have survived without math, they tell their children that math isn’t necessary in the workplace. These adults have made their choices. They chose, or were forced into, careers where math wasn’t required, and so they convince their children that only the “math people” will ever get anything out of a comprehensive mathematics education. Our job as role models is to give our students the freedom to make their own choices, including lucrative choices in fields that require math. In my education courses, we were always told that modeling is more powerful than teaching. Adults are modeling this self-defeating attitude.
If at first you try, you will succeed
In the classroom, we as teachers need to remove some of the stress we place on students and give them the freedom to fail. So many crossword puzzle enthusiasts (my whole family) look forward to checking their solutions against the key printed in tomorrow’s newspaper, while so many students dread seeing their returned math test. Why? Because the crossword puzzle enthusiast knows he will learn more about doing crossword puzzles if he checks the key carefully, whereas the math student sees the returned test as a judgment about his intelligence. Students need to see that the attempt is just as valuable as the result. In my classroom, I award students credit for all problems they attempt, regardless of whether they get the correct answer. I refuse to let students turn in tests until every question is answered and double-checked. I penalize late work, but I always accept it, because what matters is not when they learn but that they learn. I admit my own mistakes and reward students when they catch me in one. Finally, I find it productive to simply acknowledge that learning math can be challenging; telling a struggling student that a problem is easy is one of the most dispiriting things you can say. Education needs to be about personal growth and teaching students to enjoy and revel in their knowledge, not about grooming and sorting students for a job market that may be entirely different in 10 years. If students learn confidence, flexibility and that they’re good at learning, they’ll be ready for anything."

Tuesday, August 13, 2013

Exercise: 30 min. or 30 sec.?

The BBC science site has a provocative story on exercise.  The gist:
=======================
According to Dr Stuart Gray from the University of Aberdeen's musculoskeletal research programme, a key factor in reducing the likelihood of early death from cardiovascular disease could be high intensity exercise.

"The benefits do seem to be quite dramatic," he says.

He admits, though, that many in the medical establishment are still promoting moderate intensity exercise.

Gray's study has shown that short bursts of activity, such as sprinting or pedalling all-out on an exercise bike for as little as 30 seconds, result in the body getting rid of fat in the blood faster than exercising at moderate intensity, such as taking a brisk walk.

And getting rid of fat in the blood is important as it reduces the chances of suffering a heart attack.
========================
Whole thing: http://www.bbc.co.uk/news/magazine-21160526
# # #

Autopilot engaged

Nervous about flying?  Try to avoid planes with human pilots!  Here is a snippet from the BBC website's technology area.
--ww--

Aircraft manufacturer Airbus recently released its view of the future of aviation towards 2050 and beyond, and one of the things it stressed was the benefit of planes that can fly themselves. In an extreme proposal, it suggests passenger planes might fly together in flocks, which can result in huge energy savings. They would keep in sync by constantly monitoring their position relative to one another.
While everyone seems confident that the technical challenges of such visions can be overcome, there is perhaps one more significant hurdle to overcome - persuading the general public that a plane without a pilot is safe.
On that point, Professor Cummings says the data is increasingly in favour of unmanned systems. “About three years ago UAVs became safer than general aviation, meaning that more general aviation planes are crashing than UAVs, per 100,000 flight hours,” she says. “So UAVs are actually safer than a weekend pilot, flying a small plane.”
That may not be a huge surprise. But what is perhaps more telling is that last year UAVs became safer than highly trained military fighters and bombers. “I knew that was coming, and it’s one of the reasons I jumped into this field and left commercial piloting and military piloting behind,” says Prof Cummings.
# # #

Is America anti-intellectual?

I am attaching a long book review, done by an editor of the LA Review of Books, of a book that addresses the question of America's anti-intellectualism.  The reviewer gives a below-average grade to the book, partly because he seems to know more about the topic than the author.
It's a nice read, especially for anyone who grew up while Einstein was a rock star.  The bottom line is that America both praises and belittles intellectuals.  These days, Americans seem to reserve scorn for humanities intellectuals and praise for sci/tech intellectuals.  The Occupy movement is singled out as evidence that liberals can be especially anti-intellectual.
--ww--


======================

Is America anti-intellectual? The jury is still out. One could make equally plausible cases for our country as a hotbed of hostility to organized intelligence and as a sort of paradise for the cleverest, a place that elevates intellectual sophistication (especially when it has economic or technological applications) above basic moral decency. We oscillate wildly between demonizing our intellectuals and deifying them; they appear to us, in turn, as nuisances, threats, and saviors. We cut their funding and then study how their brains function. We trust them with our economy, our climate, our media and our institutions, then rage against them for their failures—and then trust them all over again.
Of course, much depends on what is meant by “intellectual.” The term, which originated in France and entered the English language around the time of the Dreyfus Affair, is notoriously vague and unstable. Though in its most neutral sense it describes only a tendency toward speculative thought, it is very quickly made into a social category with determinate characteristics. Put a novelist, a philosopher, a physicist, a political analyst and a computer programmer in a room together and they’re likely to discover as many differences as similarities—provided they can understand each other at all—but all can be identified (for ease of condemnation, if nothing else) under this single heading. The very notion of “the intellectual” depends on the idea that, over and above what particular people happen to know or do, there is a social category that we can enshrine, or hold culpable. Like the notion of “the aristocrat” that it has in some ways replaced, the intellectual provides a screen on which to project aspiration and hatred, idealism and cynicism.
In Inventing the Egghead: The Battle over Brainpower in American Culture, the cultural historian Aaron Lecklider explores how “Americans who were not part of the traditional intellectual class negotiated the complicated politics of intelligence within an accelerating mass culture.” Here he follows the Italian Marxist critic Antonio Gramsci, who theorized a distinction between “traditional” (bourgeois) and “organic” (working-class) intellectuals, and works in the vein of the British cultural historians Richard Hoggart (The Uses of Literacy) and Raymond Williams (The Long Revolution). Against the received idea, which he associates with historians like Richard Hofstadter and Christopher Lasch, that “the masses” have always been reflexively anti-intellectual, Lecklider argues that, throughout the twentieth century, intelligence—or “brainpower,” as he prefers to say—was in fact highly valued by working-class people in a variety of contexts.
Lecklider’s agenda is a broadly populist one: he wants to defend the American working class from allegations of anti-intellectualism, and to redescribe what might look like hostility or resentment as a form of fascinated engagement. His story begins around the turn of the century, with what he calls a “mainstreaming of intelligence”: a mass movement, mediated by popular culture, to devalue traditional intellectuals and promote organic ones. “Cultural texts consumed by millions of ordinary men and women between 1900 and 1960 suggested all Americans were intellectually gifted,” Lecklider writes, “while deflating the presumptuous grandstanding of the traditional intellectual elite.” (In this, organic intellectuals have not operated so differently from traditional intellectuals, who have never been above undermining the stature of others in order to build up their own.)
In the book’s compelling first chapter, Lecklider discusses the attempts of Coney Island theme parks like Dreamland and Luna Park to bolster their educational aspects. (Dreamland, for instance, hosted the experimental baby incubators of Dr. Martin A. Couney in 1903, and in 1909 Luna Park was “disrobed of its sugar coating” and became the Luna Park Institute of Science.) At around the same time, the Chautauqua circuit, a 1904 extension of the popular adult-education gatherings held as far back as 1874 in upstate New York, provided a sort of fin-de-siècle equivalent to today’s TED Talks. Chautauqua lectures, described by Theodore Roosevelt as “the most American thing in America,” were held on topics of political and scientific interest, with speakers ranging from William Jennings Bryan to Mascot the educated horse. “Audiences for the Chautauquas were comprised of women and men without any particular standing as intellectuals or claims to expertise,” Lecklider writes, “and though they were occasionally glossed with the narrative sheen of social uplift, the assembly programs were designed to offer education to undistinguished audiences and to imbue mass culture with a gleam of shimmering smartness.” At the same time, brainpower was being valorized and mobilized by working-class labor organizers, and by African-American leaders involved in the 1920s “New Negro” movement.
Yet most representations of traditional intellectuals consumed by working-class people were dismissive or derisive. Much of Inventing the Egghead deals with images of scientists, college students, professors, and other conspicuously intellectual types gleaned from the popular culture of the first half of the twentieth century. Lecklider finds that before 1920, when “less than 5 percent of eighteen- to twenty-four-year-olds were enrolled in college … college students and college life were largely maligned within popular culture representations.” Tin Pan Alley tunes like “There’s a Lot of Things You Never Learn at School” (1902) and vaudeville skits like The Freshman and the Sophomore (1907) routinely mocked the uselessness of academic knowledge, even suggesting that the attainment of higher education would be likely to depress one’s earning power. “The brainpower of the working class was being built up even as the value of education was being knocked down,” Lecklider writes, “and a critical tool for evening social inequalities was disparaged as frivolous but also accessible”:
Education’s value was reduced to conspicuous superficiality, and it took the intelligence of the working class to uncover this terrible ruse. At the same time, seeing through the mystique of education’s allure made brainpower into a common stock, a step that had the effect of rendering education accessible and desirable.
Lecklider realizes that there’s something self-defeating about this dynamic, in which brainpower is prized while education is despised: “Representations of college students as privileged, decadent, unambitious, intellectually challenged, and frivolous,” he writes, “thus walked a fine line between deriding the unmistakable class privilege implicit in the nation’s unequal educational system and praising the real social benefits of higher education that had the potential to redress precisely this inequality.” As long as the project of higher education was seen as nothing but a “terrible ruse,” campaigns for a more equitable distribution of intellectual resources were virtually dead in the water.
While intellectuals were often satirized as sexless or antisocial, there was also a strong strain of class envy in popular denunciations of higher education, often expressed as sexual jealousy. College professors and students alike were assumed to be priapic hedonists, and were presented as such in popular songs like “Watch the Professor” — a sexualization of the educational experience which proceeds apace today, the horny teacher still being a master trope of modern porn. Such songs, Lecklider suggests, served a dual purpose: they provided an imaginary resolution of class conflict along the axis of male solidarity (all men, no matter how highly educated, are really only interested in one thing), as well as giving expression to a sense of embittered social frustration (those college guys get all the girls).
This ambiguity as to whether intellectuals were impotent wimps or powerful predators persists in representations of famous intellectuals of the time, as Lecklider shows in his fascinating discussion of the figure of Albert Einstein, whose theory of relativity was an improbable cause célèbre in 1920s America. Einstein, Lecklider points out, “was rarely depicted alongside other scientists. Instead he was represented accompanying political figures, ordinary women and men, or, most often, alone.” He was commonly depicted as an American immigrant, and as a Swiss Jew rather than a German, and he “was made to seem somehow feminine in the manly world of physics,” his femininity somehow defusing popular anxiety about the potential threat posed by his scientific brilliance. Einstein’s disheveled hairstyle and cultured tastes got him tagged immediately as a “long-hair,” a term that, in the twenties, connoted both effeminacy and excessive sophistication.
The most provocative thing about Einstein, though, was neither his foreignness nor his femininity but the inscrutability of his theory. “The urgent desire among nonspecialized audiences to understand relativity was in part driven by a populist impulse toward obscuring the division between the educated elite and the common masses,” Lecklider writes. “Particularly troubling was the fact that Einstein’s theory of relativity openly defied common sense; it implied that ordinary people’s observations were always wrong. … Ordinary Americans were instructed that they could not believe their own eyes.” Newspaper accounts of Einstein’s visits to America tried to assuage this insecurity by emphasizing that elite scientists, too, were “agog” at the theory of relativity.
Ultimately, it was the progress of science, of course — and not the pure theoretical science of relativity, but practical applied science, especially when those applications were to business or industry — that changed ordinary Americans’ opinions on the value of intellectuals. (Lecklider is not very attentive to distinctions between humanities and the sciences; I will return to this in a moment.) “Under scientific management,” Lecklider observes, “the cultural value of brainpower assumed a distinctly capitalist hue.” In 1915, the Australian physicist T. Brailsford Robertson published an article in Scientific Monthly entitled “The Cash Value of Scientific Research.” Around the same time, Frederick Winslow Taylor’s principles of “scientific management” began to decisively influence the world of industry. While this incursion of science into the working day did much to encourage working-class anti-intellectualism—science being associated with management, and thus viewed with suspicion—it also consolidated the link, in the public imagination, between intellect and power.
Later still, in the 1930s, the country’s dire economic circumstances forced a revaluation of expert intelligence, as the same mental forces that had driven and directed capitalism were prevailed upon to deal with its collapse. “Brainpower was understood to be an important tool for emerging from the Depression,” Lecklider states. For the moment, at least, ordinary Americans’ terror of disaster trumped their hatred of elitism:
Though fears persisted concerning the potential for concentrated brainpower to create divisions between smart and ordinary Americans or to set up an intellectual class that disavowed the putatively populist foundation for American democratic praxis, this potential conflict was increasingly depicted as a small price to pay for emerging from the Depression as quickly as possible.
Thus one unlikely result of “the Red Decade,” according to Lecklider, was that formerly suspicious proletarians learned to stop worrying and love elite intellectuals, or at least to trust them for the time being. The justification for brainpower shifted again in the 1940s, when it was “reframed as an essential weapon for victory in World War II,” the lawyers, economists, and political scientists of Roosevelt’s brain trust being replaced in the public’s affections by nuclear physicists like J. Robert Oppenheimer and his colleagues on the Manhattan Project (to which Lecklider devotes an entire chapter). At the same time that the masses were getting behind the elites, the elites began to see, with renewed clarity, the point of educating the masses, since “[i]ntelligent Americans were less likely to be seduced by fascism, and brainpower was positioned as critical to a broader antifascist project.” This logic carried over to the 1950s as well, with communism taking the place of fascism.
Once brainpower was reconceived as a basic building block of American dominance, the terms by which it was appraised as a social characteristic inevitably changed. Lecklider registers this change on a lexical level, by chronicling the rise of the curious word “egghead” in the postwar US vernacular. “Before the 1950s, ‘egghead’ was an innocuous term referring to nothing more troubling than a person who possessed a bald, oval head.” Its invidious modern use dates to approximately 1952, when the right-wing columnist Stewart Alsop used it to describe the supporters of Presidential candidate Adlai Stevenson. (Stevenson took the teasing in stride, playing on the continuity of anti-intellectualism and anti-communism by quipping: “Eggheads of the world, unite! You have nothing to lose but your yolks!”) The term took off quickly in popular culture: among the examples discussed are Frank Fenton’s 1954 science fiction story “The Chicken or the Egghead,” pop singer Jill Corey’s 1956 single “Egghead,” a 1957 Saturday Evening Post article about Princeton, New Jersey entitled “I Live Among the Eggheads,” and Molly Thatcher Kazan (wife of Elia)’s 1957 play The Egghead. (Somehow Lecklider omits the Marvel Comics supervillain Egghead, who first appeared in a 1962 issue of Tales to Astonish.)
The transition from “long-hair” to “egghead” as a favored term of anti-intellectual abuse deserves some scrutiny. The two terms aren’t quite congruent; a decent 21st century equivalent of the former might be “hipster,” with its implication of pretense and smug self-superiority, while an equivalent to the second would be “nerd” or “geek,” connoting haplessness and myopic concentration on a single area of expertise. Both are terms of derision, but the derision is of a qualitatively different kind. As Lecklider notes, the feminine qualities of the long-hair are stripped away from the egghead almost entirely: “The egghead’s smoothness was made phallic, engendering the egghead as male. The egghead’s delicate fragility and curvy roundness became feminine features, though not explicitly female. The oval shape of the egg translated into male baldness.” More to the point, a deliberate sartorial choice (wearing one’s hair long) gave way to a biological property (the shape of one’s cranium), thus stripping away any sense of agency that once attached to the intellectual life. Eggheads, like nerds, were helplessly intellectual; they didn’t choose to be the way they are, but simply made the best of it.
Most importantly—and Lecklider fails to make as much of this as he might—the passage from “long-hair” to “egghead” indexed a postwar shift from the humanities to science as the master discourse of American intellectualism (and thus, logically, anti-intellectualism as well). Long-hairs lounged around and loved classical music, fine art and literature; eggheads, meanwhile, toiled away at fearsomely difficult calculations and solved fundamental mysteries of the universe. While both were ridiculous, the former were more contemptible, because their sophistication benefited only themselves, whereas the latter produced something for the common good, even if they appeared impenetrable or insufferable in the process.
What the passage from “long-hair” to “egghead” signified, perhaps, was not a change in the relation between traditional and organic intellectuals, but the division of the traditional intellectual class into “the two cultures,” decried by C.P. Snow in his famous 1959 lecture of that title. And as intellectuals were divided, so, inevitably, were anti-intellectuals. If, in the 1950s, the working class was no longer able to “position [itself] as intelligent … by deflating the intellectual pretensions of others,” it may have been because they had developed strategies to deflate a wholly different kind of intellectual. The cultural long-hair’s terrible ruse could be seen through and derided, while the scientific egghead’s implacable logic could only be accepted or ignored. Ordinary Americans could imagine, and desire, a world without long-hairs and their decadent speculations; but a world without the productive brainpower of eggheads seemed, at that point, to be unthinkable.
Lecklider’s book convinces me that the representational shift in the 1950s from “long-hair” to “egghead” is indeed an important one—but it is one that Lecklider doesn’t quite do justice to, in part because he elides the distinction between scientific and humanist intellectuals. He also perhaps overstates the reactionary quality of postwar anti-intellectualism. “The egghead embodied the paradoxes of brainpower in the Cold War,” he writes, “by exploiting the contradiction between a cultural desire to possess the social and political power associated with white men and a growing anxiety about the influence of the left, homosexuality, and the black civil rights movement in shaping American life.” As the above passage makes clear, Lecklider reads the “egghead” first and foremost as a conservative trope: in his view, “the popular use of the egghead label served chiefly to restore the ideological hegemony of a virulently white, masculine liberalism.” But it’s too convenient to pretend that scorn for eggheads, particularly in the realms of science and politics, has come only from the racist, sexist right. Certainly there’s a strong tradition, which Lecklider does not discuss, of anti-technocratic sentiment on the egalitarian left which can at times verge on anti-intellectualism, exemplified by the writings of sociologists like C. Wright Mills; we see it today, most obviously, in the neo-anarchism of David Graeber, Rebecca Solnit, and others associated with the Occupy movement.
Furthermore, popular culture—the vehicle, Lecklider shows, of so many struggles over brainpower and its representation in the first half of the twentieth century—has today been thoroughly colonized by the egghead, and by “geek culture.” The crowd-pleasing anti-intellectualism of The Freshman and the Sophomore has given way to the cuddly eggheads of The Big Bang Theory and the startup-worship of films like The Internship. Again, these developments have much to do with the rise of the scientific and technological elite and the relative decline of the humanist one. If the paradigmatic intellectual of the 20s was the artist and of the 50s the scientist, today it’s the tech CEO. (It seems worth noting that, in our own time, there has been little to no populist resentment of Silicon Valley or the tech industry.)
All of these developments are outside of Lecklider’s remit; his interest is in a particular historical period, and yet it’s worth pondering the relevance of his narrative to our own moment, especially since Inventing the Egghead ends with something like a call to arms. “Reclaiming the history of an organic intellectual tradition in American culture represents a starting point for envisioning intelligence as a shared commodity across social classes,” Lecklider writes; “wrested from the hands of the intellectuals, there’s no telling what the brainpower of the people has the potential to accomplish.” The people, yes; but in our time, hasn’t Glenn Beck, say, wrested brainpower from the hands of the intellectuals? Lecklider begins his book with early 21st century debates over the intelligence of George W. Bush, but he makes no mention of the Tea Party or modern libertarianism—arguably the closest thing the US has to a populist movement in thought. Criticizing Hofstadter, Lasch, and Lewis Coser, Lecklider writes that “1960s historians prescribed a healthy dose of intellectuals to resolve the social problems of Cold War-era disorder.” Yet he himself seems to prescribe a healthy dose of anti-intellectuals to resolve our own problems, a solution that seems every bit as naïve, and perhaps more dangerous.
Inventing the Egghead is a provocative, intelligent, but ultimately unsatisfying book. Lecklider’s intention to recover and valorize a history of twentieth-century working-class intellectualism keeps him from asking more fundamental questions about the role of the intellectual in American society. Furthermore, his use of “popular culture” as a site of reflection of working-class belief and desire is highly problematic; he never once mentions the fact that many of the songs, books, and films he discusses were composed or crafted by middle-class artists, or that they were consumed and enjoyed by people of different classes. (Even traditional intellectuals have been known to indulge now and then.) If there’s a lesson in the archive that Lecklider so usefully assembles for us in Inventing the Egghead, it’s that “the intellectual” is a joint creation of the haves and have-nots, the clergy and the laymen, and a site of struggle as well as collaboration and agreement. We shouldn’t speak, then, of wresting anything from anyone’s hands, but of thinking more about what we hold in common.
 
Evan Kindley is an editor for the LA Review of Books.

What scientists do and why we need them

Here is the text of a delightful review of two books on science history.  The reviewer commands us to read the books so we learn "what scientists do and why we need them", but you can get the gist of that from the review itself.  As you know, the War on Curiosity is still raging, so I added emphasis to a sentence about that.
--ww--

Curiosity: How Science Became Interested in Everything, By Philip Ball, University of Chicago Press, 465 pp., $35
Brilliant Blunders: From Darwin to Einstein—Colossal Mistakes by Great Scientists That Changed Our Understanding of Life and the Universe, By Mario Livio, Simon & Schuster, 341 pp., $26
Aristotle called it aimless and witless. St. Augustine condemned it as a disease. The ancient Greeks blamed it for Pandora’s unleashing destruction on the world. And one early Christian leader even pinned the fall of Lucifer himself on idle, intemperate, unrestrained curiosity.
Today, the exploration of new places and new ideas seems self-evidently a good thing. For much of human history, though, priests, politicians, and philosophers cast a suspicious eye on curious folks. It wasn’t just that staring at rainbows all day or pulling apart insects’ wings seemed weird, even childish. It also represented a colossal waste of time, which could be better spent building the economy or reading the Bible. Philip Ball explains in his thought-provoking new book, Curiosity, that only in the 1600s did society start to sanction (or at least tolerate) the pursuit of idle interests. And as much as any other factor, Ball argues, that shift led to the rise of modern science.
We normally think about the early opposition to science as simple religious bias. But “natural philosophy” (as science was then known) also faced serious philosophical objections, especially about the trustworthiness of the knowledge obtained. For instance, Galileo used a telescope to discover both the craters on our moon and the existence of moons orbiting Jupiter. These discoveries demonstrated, contra the ancient Greeks, that not all heavenly bodies were perfect spheres and that not all of them orbited Earth. Galileo’s conclusions, however, relied on a huge assumption—that his telescope provided a true picture of the heavens. How could he know, his critics protested, that optical instruments didn’t garble or distort as much as they revealed? It’s a valid point.
Another debate revolved around what now seems like an uncontroversial idea: that scientists should perform experiments. The sticking point was that experiments, almost by definition, explore nature under artificial conditions. But if you want to understand nature, shouldn’t the conditions be as natural as possible—free from human interference? Perhaps the results of experiments were no more reliable than testimony extracted from witnesses under torture.
Specific methods aside, critics argued that unregulated curiosity led to an insatiable desire for novelty—not to true knowledge, which required years of immersion in a subject. Today, in an ever-more-distracted world, that argument resonates. In fact, even though many early critics of natural philosophy come off as shrill and small-minded, it’s a testament to Ball that you occasionally find yourself nodding in agreement with people who ended up on the “wrong” side of history.
Ultimately, Curiosity is a Big Ideas book. Although Newton, Galileo, and others play important roles, Ball wants to provide a comprehensive account of early natural philosophy, and that means delving into dozens of other, minor thinkers. In contrast, Mario Livio’s topsy-turvy book, Brilliant Blunders, focuses on Big Names in science history. It’s a telling difference that whereas Ball’s book, like a Russian novel, needs an appendix with a cast of characters, Livio’s characters usually go by one name—Darwin, Kelvin, Pauling, Hoyle, and Einstein.
Livio’s book is topsy-turvy because, rather than repeat the obvious—these were some smart dudes—he examines infamous mistakes they made. He also indulges in some not always convincing armchair psychology to determine how each man’s temperament made him prone to commit the errors he did.
For those of us who, when reading about such thinkers, can’t help but compare our own pitiful intellects with theirs, this focus on mistakes is both encouraging and discouraging. It’s encouraging because their mistakes remind us that they were fallible, full of the same blind spots and foibles we all have. It’s discouraging because, even at their dumbest, these scientists did incredible work. Indeed, Livio argues that their “brilliant blunders” ended up benefiting science overall.
Take Kelvin’s error. During the heyday of William Thomson, Lord Kelvin, in the late 1800s, various groups of scientists had an enormous row over the age of Earth, in large part because Darwin’s theory of natural selection seemed to require eons upon eons of time. Unfortunately, geologists provided little clarity here: they could date fossils and rock strata only relatively, not absolutely, so their estimates varied wildly. Into this vacuum stepped Kelvin, a mathematical physicist who studied heat. Kelvin knew that Earth had probably been a hot, molten liquid in the past. So if he could determine Earth’s initial temperature, its current temperature, and its rate of cooling, he could calculate its age. His initial estimate was 20 million years.

For various reasons, Kelvin’s answer fell short by two orders of magnitude (the current estimate is 4.5 billion years). Worse, Kelvin used his calculation to bash Darwinism, a vendetta that ended up tarnishing his reputation. Nevertheless, his precisely quantified arguments forced geologists to sharpen their own work in order to rebut him, and eventually they too began to think of Earth as a mechanical system. A nemesis can bring out the best in people, and Kelvin’s mistake proved a net good for science.
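To make the shape of that calculation concrete, here is a minimal sketch in Python of the classic conductive-cooling argument (a reconstruction of the general method, not Kelvin's own working); every parameter value below is an illustrative assumption rather than a figure from either book:

    import math

    # Model: a half-space initially at uniform temperature T0 cools by conduction.
    # Its surface temperature gradient after time t is T0 / sqrt(pi * kappa * t),
    # so a measured geothermal gradient implies an age t = T0^2 / (pi * kappa * grad^2).
    T0 = 2000.0      # assumed initial temperature of molten rock, degrees C above the surface
    kappa = 1.2e-6   # assumed thermal diffusivity of rock, m^2 per second
    grad = 0.037     # assumed near-surface geothermal gradient, degrees C per meter

    age_seconds = T0**2 / (math.pi * kappa * grad**2)
    age_years = age_seconds / 3.15e7          # roughly 31.5 million seconds per year
    print("Implied age: about %.0f million years" % (age_years / 1e6))

With these inputs the script prints an age in the tens of millions of years, far short of the 4.5 billion years cited above, largely because the model ignores radioactive heating and convection within the mantle.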
Ball’s and Livio’s books help answer an important question: why bother reading science history? Scientists themselves, after all, are notoriously uninterested in the subject, probably for good reason. Science proceeds by discarding unworkable ideas, and every hour spent poring over arcane theories is time not spent refining your own experiments. But as Ball points out, old debates have a way of reemerging in modern guises. For instance, early objections to natural philosophy—the “unnatural” experiments, the prodigal waste of money on expensive toys—echo modern objections to, say, genetically modified food and the Large Hadron Collider.
Similarly, Livio shows how Einstein’s blunder has risen, phoenixlike, in recent years. After completing his theory of general relativity, Einstein added a so-called cosmological constant to one of his field equations: a repulsive force that countered gravity and (somewhat like air pressure) kept the universe from collapsing in upon itself. Einstein later struck the constant out, discarding it as ugly, ad hoc, and unnecessary. But two teams of scientists resurrected it in the 1990s to explain why the expansion of our universe is accelerating rather than slowing down. On cosmic scales, Einstein’s once-discarded constant may be the dominant force in the universe. (See how frustrating this is? Even when he was wrong, Einstein was right!)
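For readers who want to see where that constant lives, here is the standard modern (textbook) form of the Einstein field equations, not a formula quoted from Livio's book; the Lambda term is the cosmological constant Einstein added in 1917 and later disowned:

    G_{\mu\nu} + \Lambda\, g_{\mu\nu} = \frac{8\pi G}{c^{4}}\, T_{\mu\nu}

Here G_{\mu\nu} encodes the curvature of spacetime, T_{\mu\nu} its matter and energy content, and a positive \Lambda supplies the repulsive contribution described above.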
Reading science history might not fix the bugs in your equipment or help you secure a new grant, but it can provide a larger perspective on what scientists do and why we need them. Science history doesn’t give all the answers, but it does help explain why we seek the answers in the first place.
# # #

No tipping

Here's a blog post that I found on Quartz (qz.com) that addresses one of my pet peeves.  After admiring the Japanese for regarding tipping as insulting, and after admiring Europeans for building in fixed tip amounts, I have to wonder why America is so backward.
--ww--

Tipping, as a compensation scheme, is great for everyone.
Restaurant customers like tipping because it puts them in the driver’s seat. As a diner, you control your experience, using the power of your tip to make sure your server works hard for you.
Restaurant servers like tipping because it means their talent is rewarded. As a great server, you get paid more than your peers, because you are a better worker.
Restaurant owners like tipping because it means they don’t have to pay for managers to closely supervise their servers. With customers using tips to enforce good service, owners can be confident that servers will do their best work.
There’s only one problem: none of this is actually true. I know because I ran the experiment myself.
For over eight years, I was the owner and operator of San Diego’s farm-to-table restaurant The Linkery, until we closed it this summer to move to San Francisco. At first, we ran the Linkery like every other restaurant in America, letting tips provide compensation and motivation for our team. In our second year, however, we tired of the tip system, and we eliminated tipping from our restaurant. We instead applied a straight 18% service charge to all dining-in checks, and refused to accept any further payment. We became the first and, for years, the only table-service restaurant in America where you couldn’t pay more money than the amount we charged you.
You can guess what happened. Our service improved, our revenue went up, and both our business and our employees made more money. Here’s why:
  • Researchers have found (pdf) that customers don’t actually vary their tips much according to service. Instead they tip mostly the same every time, according to their personal habits.
  • Tipped servers, in turn, learn that service quality isn’t particularly important to their revenue. Instead they are rewarded for maximizing the number of guests they serve, even though that degrades service quality.
  • Furthermore, servers in tipping environments learn to profile guests (pdf), and attend mainly to those who fit the stereotypes of good tippers. This may increase the server’s earnings, while creating negative experiences for the many restaurant customers who are women, ethnic minorities, elderly or from foreign countries.
  • On the occasions when a server is punished for poor service by a customer withholding a standard tip, the server can keep that information to himself. While the customer thinks she is sending a message, that message never makes it to a manager, and the problem is never addressed.
You can see that tipping promotes and facilitates bad service. It gives servers the choice between doing their best work and making the most money. While most servers choose to do their best work, making them choose one or the other is bad business.
By removing tipping from the Linkery, we aligned ourselves with every other business model in America. Servers and management could work together toward one goal: giving all of our guests the best possible experience. When we did it well, we all made more money. As you can imagine, it was easy for us to find people who wanted to work in this environment, with clear goals and rewards for succeeding as a team.
Maybe it wouldn’t work in every restaurant, in every city. Maybe the fact that it worked so well for us was due to some unique set of circumstances. Then again, other service industries like health care and law aren’t exactly lining up to adopt tips as their primary method of compensation. So maybe we’re all just being suckered into believing tipping works.
It’s something you can think about, at least, the next time you’re waiting on a refill of iced tea.
# # #

Tuesday, July 23, 2013

Secret of life

Have high expectations for yourself and low expectations for everyone else.
If you have low expectations for other people, then their average performance will delight you. Suppose you eat at the finest restaurant in the world. If your expectations are too high, then you will be disappointed, even though you just ate the finest food in the world.
If you have high expectations for yourself, then you will accomplish everything that you are capable of accomplishing. Those who accomplish much are rewarded, and that is how you get the wherewithal to eat at the finest restaurants.

Tuesday, June 25, 2013

What we should really worry about: infection.

This response to Edge.org's 2013 annual question impressed me greatly.
The author is J. Craig Venter: a leading scientist of the 21st century in genomic sciences; co-founder and chairman, Synthetic Genomics, Inc.; founder, J. Craig Venter Institute; author, A Life Decoded.
What—Me Worry?

As a scientist, an optimist, an atheist and an alpha male I don't worry. As a scientist I explore and seek understanding of the world(s) around me and in me. As an optimist I wake up each morning with a new start on all my endeavors, with hope and excitement. As an atheist I know I only have the time between my birth and my death to accomplish something meaningful. As an alpha male I believe I can and do work to solve problems and change the world.
There are many problems confronting humanity including providing enough food, water, housing, medicine and fuel for our ever-expanding population. I firmly believe that only science can provide solutions for these challenges, but the adoption of these ideas will depend on the will of governments and individuals.
I am somewhat of a Libertarian in that I neither want nor need the government to dictate what I can or cannot do in order to guarantee my safety. For example, I ride motorcycles, sometimes at high speeds; I have full medical coverage and should not be required by the government to wear a helmet to avoid doing harm to myself if I crash. I actually do wear a helmet, as well as full safety gear, because I choose to protect myself. Smoking is in a different category. Smoking is clearly deleterious to one's health, and the single most effective thing a smoker can do to change their medical outcome is to quit. If that were all there was to it, then the government should not regulate smoking unless it is paying for the health care of smokers. However, science has clearly shown that secondhand smoke can have negative health consequences for individuals in the vicinity of the smoker. Therefore laws and rules regulating where people can smoke are, in my view, not only reasonable but good for society as a whole.
It's the same with vaccinations. One consequence of our ever-expanding global population, particularly when coupled with poor public health, unclean water and misuse of antibiotics, has been and will continue to be new emerging infections, including those from zoonotic outbreaks. Over the past several decades we have seen the emergence of AIDS, SARS, West Nile, new flu strains and methicillin-resistant Staphylococcus aureus (MRSA). In 2007, MRSA deaths in the USA surpassed HIV deaths. Infectious disease is now the second leading cause of death in the world, right behind heart disease and ahead of cancer. Last year in the US, there were twice as many deaths from antibiotic resistance as from automobile accidents.
There are many causes for the emergence of infectious diseases, but one significant factor is human behavior when it comes to immunizations. The claimed link between immunizations and autism, which science has thoroughly disproven, has led some parents to choose not to vaccinate their children, believing this to be a civil liberty issue akin to the choice to wear a motorcycle helmet. However, I contend that individuals who avoid immunizations are a major contributing factor to the reemergence and spread of infectious disease, in a way that is far more dangerous than secondhand smoke. Vaccines are the most effective means of preventing the spread of infectious diseases. There are no better examples than the elimination of polio and smallpox through mandatory vaccinations.
When new or old infectious agents such as viruses and bacteria can infect the non-immunized, genetic recombination can occur to create new versions of disease agents that can then infect the population that had been immunized against the existing strains. We see this occurring with almost every type of infectious pathogen, and, most troubling, we are seeing it here in our own industrialized, wealthy, educated country. There are pockets of outbreaks of diseases such as whooping cough; the emergence in the Middle East of a novel disease-causing coronavirus; illness at Yosemite National Park caused by Hantavirus; and the emergence in farm communities of a variant influenza virus (H3N2v) that spread from swine to people. This year's flu has come earlier and appears more virulent than in previous years. Boston has recently declared a state of medical emergency because of the number of flu cases and deaths.
Avoidance of vaccination creates a public health hazard. It is not a civil liberty issue. The unvaccinated, coupled with antibiotic resistance and shrinking animal habitats that promote zoonotic transfer of disease-causing agents, add up to a potential disaster that could take humanity back to the pre-antibiotic era. I thought we learned these lessons after global pandemics such as the Plague and the 1918 flu outbreak that killed 3% of the world's population, but clearly without modern science and medicine we will be destined to relive history.
================
If Venter is correct, then the enemies of the USA are not armies that we battle with tanks or navies that we battle with fleets, but American housewives who buy antiseptic hand-soap.  What weapons should the Defense Department be creating to fight their ignorance?

Wednesday, February 27, 2013

Why everyone can be creative

How To Be Creative
The image of the 'creative type' is a myth. Jonah Lehrer on why anyone can innovate—and why a hot shower, a cold beer or a trip to your colleague's desk might be the key to your next big idea.
Creativity can seem like magic. We look at people like Steve Jobs and Bob Dylan, and we conclude that they must possess supernatural powers denied to mere mortals like us, gifts that allow them to imagine what has never existed before. They're "creative types." We're not.
The myth of the "creative type" is just that--a myth, argues Jonah Lehrer. In an interview with WSJ's Gary Rosen he explains the evidence suggesting everyone has the potential to be the next Milton Glaser or Yo-Yo Ma.
But creativity is not magic, and there's no such thing as a creative type. Creativity is not a trait that we inherit in our genes or a blessing bestowed by the angels. It's a skill. Anyone can learn to be creative and to get better at it. New research is shedding light on what allows people to develop world-changing products and to solve the toughest problems. A surprisingly concrete set of lessons has emerged about what creativity is and how to spark it in ourselves and our work.
The science of creativity is relatively new. Until the Enlightenment, acts of imagination were always equated with higher powers. Being creative meant channeling the muses, giving voice to the gods. ("Inspiration" literally means "breathed upon.") Even in modern times, scientists have paid little attention to the sources of creativity.
But over the past decade, that has begun to change. Imagination was once thought to be a single thing, separate from other kinds of cognition. The latest research suggests that this assumption is false. It turns out that we use "creativity" as a catchall term for a variety of cognitive tools, each of which applies to particular sorts of problems and is coaxed to action in a particular way.
It isn't a trait that we inherit in our genes or a blessing bestowed on us by the angels. It's a skill that anyone can learn and work to improve.
Does the challenge that we're facing require a moment of insight, a sudden leap in consciousness? Or can it be solved gradually, one piece at a time? The answer often determines whether we should drink a beer to relax or get hopped up on Red Bull, whether we take a long shower or stay late at the office.
The new research also suggests how best to approach the thorniest problems. We tend to assume that experts are the creative geniuses in their own fields. But big breakthroughs often depend on the naive daring of outsiders. For prompting creativity, few things are as important as time devoted to cross-pollination with fields outside our areas of expertise.
Let's start with the hardest problems, those challenges that at first blush seem impossible. Such problems are typically solved (if they are solved at all) in a moment of insight.
Consider the case of Arthur Fry, an engineer at 3M in the paper products division. In the winter of 1974, Mr. Fry attended a presentation by Spencer Silver, a chemist working on adhesives. Mr. Silver had developed an extremely weak glue, a paste so feeble it could barely hold two pieces of paper together. Like everyone else in the room, Mr. Fry patiently listened to the presentation and then failed to come up with any practical applications for the compound. What good, after all, is a glue that doesn't stick?
On a frigid Sunday morning, however, the paste would re-enter Mr. Fry's thoughts, albeit in a rather unlikely context. He sang in the church choir and liked to put little pieces of paper in the hymnal to mark the songs he was supposed to sing. Unfortunately, the little pieces of paper often fell out, forcing Mr. Fry to spend the service frantically thumbing through the book, looking for the right page. It seemed like an unfixable problem, one of those ordinary hassles that we're forced to live with.
But then, during a particularly tedious sermon, Mr. Fry had an epiphany. He suddenly realized how he might make use of that weak glue: It could be applied to paper to create a reusable bookmark! Because the adhesive was barely sticky, it would adhere to the page but wouldn't tear it when removed. That revelation in the church would eventually result in one of the most widely used office products in the world: the Post-it Note.
Mr. Fry's invention was a classic moment of insight. Though such events seem to spring from nowhere, as if the cortex is surprising us with a breakthrough, scientists have begun studying how they occur. They do this by giving people "insight" puzzles, like the one that follows, and watching what happens in the brain:
A man has married 20 women in a small town. All of the women are still alive, and none of them is divorced. The man has broken no laws. Who is the man?
If you solved the question, the solution probably came to you in an incandescent flash: The man is a priest. Research led by Mark Beeman and John Kounios has identified where that flash probably came from. In the seconds before the insight appears, a brain area called the anterior superior temporal gyrus (aSTG) exhibits a sharp spike in activity. This region, located on the surface of the right hemisphere, excels at drawing together distantly related information, which is precisely what's needed when working on a hard creative problem.
Interestingly, Mr. Beeman and his colleagues have found that certain factors make people much more likely to have an insight, better able to detect the answers generated by the aSTG. For instance, exposing subjects to a short, humorous video—the scientists use a clip of Robin Williams doing stand-up—boosts the average success rate by about 20%.
Alcohol also works. Earlier this year, researchers at the University of Illinois at Chicago compared performance on insight puzzles between sober and intoxicated students. The scientists gave the subjects a battery of word problems known as remote associates, in which people have to find one additional word that goes with a triad of words. Here's a sample problem:
Pine Crab Sauce
In this case, the answer is "apple." (The compound words are pineapple, crab apple and apple sauce.) Drunk students solved nearly 30% more of these word problems than their sober peers.
What explains the creative benefits of relaxation and booze? The answer involves the surprising advantage of not paying attention. Although we live in an age that worships focus—we are always forcing ourselves to concentrate, chugging caffeine—this approach can inhibit the imagination. We might be focused, but we're probably focused on the wrong answer.
And this is why relaxation helps: It isn't until we're soothed in the shower or distracted by the stand-up comic that we're able to turn the spotlight of attention inward, eavesdropping on all those random associations unfolding in the far reaches of the brain's right hemisphere. When we need an insight, those associations are often the source of the answer.
This research also explains why so many major breakthroughs happen in the unlikeliest of places, whether it's Archimedes in the bathtub or the physicist Richard Feynman scribbling equations in a strip club, as he was known to do. It reveals the wisdom of Google putting ping-pong tables in the lobby and confirms the practical benefits of daydreaming. As Einstein once declared, "Creativity is the residue of time wasted."
Of course, not every creative challenge requires an epiphany; a relaxing shower won't solve every problem. Sometimes, we just need to keep on working, resisting the temptation of a beer-fueled nap.
There is nothing fun about this kind of creativity, which consists mostly of sweat and failure. It's the red pen on the page and the discarded sketch, the trashed prototype and the failed first draft. Nietzsche referred to this as the "rejecting process," noting that while creators like to brag about their big epiphanies, their everyday reality was much less romantic. "All great artists and thinkers are great workers," he wrote.
This relentless form of creativity is nicely exemplified by the legendary graphic designer Milton Glaser, who engraved the slogan "Art is Work" above his office door. Mr. Glaser's most famous design is a tribute to this work ethic. In 1975, he accepted an intimidating assignment: to create a new ad campaign that would rehabilitate the image of New York City, which at the time was falling apart.
Mr. Glaser began by experimenting with fonts, laying out the tourist slogan in a variety of friendly typefaces. After a few weeks of work, he settled on a charming design, with "I Love New York" in cursive, set against a plain white background. His proposal was quickly approved. "Everybody liked it," Mr. Glaser says. "And if I were a normal person, I'd stop thinking about the project. But I can't. Something about it just doesn't feel right."
So Mr. Glaser continued to ruminate on the design, devoting hours to a project that was supposedly finished. And then, after another few days of work, he was sitting in a taxi, stuck in midtown traffic. "I often carry spare pieces of paper in my pocket, and so I get the paper out and I start to draw," he remembers. "And I'm thinking and drawing and then I get it. I see the whole design in my head. I see the typeface and the big round red heart smack dab in the middle. I know that this is how it should go."
The logo that Mr. Glaser imagined in traffic has since become one of the most widely imitated works of graphic art in the world. And he only discovered the design because he refused to stop thinking about it.
But this raises an obvious question: If different kinds of creative problems benefit from different kinds of creative thinking, how can we ensure that we're thinking in the right way at the right time? When should we daydream and go for a relaxing stroll, and when should we keep on sketching and toying with possibilities?
The good news is that the human mind has a surprising natural ability to assess the kind of creativity we need. Researchers call these intuitions "feelings of knowing," and they occur when we suspect that we can find the answer, if only we keep on thinking. Numerous studies have demonstrated that, when it comes to problems that don't require insights, the mind is remarkably adept at assessing the likelihood that a problem can be solved—knowing whether we're getting "warmer" or not, without knowing the solution.
This ability to calculate progress is an important part of the creative process. When we don't feel that we're getting closer to the answer—we've hit the wall, so to speak—we probably need an insight. If there is no feeling of knowing, the most productive thing we can do is forget about work for a while. But when those feelings of knowing are telling us that we're getting close, we need to keep on struggling.
Of course, both moment-of-insight problems and nose-to-the-grindstone problems assume that we have the answers to the creative problems we're trying to solve somewhere in our heads. They're both just a matter of getting those answers out. Another kind of creative problem, though, is when you don't have the right kind of raw material kicking around in your head. If you're trying to be more creative, one of the most important things you can do is increase the volume and diversity of the information to which you are exposed.
Steve Jobs famously declared that "creativity is just connecting things." Although we think of inventors as dreaming up breakthroughs out of thin air, Mr. Jobs was pointing out that even the most far-fetched concepts are usually just new combinations of stuff that already exists. Under Mr. Jobs's leadership, for instance, Apple didn't invent MP3 players or tablet computers—the company just made them better, adding design features that were new to the product category.
And it isn't just Apple. The history of innovation bears out Mr. Jobs's theory. The Wright Brothers transferred their background as bicycle manufacturers to the invention of the airplane; their first flying craft was, in many respects, just a bicycle with wings. Johannes Gutenberg transformed his knowledge of wine presses into a printing machine capable of mass-producing words. Or look at Google: Larry Page and Sergey Brin came up with their famous search algorithm by applying the ranking method used for academic articles (more citations equals more influence) to the sprawl of the Internet.
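To make the citation analogy concrete, here is a minimal sketch, in Python, of the kind of link-based ranking the article is gesturing at: a page counts for more when pages that themselves count for something point to it, just as a paper counts for more when influential papers cite it. The page names and numbers below are made up for illustration; this is a toy version of the idea, not Google's actual algorithm.

def rank_pages(links, damping=0.85, iterations=50):
    # links maps each page to the list of pages it links to (hypothetical toy data).
    pages = list(links)
    score = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        # Every page keeps a small baseline score...
        new_score = {p: (1 - damping) / len(pages) for p in pages}
        for page, outlinks in links.items():
            if outlinks:
                # ...and passes the rest of its score to the pages it "cites."
                share = damping * score[page] / len(outlinks)
                for target in outlinks:
                    new_score[target] += share
            else:
                # A page with no outgoing links spreads its score evenly.
                share = damping * score[page] / len(pages)
                for p in pages:
                    new_score[p] += share
        score = new_score
    return sorted(score.items(), key=lambda kv: kv[1], reverse=True)

# Toy web: "home" and "about" cite each other and are cited by "blog,"
# so they rank above the page nobody links to.
toy_web = {"home": ["about"], "about": ["home"], "blog": ["home", "about"]}
print(rank_pages(toy_web))

Run on a real link graph, the same loop settles toward scores that reward being cited by well-cited pages, which is essentially the insight the article credits Page and Brin with carrying over from academic citation analysis.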
How can people get better at making these kinds of connections? Mr. Jobs argued that the best inventors seek out "diverse experiences," collecting lots of dots that they later link together. Instead of developing a narrow specialization, they study, say, calligraphy (as Mr. Jobs famously did) or hang out with friends in different fields. Because they don't know where the answer will come from, they are willing to look for the answer everywhere.
Recent research confirms Mr. Jobs's wisdom. The sociologist Martin Ruef, for instance, analyzed the social and business relationships of 766 graduates of the Stanford Business School, all of whom had gone on to start their own companies. He found that those entrepreneurs with the most diverse friendships scored three times higher on a metric of innovation. Instead of getting stuck in the rut of conformity, they were able to translate their expansive social circle into profitable new concepts.
Many of the most innovative companies encourage their employees to develop these sorts of diverse networks, interacting with colleagues in totally unrelated fields. Google hosts an internal conference called Crazy Search Ideas—a sort of grown-up science fair with hundreds of posters from every conceivable field. At 3M, engineers are typically rotated to a new division every few years. Sometimes, these rotations bring big payoffs, such as when 3M realized that the problem of laptop battery life was really a problem of energy being used up too quickly to illuminate the screen. 3M researchers applied their knowledge of see-through adhesives to create an optical film that focuses light outward, producing a screen that was 40% more efficient.
Such solutions are known as "mental restructurings," since the problem is only solved after someone asks a completely new kind of question. What's interesting is that expertise can inhibit such restructurings, making it harder to find the breakthrough. That's why it's important not just to bring new ideas back to your own field, but to actually try to solve problems in other fields—where your status as an outsider, and ability to ask naive questions, can be a tremendous advantage.
This principle is at work daily on InnoCentive, a crowdsourcing website for difficult scientific questions. The structure of the site is simple: Companies post their hardest R&D problems, attaching a monetary reward to each "challenge." The site features problems from hundreds of organizations in eight different scientific categories, from agricultural science to mathematics. The challenges on the site are incredibly varied and include everything from a multinational food company looking for a "Reduced Fat Chocolate-Flavored Compound Coating" to an electronics firm trying to design a solar-powered computer.
The most impressive thing about InnoCentive, however, is its effectiveness. In 2007, Karim Lakhani, a professor at the Harvard Business School, began analyzing hundreds of challenges posted on the site. According to Mr. Lakhani's data, nearly 30% of the difficult problems posted on InnoCentive were solved within six months. Sometimes, the problems were solved within days of being posted online. The secret was outsider thinking: The problem solvers on InnoCentive were most effective at the margins of their own fields. Chemists didn't solve chemistry problems; they solved molecular biology problems. And vice versa. While these people were close enough to understand the challenge, they weren't so close that their knowledge held them back, causing them to run into the same stumbling blocks that held back their more expert peers.
It's this ability to attack problems as a beginner, to let go of all preconceptions and fear of failure, that's the key to creativity.
The composer Bruce Adolphe first met Yo-Yo Ma at the Juilliard School in New York City in 1970. Mr. Ma was just 15 years old at the time (though he'd already played for J.F.K. at the White House). Mr. Adolphe had just written his first cello piece. "Unfortunately, I had no idea what I was doing," Mr. Adolphe remembers. "I'd never written for the instrument before."
Mr. Adolphe had shown a draft of his composition to a Juilliard instructor, who informed him that the piece featured a chord that was impossible to play. Before Mr. Adolphe could correct the music, however, Mr. Ma decided to rehearse the composition in his dorm room. "Yo-Yo played through my piece, sight-reading the whole thing," Mr. Adolphe says. "And when that impossible chord came, he somehow found a way to play it."
Mr. Adolphe told Mr. Ma what the professor had said and asked how he had managed to play the impossible chord. They went through the piece again, and when Mr. Ma came to the impossible chord, Mr. Adolphe yelled "Stop!" They looked at Mr. Ma's left hand—it was contorted on the fingerboard, in a position that was nearly impossible to hold. "You're right," said Mr. Ma, "you really can't play that!" Yet, somehow, he did.
When Mr. Ma plays today, he still strives for that state of the beginner. "One needs to constantly remind oneself to play with the abandon of the child who is just learning the cello," Mr. Ma says. "Because why is that kid playing? He is playing for pleasure."
Creativity is a spark. It can be excruciating when we're rubbing two rocks together and getting nothing. And it can be intensely satisfying when the flame catches and a new idea sweeps around the world.
For the first time in human history, it's becoming possible to see how to throw off more sparks and how to make sure that more of them catch fire. And yet, we must also be honest: The creative process will never be easy, no matter how much we learn about it. Our inventions will always be shadowed by uncertainty, by the serendipity of brain cells making a new connection.
Every creative story is different. And yet every creative story is the same: There was nothing, now there is something. It's almost like magic.
—Adapted from "Imagine: How Creativity Works" by Jonah Lehrer, to be published by Houghton Mifflin Harcourt on March 19. Copyright © 2012 by Jonah Lehrer.
10 Quick Creativity Hacks
1. Color Me Blue
A 2009 study found that subjects solved twice as many insight puzzles when surrounded by the color blue, since it leads to more relaxed and associative thinking. Red, on the other hand, makes people more alert and aware, so it is a better backdrop for solving analytic problems.
2. Get Groggy
According to a study published last month, people at their least alert time of day—think of a night person early in the morning—performed far better on various creative puzzles, sometimes improving their success rate by 50%. Grogginess has creative perks.
3. Daydream Away
Research led by Jonathan Schooler at the University of California, Santa Barbara, has found that people who daydream more score higher on various tests of creativity.
4. Think Like A Child
When subjects are told to imagine themselves as 7-year-olds, they score significantly higher on tests of divergent thinking, such as trying to invent alternative uses for an old car tire.
5. Laugh It Up
When people are exposed to a short video of stand-up comedy, they solve about 20% more insight puzzles.
6. Imagine That You Are Far Away
Research conducted at Indiana University found that people were much better at solving insight puzzles when they were told that the puzzles came from Greece or California, and not from a local lab.
7. Keep It Generic
One way to increase problem-solving ability is to change the verbs used to describe the problem. When the verbs are extremely specific, people think in narrow terms. In contrast, the use of more generic verbs—say, "moving" instead of "driving"—can lead to dramatic increases in the number of problems solved.
8. Work Outside the Box
According to a new study, volunteers performed significantly better on a standard test of creativity when they were seated outside a 5-foot-square workspace, perhaps because they internalized the metaphor of thinking outside the box. The lesson? Your cubicle is holding you back.
9. See the World
According to research led by Adam Galinsky, students who have lived abroad were much more likely to solve a classic insight puzzle. Their experience of another culture endowed them with a valuable open-mindedness. This effect also applies to professionals: Fashion-house directors who have lived in many countries produce clothing that their peers rate as far more creative.
10. Move to a Metropolis
Physicists at the Santa Fe Institute have found that moving from a small city to one that is twice as large leads inventors to produce, on average, about 15% more patents.
—Jonah Lehrer
A version of this article appeared Mar. 10, 2012, on page C1 in some U.S. editions of The Wall Street Journal, with the headline: How To Be Creative.