Tuesday, August 13, 2013

Exercise: 30 min. or 30 sec.?

The BBC science site has a provocative story on exercise.  The gist:
=======================
According to Dr Stuart Gray from the University of Aberdeen's musculoskeletal research programme, a key factor in reducing the likelihood of early death from cardiovascular disease could be high intensity exercise.

"The benefits do seem to be quite dramatic," he says.

He admits, though, that many in the medical establishment are still promoting moderate intensity exercise.

Gray's study has shown that short bursts of activity, such as sprinting or pedalling all-out on an exercise bike for as little as 30 seconds, result in the body getting rid of fat in the blood faster than exercising at moderate intensity, such as taking a brisk walk.

And getting rid of fat in the blood is important as it reduces the chances of suffering a heart attack.
========================
Whole thing: http://www.bbc.co.uk/news/magazine-21160526
# # #

Autopilot engaged

Nervous about flying?  Try to avoid planes with human pilots!  Here is a snippet from the BBC website's technology area.
--ww--

Aircraft manufacturer Airbus recently released its view of the future of aviation towards 2050 and beyond, and one of the things it stressed was the benefit of planes that can fly themselves. In an extreme proposal, it suggests passenger planes might fly together in flocks, which can result in huge energy savings. They would keep in sync by constantly monitoring their position relative to one another.
While everyone seems confident that the technical challenges of such visions can be overcome, there is perhaps one more significant hurdle to overcome - persuading the general public that a plane without a pilot is safe.
On that point, Professor Cummings says the data is increasingly in favour of unmanned systems. “About three years ago UAVs became safer than general aviation, meaning that more general aviation planes are crashing than UAVs, per 100,000 flight hours,” she says. “So UAVs are actually safer than a weekend pilot, flying a small plane.”
That may not be a huge surprise. But what is perhaps more telling is that last year UAVs became safer than highly trained military fighters and bombers. “I knew that was coming, and it’s one of the reasons I jumped into this field and left commercial piloting and military piloting behind,” says Prof Cummings.
# # #

Is America anti-intellectual?

I am attaching a long book review, written by an editor of the LA Review of Books, of a book that addresses the question of America's anti-intellectualism.  The reviewer gives the book a below-average grade, partly because he seems to know more about the topic than the author.
It's a nice read, especially for anyone who grew up while Einstein was a rockstar.  The bottom line is that America both praises and belittles intellectuals.  These days, Americans seem to reserve scorn for humanities intellectuals and praise for sci/tech ones.  The Occupy movement is singled out as evidence that liberals can be especially anti-intellectual.
--ww--


======================

Is America anti-intellectual? The jury is still out. One could make equally plausible cases for our country as a hotbed of hostility to organized intelligence and as a sort of paradise for the cleverest, a place that elevates intellectual sophistication (especially when it has economic or technological applications) above basic moral decency. We oscillate wildly between demonizing our intellectuals and deifying them; they appear to us, in turn, as nuisances, threats, and saviors. We cut their funding and then study how their brains function. We trust them with our economy, our climate, our media and our institutions, then rage against them for their failures—and then trust them all over again.
Of course, much depends on what is meant by “intellectual.” The term, which originated in France and entered the English language around the time of the Dreyfus Affair, is notoriously vague and unstable. Though in its most neutral sense it describes only a tendency toward speculative thought, it is very quickly made into a social category with determinate characteristics. Put a novelist, a philosopher, a physicist, a political analyst and a computer programmer in a room together and they’re likely to discover as many differences as similarities—provided they can understand each other at all—but all can be identified (for ease of condemnation, if nothing else) under this single heading. The very notion of “the intellectual” depends on the idea that, over and above what particular people happen to know or do, there is a social category that we can enshrine, or hold culpable. Like the notion of “the aristocrat” that it has in some ways replaced, the intellectual provides a screen on which to project aspiration and hatred, idealism and cynicism.
In Inventing the Egghead: The Battle over Brainpower in American Culture, the cultural historian Aaron Lecklider explores how “Americans who were not part of the traditional intellectual class negotiated the complicated politics of intelligence within an accelerating mass culture.” Here he follows the Italian Marxist critic Antonio Gramsci, who theorized a distinction between “traditional” (bourgeois) and “organic” (working-class) intellectuals, as well as in the vein of British cultural historians Richard Hoggart (The Uses of Literacy) and Raymond Williams (The Long Revolution). Against the received idea, which he associates with historians like Richard Hofstadter and Christopher Lasch, that “the masses” have always been reflexively anti-intellectual, Lecklider argues that, throughout the twentieth century, intelligence—or “brainpower,” as he prefers to say—was in fact highly valued by working-class people in a variety of contexts.
Lecklider’s agenda is a broadly populist one: he wants to defend the American working class from allegations of anti-intellectualism, and to redescribe what might look like hostility or resentment as a form of fascinated engagement. His story begins around the turn of the century, with what he calls a “mainstreaming of intelligence”: a mass movement, mediated by popular culture, to devalue traditional intellectuals and promote organic ones. “Cultural texts consumed by millions of ordinary men and women between 1900 and 1960 suggested all Americans were intellectually gifted,” Lecklider writes, “while deflating the presumptuous grandstanding of the traditional intellectual elite.” (In this, organic intellectuals have not operated so differently from traditional intellectuals, who have never been above undermining the stature of others in order to build up their own.)
In the book’s compelling first chapter, Lecklider discusses the attempts of Coney Island theme parks like Dreamland and Luna Park to bolster their educational aspects. (Dreamland, for instance, hosted the experimental baby incubators of Dr. Martin A. Couney in 1903, and in 1909 Luna Park was “disrobed of its sugar coating” and became the Luna Park Institute of Science.) At around the same time, the Chautauqua circuit, a 1904 extension of the popular adult-education gatherings held as far back as 1874 in upstate New York, provided a sort of fin-de-siècle equivalent to today’s TED Talks. Chautauqua lectures, described by Theodore Roosevelt as “the most American thing in America,” were held on topics of political and scientific interest, with speakers ranging from William Jennings Bryan to Mascot the educated horse. “Audiences for the Chautauquas were comprised of women and men without any particular standing as intellectuals or claims to expertise,” Lecklider writes, “and though they were occasionally glossed with the narrative sheen of social uplift, the assembly programs were designed to offer education to undistinguished audiences and to imbue mass culture with a gleam of shimmering smartness.” At the same time, brainpower was being valorized and mobilized by working-class labor organizers, and by African-American leaders involved in the 1920s “New Negro” movement.
Yet most representations of traditional intellectuals consumed by working-class people were dismissive or derisive. Much of Inventing the Egghead deals with images of scientists, college students, professors, and other conspicuously intellectual types gleaned from the popular culture of the first half of the twentieth century. Lecklider finds that before 1920, when “less than 5 percent of eighteen- to twenty-four-year-olds were enrolled in college … college students and college life were largely maligned within popular culture representations.” Tin Pan Alley tunes like “There’s a Lot of Things You Never Learn at School” (1902) and vaudeville skits like The Freshman and the Sophomore (1907) routinely mocked the uselessness of academic knowledge, even suggesting that the attainment of higher education would be likely to depress one’s earning power. “The brainpower of the working class was being built up even as the value of education was being knocked down,” Lecklider writes, “and a critical tool for evening social inequalities was disparaged as frivolous but also accessible”:
Education’s value was reduced to conspicuous superficiality, and it took the intelligence of the working class to uncover this terrible ruse. At the same time, seeing through the mystique of education’s allure made brainpower into a common stock, a step that had the effect of rendering education accessible and desirable.
Lecklider realizes that there’s something self-defeating about this dynamic, in which brainpower is prized while education is despised: “Representations of college students as privileged, decadent, unambitious, intellectually challenged, and frivolous,” he writes, “thus walked a fine line between deriding the unmistakable class privilege implicit in the nation’s unequal educational system and praising the real social benefits of higher education that had the potential to redress precisely this inequality.” As long as the project of higher education was seen as nothing but a “terrible ruse,” campaigns for a more equitable distribution of intellectual resources were virtually dead in the water.
While intellectuals were often satirized as sexless or antisocial, there was also a strong strain of class envy in popular denunciations of higher education, often expressed as sexual jealousy. College professors and students alike were assumed to be priapic hedonists, and were presented as such in popular songs like “Watch the Professor” — a sexualization of the educational experience which proceeds apace today, the horny teacher still being a master trope of modern porn. Such songs, Lecklider suggests, served a dual purpose: they provided an imaginary resolution of class conflict along the axis of male solidarity (all men, no matter how highly educated, are really only interested in one thing), as well as giving expression to a sense of embittered social frustration (those college guys get all the girls).
This ambiguity as to whether intellectuals were impotent wimps or powerful predators persists in representations of famous intellectuals of the time, as Lecklider shows in his fascinating discussion of the figure of Albert Einstein, whose theory of relativity was an improbable cause célèbre in 1920s America. Einstein, Lecklider points out, “was rarely depicted alongside other scientists. Instead he was represented accompanying political figures, ordinary women and men, or, most often, alone.” He was commonly depicted as an American immigrant, and as a Swiss Jew rather than a German, and he “was made to seem somehow feminine in the manly world of physics,” his femininity somehow defusing popular anxiety about the potential threat posed by his scientific brilliance. Einstein’s disheveled hairstyle and cultured tastes got him tagged immediately as a “long-hair,” a term that, in the twenties, connoted both effeminacy and excessive sophistication.
The most provocative thing about Einstein, though, was neither his foreignness nor his femininity but the inscrutability of his theory. “The urgent desire among nonspecialized audiences to understand relativity was in part driven by a populist impulse toward obscuring the division between the educated elite and the common masses,” Lecklider writes. “Particularly troubling was the fact that Einstein’s theory of relativity openly defied common sense; it implied that ordinary people’s observations were always wrong. … Ordinary Americans were instructed that they could not believe their own eyes.” Newspaper accounts of Einstein’s visits to America tried to assuage this insecurity by emphasizing that elite scientists, too, were “agog” at the theory of relativity.
Ultimately, it was the progress of science, of course — and not the pure theoretical science of relativity, but practical applied science, especially when those applications were to business or industry — that changed ordinary Americans’ opinions on the value of intellectuals. (Lecklider is not very attentive to distinctions between humanities and the sciences; I will return to this in a moment.) “Under scientific management,” Lecklider observes, “the cultural value of brainpower assumed a distinctly capitalist hue.” In 1915, the Australian physicist T. Brailsford Robertson published an article in Scientific Monthly entitled “The Cash Value of Scientific Research.” Around the same time, Frederick Winslow Taylor’s principles of “scientific management” began to decisively influence the world of industry. While this incursion of science into the working day did much to encourage working-class anti-intellectualism—science being associated with management, and thus viewed with suspicion—it also consolidated the link, in the public imagination, between intellect and power.
Later still, in the 1930s, the country’s dire economic circumstances forced a revaluation of expert intelligence, as the same mental forces that had driven and directed capitalism were prevailed upon to deal with its collapse. “Brainpower was understood to be an important tool for emerging from the Depression,” Lecklider states. For the moment, at least, ordinary Americans’ terror of disaster trumped their hatred of elitism:
Though fears persisted concerning the potential for concentrated brainpower to create divisions between smart and ordinary Americans or to set up an intellectual class that disavowed the putatively populist foundation for American democratic praxis, this potential conflict was increasingly depicted as a small price to pay for emerging from the Depression as quickly as possible.
Thus one unlikely result of “the Red Decade,” according to Lecklider, was that formerly suspicious proletarians learned to stop worrying and love elite intellectuals, or at least to trust them for the time being. The justification for brainpower shifted again in the 1940s, when it was “reframed as an essential weapon for victory in World War II,” the lawyers, economists, and political scientists of Roosevelt’s brain trust being replaced in the public’s affections by nuclear physicists like J. Robert Oppenheimer and his colleagues on the Manhattan Project (to which Lecklider devotes an entire chapter). At the same time that the masses were getting behind the elites, the elites began to see, with renewed clarity, the point of educating the masses, since “[i]ntelligent Americans were less likely to be seduced by fascism, and brainpower was positioned as critical to a broader antifascist project.” This logic carried over to the 1950s as well, with communism taking the place of fascism.
Once brainpower was reconceived as a basic building block of American dominance, the terms by which it was appraised as a social characteristic inevitably changed. Lecklider registers this change on a lexical level, by chronicling the rise of the curious word “egghead” in the postwar US vernacular. “Before the 1950s, ‘egghead’ was an innocuous term referring to nothing more troubling than a person who possessed a bald, oval head.” Its invidious modern use dates to approximately 1952, when the right-wing columnist Stewart Alsop used it to describe the supporters of Presidential candidate Adlai Stevenson. (Stevenson took the teasing in stride, playing on the continuity of anti-intellectualism and anti-communism by quipping: “Eggheads of the world, unite! You have nothing to lose but your yolks!”) The term took off quickly in popular culture: among the examples discussed are Frank Fenton’s 1954 science fiction story “The Chicken or the Egghead,” pop singer Jill Corey’s 1956 single “Egghead,” a 1957 Saturday Evening Post article about Princeton, New Jersey entitled “I Live Among the Eggheads,” and Molly Thatcher Kazan (wife of Elia)’s 1957 play The Egghead. (Somehow Lecklider omits the Marvel Comics supervillain Egghead, who first appeared in a 1962 issue of Tales to Astonish.)
The transition from “long-hair” to “egghead” as a favored term of anti-intellectual abuse deserves some scrutiny. The two terms aren’t quite congruent; a decent 21st century equivalent of the former might be “hipster,” with its implication of pretense and smug self-superiority, while an equivalent to the second would be “nerd” or “geek,” connoting haplessness and myopic concentration on a single area of expertise. Both are terms of derision, but the derision is of a qualitatively different kind. As Lecklider notes, the feminine qualities of the long-hair are stripped away from the egghead almost entirely: “The egghead’s smoothness was made phallic, engendering the egghead as male. The egghead’s delicate fragility and curvy roundness became feminine features, though not explicitly female. The oval shape of the egg translated into male baldness.” More to the point, a deliberate sartorial choice (wearing one’s hair long) gave way to a biological property (the shape of one’s cranium), thus stripping away any sense of agency that once attached to the intellectual life. Eggheads, like nerds, were helplessly intellectual; they didn’t choose to be the way they were, but simply made the best of it.
Most importantly—and Lecklider fails to make as much of this as he might—the passage from “long-hair” to “egghead” indexed a postwar shift from the humanities to science as the master discourse of American intellectualism (and thus, logically, anti-intellectualism as well). Long-hairs lounged around and loved classical music, fine art and literature; eggheads, meanwhile, toiled away at fearsomely difficult calculations and solved fundamental mysteries of the universe. While both were ridiculous, the former were more contemptible, because their sophistication benefited only themselves, whereas the latter produced something for the common good, even if they appeared impenetrable or insufferable in the process.
What the passage from “long-hair” to “egghead” signified, perhaps, was not a change in the relation between traditional and organic intellectuals, but the division of the traditional intellectual class into “the two cultures,” decried by C.P. Snow in his famous 1959 lecture of that title. And as intellectuals were divided, so, inevitably, were anti-intellectuals. If, in the 1950s, the working class was no longer able to “position [itself] as intelligent … by deflating the intellectual pretensions of others,” it may have been because they had developed strategies to deflate a wholly different kind of intellectual. The cultural long-hair’s terrible ruse could be seen through and derided, while the scientific egghead’s implacable logic could only be accepted or ignored. Ordinary Americans could imagine, and desire, a world without long-hairs and their decadent speculations; but a world without the productive brainpower of eggheads seemed, at that point, to be unthinkable.
Lecklider’s book convinces me that the representational shift in the 1950s from “long-hair” to “egghead” is indeed an important one—but it is one that Lecklider doesn’t quite do justice to, in part because he elides the distinction between scientific and humanist intellectuals. He also perhaps overstates the reactionary quality of postwar anti-intellectualism. “The egghead embodied the paradoxes of brainpower in the Cold War,” he writes, “by exploiting the contradiction between a cultural desire to possess the social and political power associated with white men and a growing anxiety about the influence of the left, homosexuality, and the black civil rights movement in shaping American life.” As the above passage makes clear, Lecklider reads the “egghead” first and foremost as a conservative trope: in his view, “the popular use of the egghead label served chiefly to restore the ideological hegemony of a virulently white, masculine liberalism.” But it’s too convenient to pretend that scorn for eggheads, particularly in the realms of science and politics, has come only from the racist, sexist right. Certainly there’s a strong tradition, which Lecklider does not discuss, of anti-technocratic sentiment on the egalitarian left which can at times verge on anti-intellectualism, exemplified by the writings of sociologists like C. Wright Mills; we see it today, most obviously, in the neo-anarchism of David Graeber, Rebecca Solnit, and others associated with the Occupy movement.
Furthermore, popular culture—the vehicle, Lecklider shows, of so many struggles over brainpower and its representation in the first half of the twentieth century—has today been thoroughly colonized by the egghead, and by “geek culture.” The crowd-pleasing anti-intellectualism of The Freshman and the Sophomore has given way to the cuddly eggheads of The Big Bang Theory and the startup-worship of films like The Internship. Again, these developments have much to do with the rise of the scientific and technological elite and the relative decline of the humanist one. If the paradigmatic intellectual of the 20s was the artist and of the 50s the scientist, today it’s the tech CEO. (It seems worth noting that, in our own time, there has been little to no populist resentment of Silicon Valley or the tech industry.)
All of these developments are outside of Lecklider’s remit; his interest is in a particular historical period, and yet it’s worth pondering the relevance of his narrative to our own moment, especially since Inventing the Egghead ends with something like a call to arms. “Reclaiming the history of an organic intellectual tradition in American culture represents a starting point for envisioning intelligence as a shared commodity across social classes,” Lecklider writes; “wrested from the hands of the intellectuals, there’s no telling what the brainpower of the people has the potential to accomplish.” The people, yes; but in our time, hasn’t Glenn Beck, say, wrested brainpower from the hands of the intellectuals? Lecklider begins his book with early 21st century debates over the intelligence of George W. Bush, but he makes no mention of the Tea Party or modern libertarianism—arguably the closest thing the US has to a populist movement in thought. Criticizing Hofstadter, Lasch, and Lewis Coser, Lecklider writes that “1960s historians prescribed a healthy dose of intellectuals to resolve the social problems of Cold War-era disorder.” Yet he himself seems to prescribe a healthy dose of anti-intellectuals to resolve our own problems, a solution that seems every bit as naïve, and perhaps more dangerous.
Inventing the Egghead is a provocative, intelligent, but ultimately unsatisfying book. Lecklider’s intention to recover and valorize a history of twentieth-century working-class intellectualism keeps him from asking more fundamental questions about the role of the intellectual in American society. Furthermore, his use of “popular culture” as a site of reflection of working-class belief and desire is highly problematic; he never once mentions the fact that many of the songs, books, and films he discusses were composed or crafted by middle-class artists, or that they were consumed and enjoyed by people of different classes. (Even traditional intellectuals have been known to indulge now and then.) If there’s a lesson in the archive that Lecklider so usefully assembles for us in Inventing the Egghead, it’s that “the intellectual” is a joint creation of the haves and have-nots, the clergy and the laymen, and a site of struggle as well as collaboration and agreement. We shouldn’t speak, then, of wresting anything from anyone’s hands, but of thinking more about what we hold in common.
 
Evan Kindley is an editor for the LA Review of Books.

What scientists do and why we need them

Here is the text of a delightful review of two books on science history.  The reviewer commands us to read the books so we learn "what scientists do and why we need them", but you can get the gist of that from the review itself.  As you know, the War on Curiosity is still raging, so I added emphasis to a sentence about that.
--ww--

Curiosity: How Science Became Interested in Everything, By Philip Ball, University of Chicago Press, 465 pp., $35
Brilliant Blunders: From Darwin to Einstein—Colossal Mistakes by Great Scientists That Changed Our Understanding of Life and the Universe, By Mario Livio, Simon & Schuster, 341 pp., $26
Aristotle called it aimless and witless. St. Augustine condemned it as a disease. The ancient Greeks blamed it for Pandora’s unleashing destruction on the world. And one early Christian leader even pinned the fall of Lucifer himself on idle, intemperate, unrestrained curiosity.
Today, the exploration of new places and new ideas seems self-evidently a good thing. For much of human history, though, priests, politicians, and philosophers cast a suspicious eye on curious folks. It wasn’t just that staring at rainbows all day or pulling apart insects’ wings seemed weird, even childish. It also represented a colossal waste of time, which could be better spent building the economy or reading the Bible. Philip Ball explains in his thought-provoking new book, Curiosity, that only in the 1600s did society start to sanction (or at least tolerate) the pursuit of idle interests. And as much as any other factor, Ball argues, that shift led to the rise of modern science.
We normally think about the early opposition to science as simple religious bias. But “natural philosophy” (as science was then known) also faced serious philosophical objections, especially about the trustworthiness of the knowledge obtained. For instance, Galileo used a telescope to discover both the craters on our moon and the existence of moons orbiting Jupiter. These discoveries demonstrated, contra the ancient Greeks, that not all heavenly bodies were perfect spheres and that not all of them orbited Earth. Galileo’s conclusions, however, relied on a huge assumption—that his telescope provided a true picture of the heavens. How could he know, his critics protested, that optical instruments didn’t garble or distort as much as they revealed? It’s a valid point.
Another debate revolved around what now seems like an uncontroversial idea: that scientists should perform experiments. The sticking point was that experiments, almost by definition, explore nature under artificial conditions. But if you want to understand nature, shouldn’t the conditions be as natural as possible—free from human interference? Perhaps the results of experiments were no more reliable than testimony extracted from witnesses under torture.
Specific methods aside, critics argued that unregulated curiosity led to an insatiable desire for novelty—not to true knowledge, which required years of immersion in a subject. Today, in an ever-more-distracted world, that argument resonates. In fact, even though many early critics of natural philosophy come off as shrill and small-minded, it’s a testament to Ball that you occasionally find yourself nodding in agreement with people who ended up on the “wrong” side of history.
Ultimately, Curiosity is a Big Ideas book. Although Newton, Galileo, and others play important roles, Ball wants to provide a comprehensive account of early natural philosophy, and that means delving into dozens of other, minor thinkers. In contrast, Mario Livio’s topsy-turvy book, Brilliant Blunders, focuses on Big Names in science history. It’s a telling difference that whereas Ball’s book, like a Russian novel, needs an appendix with a cast of characters, Livio’s characters usually go by one name—Darwin, Kelvin, Pauling, Hoyle, and Einstein.
Livio’s book is topsy-turvy because, rather than repeat the obvious—these were some smart dudes—he examines infamous mistakes they made. He also indulges in some not always convincing armchair psychology to determine how each man’s temperament made him prone to commit the errors he did.
For those of us who, when reading about such thinkers, can’t help but compare our own pitiful intellects with theirs, this focus on mistakes is both encouraging and discouraging. It’s encouraging because their mistakes remind us that they were fallible, full of the same blind spots and foibles we all have. It’s discouraging because, even at their dumbest, these scientists did incredible work. Indeed, Livio argues that their “brilliant blunders” ended up benefiting science overall.
Take Kelvin’s error. During the heyday of William Thomson, Lord Kelvin, in the later 1800s, various groups of scientists had an enormous row over the age of Earth, in large part because Darwin’s theory of natural selection seemed to require eons upon eons of time. Unfortunately, geologists provided little clarity here: they could date fossils and rock strata only relatively, not absolutely, so their estimates varied wildly. Into this vacuum stepped Kelvin, a mathematical physicist who studied heat. Kelvin knew that Earth had probably been a hot, molten liquid in the past. So if he could determine Earth’s initial temperature, its current temperature, and its rate of cooling, he could calculate its age. His initial estimate was 20 million years.
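Kelvin's style of argument is easy to reproduce. A minimal sketch, using the standard half-space conductive-cooling model and illustrative round numbers (the specific values below are my assumptions, not Kelvin's actual inputs):

```python
import math

# Kelvin treated Earth as a uniform body cooling by conduction from an
# initially molten state. Heat-flow theory gives the surface temperature
# gradient after time t as  G = T0 / sqrt(pi * kappa * t),
# which inverts to an age estimate:  t = T0**2 / (pi * kappa * G**2).

T0 = 2000.0          # assumed initial temperature of molten Earth, in kelvin
kappa = 1.2e-6       # assumed thermal diffusivity of rock, m^2/s
gradient = 1 / 27.8  # observed geothermal gradient, K per meter (~1 K per 28 m)

age_seconds = T0**2 / (math.pi * kappa * gradient**2)
age_years = age_seconds / 3.156e7  # seconds per year

print(f"Estimated age: {age_years / 1e6:.0f} million years")
```

With these inputs the model lands in the tens of millions of years, the same ballpark as Kelvin's figure. The error lay not in the arithmetic but in the assumptions: the model ignores radioactive heating and convection in the mantle, which is why the true age is hundreds of times larger.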

For various reasons, Kelvin’s answer fell short by two orders of magnitude (the current estimate is 4.5 billion years). Worse, Kelvin used his calculation to bash Darwinism, a vendetta that ended up tarnishing his reputation. Nevertheless, his precisely quantified arguments forced geologists to sharpen their own work in order to rebut him, and eventually they too began to think of Earth as a mechanical system. A nemesis can bring out the best in people, and Kelvin’s mistake proved a net good for science.
Ball’s and Livio’s books help answer an important question: why bother reading science history? Scientists themselves, after all, are notoriously uninterested in the subject, probably for good reason. Science proceeds by discarding unworkable ideas, and every hour spent poring over arcane theories is time not spent refining your own experiments. But as Ball points out, old debates have a way of reemerging in modern guises. For instance, early objections to natural philosophy—the “unnatural” experiments, the prodigal waste of money on expensive toys—echo modern objections to, say, genetically modified food and the Large Hadron Collider.
Similarly, Livio shows how Einstein’s blunder has risen, phoenixlike, in recent years. In forming his theory of general relativity, Einstein added a so-called cosmological constant to one of his field equations: a repulsive force that countered gravity and (somewhat like air pressure) kept the universe from collapsing in upon itself. Einstein later struck the constant out, discarding it as ugly, ad hoc, and unnecessary. But two teams of scientists resurrected it in the 1990s to explain why our universe is expanding faster than we once realized. On cosmic scales, Einstein’s once-discarded constant may be the dominant force in the universe. (See how frustrating this is? Even when he was wrong, Einstein was right!)
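The constant's role can be seen directly in the equations (standard textbook forms, not drawn from the review):

```latex
% Einstein's field equations with the cosmological constant \Lambda:
G_{\mu\nu} + \Lambda\, g_{\mu\nu} = \frac{8\pi G}{c^{4}}\, T_{\mu\nu}
% In the Friedmann cosmology this yields the acceleration equation
\frac{\ddot{a}}{a} = -\frac{4\pi G}{3}\left(\rho + \frac{3p}{c^{2}}\right) + \frac{\Lambda c^{2}}{3}
% A positive \Lambda contributes positive acceleration: it is the repulsive
% term that balanced gravity in Einstein's static model, and the same term
% now describes the observed accelerating expansion (dark energy).
```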
Reading science history might not fix the bugs in your equipment or help you secure a new grant, but it can provide a larger perspective on what scientists do and why we need them. Science history doesn’t give all the answers, but it does help explain why we seek the answers in the first place.
# # #

No tipping

Here's a blog post that I found on Quartz (qz.com) that addresses one of my pet peeves.  After admiring the Japanese for regarding tipping as insulting, and the Europeans for building fixed service charges into the bill, I have to wonder why America is so backward.
--ww--

Tipping, as a compensation scheme, is great for everyone.
Restaurant customers like tipping because it puts them in the driver’s seat. As a diner, you control your experience, using the power of your tip to make sure your server works hard for you.
Restaurant servers like tipping because it means their talent is rewarded. As a great server, you get paid more than your peers, because you are a better worker.
Restaurant owners like tipping because it means they don’t have to pay for managers to closely supervise their servers. With customers using tips to enforce good service, owners can be confident that servers will do their best work.
There’s only one problem: none of this is actually true. I know because I ran the experiment myself.
For over eight years, I was the owner and operator of San Diego’s farm-to-table restaurant The Linkery, until we closed it this summer to move to San Francisco. At first, we ran the Linkery like every other restaurant in America, letting tips provide compensation and motivation for our team. In our second year, however, we tired of the tip system, and we eliminated tipping from our restaurant. We instead applied a straight 18% service charge to all dining-in checks, and refused to accept any further payment. We became the first and, for years, the only table-service restaurant in America where you couldn’t pay more money than the amount we charged you.
You can guess what happened. Our service improved, our revenue went up, and both our business and our employees made more money. Here’s why:
  • Researchers have found (pdf) that customers don’t actually vary their tips much according to service. Instead they tip mostly the same every time, according to their personal habits.
  • Tipped servers, in turn, learn that service quality isn’t particularly important to their revenue. Instead they are rewarded for maximizing the number of guests they serve, even though that degrades service quality.
  • Furthermore, servers in tipping environments learn to profile guests (pdf), and attend mainly to those who fit the stereotypes of good tippers. This may increase the server’s earnings, while creating negative experiences for the many restaurant customers who are women, ethnic minorities, elderly or from foreign countries.
  • On the occasions when a server is punished for poor service by a customer withholding a standard tip, the server can keep that information to himself. While the customer thinks she is sending a message, that message never makes it to a manager, and the problem is never addressed.
You can see how tipping promotes and facilitates bad service. It gives servers a choice between doing their best work and making the most money. While most servers choose to do their best work, forcing them to choose one or the other is bad business.
By removing tipping from the Linkery, we aligned ourselves with every other business model in America. Servers and management could work together toward one goal: giving all of our guests the best possible experience. When we did it well, we all made more money. As you can imagine, it was easy for us to find people who wanted to work in this environment, with clear goals and rewards for succeeding as a team.
Maybe it wouldn’t work in every restaurant, in every city. Maybe the fact that it worked so well for us was due to some unique set of circumstances. Then again, other service industries like health care and law aren’t exactly lining up to adopt tips as their primary method of compensation. So maybe we’re all just being suckered into believing tipping works.
It’s something you can think about, at least, the next time you’re waiting on a refill of iced tea.
# # #