Archive for Book Review
Jason Anthony knows about the cuisine of Antarctica. He spent eight seasons on the southern continent in the U.S. Antarctic Program. In Hoosh: Roast Penguin, Scurvy Day, and Other Stories of Antarctic Cuisine, he uses this knowledge to examine the culture and history of food in Antarctic exploration. It is a new approach to a topic that often considers heroism, flag planting, and sledging distances but not the stuff that most occupied the thoughts of polar explorers: food. Look for Hoosh in November, when it comes out from the University of Nebraska Press.
Excerpt from Hoosh, Chapter 4, “Meat and Melted Snow,” with the permission of Jason Anthony and University of Nebraska Press:
Early travel into interior Antarctica required a level of planning and austerity unsurpassed in exploration until humans ventured into space. When the men of the heroic age left their coastal huts to explore the hinterlands, they left behind the last vestige of ordinary life. No stockpiled crates of food, no fresh meat squawking outside the door; the continent offered only cold air to breathe and hard snow to melt. Sustenance was limited to what they could carry, and they could not carry much, because, paradoxically, the less food weight they carried the farther they could go. Up to a point. Robert Scott expressed the dilemma elegantly: “The issue is clear enough: one desires to provide a man each day with just sufficient food to keep up his strength, and not an ounce beyond.”
Success in sledging depended on embracing the austerity with which Antarctica greeted these men. Sledging food had to be complete but also simple, concentrated, dehydrated, compressed, calculated, and packed tightly. Most expeditions brought pemmican, a perfect endurance food used by Native Americans for millennia, and converted it on the trail to “hoosh,” their stew of pemmican and melted snow, usually thickened with crushed biscuit. This was supplemented by a carbohydrate, usually biscuits, and often some modicum of sugar, caffeine, and dairy fat. Units of each were calculated in volume and weight and accumulated only up to the thin line between starvation and distance desired.
Antarctic sledging began with Frederick Cook teaching the craft to Roald Amundsen on the Belgica. On January 30, 1898, expedition leader Adrien de Gerlache, Cook, Amundsen, and two others undertook the first Antarctic sledge journey, climbing a short distance to a peak on Brabant Island and camping for seven days. Difficult weather confined them to a “sybaritic life” in the tent, wrote Gerlache, lounging amid “the aromas of cocoa or the good and cheering smell of pea soup.” Then, at the end of winter, Amundsen, Cook, and Lecointe undertook another short sledge journey toward an iceberg from the ice-trapped Belgica. Cook reports that in their quest for fresh meat Amundsen “was solemnly appointed chairman of the order of the penguin.” They tried eating some lousy Swiss soup mix, but threw it away in order to cook up penguin breast in the same pan. That was fine, but when they made hot chocolate in the pan, remnants of the soup and penguin made it “filth” to Lecointe. He had no idea how common such filth would become in Antarctic cuisine.
Despite these pleasant origins, Antarctic sledging was often a profound exercise in risk management. A balance had to be struck between sledging goals and the reality of starvation, between ambition and death. When dogs or ponies pulled the sledge, the burden was shared, while manhauling – when men stepped into harness – was one of the most strenuous human activities ever conceived. It was a sort of voluntary slave labor in the name of patriotism and knowledge, seeking glory in laying claim to the last great blank spot on the map. Their bleak journeys were often fed more with idealism than hoosh.
In the tradition of nineteenth-century European expeditions to remote areas of Africa, Asia, and the Arctic, some heroic-age expeditions brought beasts of burden to Antarctica that in hard times became meat for the hoosh. Ponies and dogs hauled and then became fodder. After his desperate experience on the ice, Douglas Mawson reasoned that “in an enterprise where human life is always at stake, it is only fair to put forward the consideration that the dogs represent a reserve of food in cases of extreme emergency.” Few things are as efficient on a life-or-death journey as the ability to eat your transportation.
Mostly this was a British affair. Of eighteen heroic-age expeditions, fifteen brought beasts of burden, but only six (four British, one Australian, one Norwegian) converted them into food. Scott’s Discovery expedition fed dogs to dogs as provisions dwindled. On Shackleton’s Nimrod expedition, Siberian ponies pulled sledges full of supplies for his depots on the Ross Ice Shelf before their worn-out carcasses became supplies in the depots. Shackleton was careful to eat this fresh, heavy meat before the lightweight, concentrated pemmican they could carry farther. Sometimes they chewed raw cubes of it as they walked. “One point which struck us all,” Shackleton wrote later, “was how man’s attitude towards food alters as he goes South. At the beginning, a man might have been something of an epicure, but we found that before he got very far even raw horse-meat tasted very good.” Best of all, he said, was blood from a butchered pony frozen into an icy mass in the snow, which was then boiled to thicken the hoosh. Socks, Shackleton’s last pony, fell into a crevasse just hours before he was due to be shot and cut up for food. Shackleton later posited that “the loss of Socks, which represented so many pounds of meat,” might have cost them the Pole.
In Shackleton’s shadow, Scott returned in the Terra Nova to follow the same path to the Pole, using the same ill-suited ponies as transport. As the first five were shot along the trail, each made four or five meals for the dog teams. Scott reported that his men enjoyed the change too: “Tonight we had a sort of stew fry of pemmican and horseflesh and voted it the best hoosh we had ever had on a sledge journey.”
Sometimes it was extreme necessity that drove the men to eat the burdened beasts. Douglas Mawson and Xavier Mertz resorted to eating their dogs only when all other options had disappeared into a black crevasse. Shackleton’s Endurance tale makes it plain that once the disintegration of Weddell Sea ice forced them into crowded boats, there would be no place for their beloved dogs. Shackleton ordered them shot and eaten. It tasted “just like beef,” he said, “but, of course, very tough.”
The name most associated with eating domesticated animals in Antarctica is Roald Amundsen, whose slaughter of twenty-four healthy dogs is infamous. Amundsen’s Butcher’s Shop, like Scott’s Shambles Camp, marked not just the end of these animals’ lives but also the place where carcasses were dressed for consumption. “Great masses of beautiful fresh, red meat, with quantities of the most tempting fat, lay spread over the snow,” wrote Amundsen, a hungry Norwegian epicure for whom the dogs’ corpses recalled “memories of dishes on which the cutlets were elegantly arranged side by side, with paper frills on the bones, and a neat pile of petit pois in the middle.”
The story here is neither of epicurean savagery nor even of human hunger, but of transport management. Amundsen, the most professional and shrewd of polar explorers, created a calculus of weights carried and weights consumed for every day of the roundtrip journey to the Pole. He related this to the pulling power of a dog – how many pounds on the sledge each dog could haul. As food and fuel were consumed, weight on the sledge diminished, and at a certain point that lost weight would equal a dog’s pulling power and the dog became superfluous. Amundsen then related “the average weight of edible flesh of a dog and its food value when eaten by the others. By these calculations,” he wrote, “I was able to lay out a schedule of dates upon which dog after dog would be converted from motive power into food.” In this analysis, some dogs were killed so others lived comfortably on the trail. After twenty-four dogs were killed at the Butcher’s Shop and after the Norwegians had been to the Pole, six more were slaughtered, one at a time, so the surviving dogs actually gained weight on the return journey. The open question is how many more dogs Amundsen might have brought home safely had he not stuck to his calculation so firmly.

Though Amundsen shows real affection for individual dogs throughout the story, he was a man on a mission. The animals were slaves to his cause, and so completely did Amundsen believe in that cause that he could write casually of their destruction: “I must admit that [the cutlets] would have lost nothing by being a little more tender, but one must not expect too much of a dog.” In fact, Amundsen expected everything of them, and he got it.
We’re free to fly the crimson sky
the sun won’t melt our wings tonight
take me higher
you take me higher
“Even Better Than the Real Thing,” U2
In 1996, professor Richard Bartle wrote that explorers “try to find out as much as they can… mapping [the world's] topology.” Bartle was not talking about astronauts or cavers, but gamers. A developer of Multi-User Dungeons (MUDs), Bartle challenged the idea of MUDs as games in the traditional sense of the word. Rather, they were complex social environments that attracted different players for different reasons: to gain points, to socialize with others, to kill opponents, or to explore the game environment.
To call a basement-dwelling, pajama-wearing gamer an explorer might seem absurd. There is a difference between exploring virtual worlds and real ones. Still, Bartle’s paper raises interesting questions. Is an explorer defined by places traveled, by worldly action? Or is “explorer” an identity, something that exists as a mode of personality? If the latter, does the real world matter at all? If so, how much? What is the role, if any, of simulation within the field of exploration?
This last question may seem better suited for science-fiction literature than sociology. The sci-fi world is populated by virtual travelers: Ender Wiggin of Ender’s Game, Neo of The Matrix, CLU of Tron, Henry Case of Neuromancer, and Jake Sully of Avatar. The list is long.
Yet simulations exist in the “real world” of exploration too. NASA conducts a number of “analog” expeditions (in the desert, in the Arctic, and underwater) to provide training and allow troubleshooting for other missions. Says NASA:
Analogs provide NASA with data about strengths, limitations, and the validity of planned human-robotic exploration operations, and help define ways to combine human and robotic efforts to enhance scientific exploration.
They have other functions too. Anthropologist Valerie Olson points out that analog missions function as justifications for the broader idea of human spaceflight. The analog mission community tends to see these simulations as more “real” than others: authentic human programs rather than robotic expeditions or computer simulations.
The NASA Extreme Environment Mission Operations (NEEMO) project, for example, takes pride in the danger and scientific rigor of each expedition. Says one NEEMO technician: “This is real and real shit happens.” [Quoted from Olson, “American Extreme: An Ethnography of Astronautical Visions and Ecologies,” Ph.D. thesis, Rice University, p. 63]
Yet “mere” computer simulations also contribute to modern exploration in ways that cannot be ignored. X-15 test pilots such as Neil Armstrong (yes, that Neil Armstrong) used flight simulators to train, preparing themselves for the difficult conditions of hypersonic travel 65 miles (100 km) above the earth.
In the end, however, simulators could not adequately prepare pilots for the challenges of flying in the upper atmosphere. At lower altitudes, the X-15 behaved like a plane, and pilots relied on wing surfaces to steer through an ocean of air. At higher altitudes the X-15 acted like a rocket, and pilots used reaction thrusters to change direction. Moving from one set of controls to the other at the boundaries of space, however, proved extremely difficult, especially when traveling 4,000 mph (6,500 km/h).
Engineers at North American solved this problem by placing one of the X-15 flight simulators (the MH-96) into the X-15 itself. The pilot would, in effect, fly the simulator. The simulator then translated the pilot’s actions to the aircraft. As David Mindell writes in his book Digital Apollo:
The MH-96 could cause the “real” X-15 to fly like an “ideal” one, which would make it behave exactly the same under all flight conditions, from the vacuum of space right down to the ground. It would automatically mix the reaction controls and aerodynamic controls, so that the pilot only needed one control stick, whether flying in the atmosphere or in space, or during reentry. [Mindell, 58]
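The blending Mindell describes can be caricatured in a few lines of code. This is a loose sketch of the idea, not the MH-96’s actual control law: the thresholds and the linear blend are invented for illustration.

```python
# Hypothetical sketch (not the MH-96's real control law): split a single
# stick command between aerodynamic surfaces and reaction thrusters
# according to dynamic pressure, so one stick works from ground to space.

def blend_controls(stick_cmd: float, dynamic_pressure: float,
                   q_min: float = 50.0, q_max: float = 300.0):
    """Return (surface_cmd, thruster_cmd) for one stick command.

    dynamic_pressure (lb/ft^2) is high in thick air and near zero in
    space; q_min and q_max are made-up thresholds for illustration.
    """
    # Fraction of authority given to aerodynamic surfaces:
    # 0 in vacuum, 1 in thick air, linear in between.
    frac = min(max((dynamic_pressure - q_min) / (q_max - q_min), 0.0), 1.0)
    surface_cmd = stick_cmd * frac
    thruster_cmd = stick_cmd * (1.0 - frac)
    return surface_cmd, thruster_cmd

print(blend_controls(1.0, 0.0))    # in space: all authority to thrusters
print(blend_controls(1.0, 400.0))  # in thick air: all authority to surfaces
```

The design point survives the caricature: the pilot commands an “ideal” aircraft, and the system decides moment to moment how to realize that command in whatever medium the plane happens to be flying through.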
In short, simulators were not just for practice: they were an integral part of the mission itself. As for the test pilots, they were not entirely unlike the pj-clad gamer holed up in the basement. Humans can only survive at the boundaries of space by being protected from space. While the pilot/astronaut is going to places never traveled, she is doing so cocooned within a space suit, cockpit, and environmentally controlled capsule. Says Mindell:
The X-15 was an unusual craft to fly. The pilot could not see the nose, and he could not see the wings. His full pressure suit wrapped him up tight and isolated him from the outside world. He could not feel or touch anything directly other than the suit and gloves. He could smell nothing other than the pure oxygen he was breathing. In [test pilot Milt] Thompson’s words, “I was in my own little world. I was comfortable and secure and protected from harm.” [Mindell, Digital Apollo, 54]
This is the irony of exploration technology in general, and simulation technology in particular: they allow us to go longer, farther, and faster even as they prevent us from experiencing such feats directly. They take us higher into the universe even as they keep it out of reach.
There is a scholar, call him Mr X, who received his training within the academy, but who found it wasn’t enough. He wanted more: to move outside of his wonky circle of colleagues, to engage the public, to communicate ideas in a manner that was artful as well as illuminating.
While his peers wrote difficult books and debated obscure issues at their meetings, Mr X took part in the communication revolution that was bringing academic ideas into greater contact with the wider world. He wrote shorter pieces for broader audiences, telling one colleague “Publish small works often and you will dominate all of literature.” So when Mr X was offered a position far away from his bustling city home, he took it, feeling that his community was no longer defined by geography but by ideas, communicated through the new social technologies.
The new social technologies weren’t blogs or Web 2.0 applications, but the pamphlet and the salon. Mr X is not Stephen Jay Gould or PZ Myers but Pierre-Louis Moreau de Maupertuis, an 18th-century French explorer and polymath who led a geodetic expedition to Lapland in 1736.
Maupertuis is usually remembered as the scholar who described the actual shape of the earth by measuring a degree of arc at high latitude. In so doing, he helped settle a dispute with French cartographer Jacques Cassini over whether the earth was prolate (that is, longer along its N-S axis), or oblate (longer along its diameter at the equator). Cassini believed that the earth was prolate like a lemon. Maupertuis, following in the footsteps of Newton, helped prove that it was oblate like a jelly donut.
Yet as Mary Terrall points out in her book The Man Who Flattened The Earth: Maupertuis and the Sciences of the Enlightenment, Maupertuis’s most interesting work takes place back home as he tries to make a name for himself in this new theater of conversation, a world that connects elite academies and educated polite society.
As I read about the radical effects of social technology on academic writing and reputation today, I wonder: how much of this is really new? Perhaps the boundaries between elite institutions and the general public have always been squishier than we’ve made them out to be. Blogs and Twitter feeds feel so new, so world changing, because they have in fact changed the world we live in, the way we communicate with friends, peers, and random passers-by. Yet it’s bound to feel like this. The flood feels strongest when you’re standing in the middle of the stream. The story of Maupertuis makes me think that it is a seasonal event, a spring flood that returns with some regularity, the latest iteration of social technology (and sociable science writing) that probably dates to the printing press. Vive le café.
The Hero’s Journey
The hero’s journey is a story common to all human cultures. While this story varies from place to place and era to era, there are deep structural similarities among its forms. So common were these basic structural elements that comparative mythologist Joseph Campbell called the hero story a “monomyth.”
The story has a structure that we recognize in Bible stories and big-screen films alike: a hero departs the comforts of the known world on a quest. She endures physical and emotional trials, gains wisdom, and returns home to impart lessons learned on the journey.
Campbell’s eagerness (following Jung’s) to reduce all stories to basic structures makes me a little uneasy. (Can we really blueprint all human art forms?) But in the case of the hero story, I think he was on to something. The journey story does appear to have almost universal expression, and a common lesson as well: that we gain knowledge through our encounter with the unknown and its perils.
That doesn’t mean, however, that the monomyth is monolithic. I see two important variants: some heroes gain knowledge in their quest that adds to things they already know (e.g. Moses and Jesus). Others discard their possessions and beliefs in order to find the truth (e.g. Plato’s prisoner of the cave, Siddhārtha Gautama, and St. Francis of Assisi).
Since the late 1700s, the latter variant of the hero monomyth — that one must escape civilization in order to find oneself — has gained a strong foothold in the West. Although the idea that civilization corrupts is an old one, it has blossomed with the writings of Jean-Jacques Rousseau, Ralph Waldo Emerson, and others.
In its most extreme form, the escape-civilization-to-find-enlightenment myth suggests that the traveler or explorer gains wisdom only when civilization is burned away by extreme experience. As climber Robert Dunn put it in 1907: explorers were “men with the masks of civilization torn off.”
Or as climber David Breashears expressed it a century later:
The idea is that all the artifice that we carry with us in life, the persona that we project—all that’s stripped away at altitude. Thin air, hypoxia—people are tremendously sleep-deprived on Everest, they’re incredibly exhausted, and they’re hungry and dehydrated. They are in a very altered state. And then at a moment of great vulnerability a storm hits. At that moment you become the person you are. You are no longer capable of mustering all this artifice. The way I characterize it, you either offer help or you cry for help.
But if the journey does its wisdom-building work by tearing off the mask of civilization, by stripping away artifice, we are left with this question:
What’s underneath the mask?
Dunn and Breashears imply that the true self is revealed: the intense experiences of the journey shear the subject of culture and its trappings. This is a comforting idea at first glance because it presumes that
1) you can find yourself by setting out on an exceptionally difficult adventure.
2) your problems are the result of your culture rather than your essential nature.
This reminds me a lot of John Locke, who also believed that you could neatly separate the original self from one imprinted by civilization. In An Essay Concerning Human Understanding (1690), Locke argued that human beings begin their journey tabula rasa — as blank slates — waiting to be shaped by experience. The Lockean newborn was a human TiVo pulled from its Styrofoam packing, waiting to be filled by sounds and images that would give it its special identity.
I doubt that Dunn or Breashears believe the journey can return the explorer to the perfect self of the infant. I expect they see the perilous journey as a way of rebooting the TiVo rather than wiping it clean, clearing out old programming to make space for new material.
The Cult of New Experience
The important point here is this: those who think of the self as something that can be purged of culture, like a psychological master cleanse, tend to weight the power of new experience over the power of reason or ideas, to prefer the bungee jump over the writer’s retreat. In their view, traditional ideas impede our understanding rather than advance it. To access the new, we need to leave our old selves — like a pair of flip-flops — at the door.
Perhaps this obsession with the power of experience explains why so many travelers and explorers seem concerned with having “authentic” experiences rather than ones they see as packaged, hybrid, or touristy. In the traveler’s search for the truly different, she must avoid experiences that carry the whiff of the world left behind. She avoids the McDonald’s in Karachi. She turns down the tour bus to the pyramids. She resists the urge to text-message home from the summit of Everest.
But is our faith in the uber-experience wise? Can we peel away our culture like the rind off an orange? Closer inspection shows how much culture enters the flesh, shapes us, makes us. Humans have an innate form, of course, but it’s a form that cannot function without an environment. So speaking about one without the other is like asking “Which do plants need more: water or light?”
Before we make pure experience the holy grail of self-knowledge, then, we need to pay closer attention to the way humans think about these experiences.
First, “authentic” is a rather squishy concept. Cultures routinely borrow and import what they need from other cultures. For example, in Eat Pray Love, Elizabeth Gilbert discovers herself in part through her ecstatic encounter with Italy. Italy’s authenticity is expressed through its foods: it is a place of fried zucchini blossoms and sizzling Margherita pizza. Yet the core ingredients of these foods — zucchini and tomatoes — are foreign to Italy. They are both New World species, brought back to Europe and incorporated into Italian cooking in the 19th century. What was an authentically Italian experience for Gilbert was, two centuries earlier, suspiciously foreign and non-Italian.
Second, experience itself is never pure, never unmediated (as I wrote about in my recent post about Moscow). Even those experiences which seem so expressly sensory — the joy of food, sex, art — feel different according to our beliefs about them. In his book How Pleasure Works: The New Science of Why We Like What We Like, Paul Bloom explodes the myth of pure experience:
What matters most is not the world as it appears according to our senses. Rather, the enjoyment we get from something derives from what we think that thing is. This is true of intellectual pleasures, such as the appreciation of paintings and stories, and also for pleasures that seem simpler, such as the satisfaction of hunger and lust. For a painting, it matters who the artist was; for a story whether it is truth or fiction; for a steak we care about what sort of animal it came from; for sex, we care about who we think our sexual partner really is. [xii]
Can we apply Bloom’s analysis of pleasure to the explorer’s experience of pain? Does the ascent up Everest gain meaning because of the pure experience of frostbite and hypoxia? Or does it matter more that the climber is enduring such pain on the slopes of the world’s tallest mountain? The mask of civilization is not something that the climber rips away. It’s the reason the climber is there in the first place.
For Europeans in the 1450s, the Western Ocean (or the Atlantic as we now call it) was a frightening place. Unlike the cozy, well-mapped Mediterranean which was surrounded by three continents, the Western Ocean was unbounded, poorly understood, and filled with dangers.
The dangers were not the threat of sea monsters or falling off the edge of the world. Medieval sailors and geographers understood that the earth was spherical. (The idea that they thought it was flat is a fantasy conjured up by Washington Irving in his 1828 biography of Christopher Columbus.)
Rather, the real threat was the ocean itself. Expeditions that followed the West African coast had revealed strong winds and currents that made travel south (with the current) easy, but return extremely difficult, especially with vessels that could not tack close to the wind. By the 1430s, Europeans had even identified a spot on the West African coast, Cape Bojador, as the point of no return.
And yet Europeans, led by the Portuguese, continued to push further south despite this risk. They developed trade factories off the west coast of Africa which exchanged European goods — horses, wool, iron — for gold, ivory, and slaves. And ultimately they followed the African coast around the Cape of Good Hope and into the Indian Ocean, reaching the Indies — the holy grail of luxury items — in 1498.
All of this makes European exploration seem logical and methodical, driven by the promise of riches. Yet Europeans were interested in more than slaves and spices. Africa attracted Europe’s attention because it was considered the most likely location of Prester John, legendary Christian king and potential ally in the fight against the Muslims who occupied the Holy Land.
Historians have long placed Prester John within the category of myth, and in so far as myths describe “traditional stories, usually concerning heroes or events, with or without a determinable basis of fact” I suppose Prester John qualifies.
But “myth” has subtler, darker meanings. The world is filled with traditional stories that have a tenuous relationship to observable facts: the Gospels, the Koran, and the Torah abound with them. Yet we describe these stories as “beliefs” out of faith or respect. We usually reserve the word “myth,” however, for those stories — unicorns, leprechauns, a living Elvis — that we dismiss as untrue.
The point here is not to say that Prester John was real, but to say that in characterizing him as a mythic figure, historians have tended to discount his serious influence on European exploration and discovery.
This is a central argument of historian Michael Brooks in his excellent thesis, Prester John: A Reexamination and Compendium of the Mythical Figure Who Helped Spark European Expansion. Brooks shows that, while it might be clear in hindsight that Prester John was more fable than reality, it was not clear to Europeans in the 15th and 16th centuries, all of whom could point to multiple accounts of the Christian king from different, trustworthy sources. The Travels of Sir John Mandeville, one of the most popular books in late medieval Europe, even offers a first-hand account of Prester John’s palace:
He dwelleth commonly in the city of Susa. And there is his principal palace, that is so rich and so noble, that no man will trow it by estimation, but he had seen it. And above the chief tower of the palace be two round pommels of gold, and in everych of them be two carbuncles great and large, that shine full bright upon the night. And the principal gates of his palace be of precious stone that men clepe sardonyx, and the border and the bars be of ivory. [Mandeville quoted in Brooks, 87]
On the basis of these multiple, mutually supportive documents, Dom Henrique (Henry the Navigator) charged his explorers to bring back intelligence about the Indies and of the land of Prester John. This was not merely an addendum to their orders for geographical discovery. Argues Brooks:
Without the lure of making political connections with the supposed co-religionist Prester John in the struggle against the Islamic world, the European history of overseas expansions would likely have taken a different course.
This serious, sustained interest in Prester John helps explain the longevity of the legend well into the seventeenth century. I could not help seeing many similarities between Brooks’ account of Prester John and other stories of exploration. The one I have written the most about, the theory of the open polar sea, has also been discounted by historians as “myth” even though it was taken very seriously by scientists, explorers, and geographers in the nineteenth century, shaping the missions of numerous explorers.
Brooks’ thesis is available in pdf here.
He also posts a number of articles and reviews on history and exploration on his blog, historymike.
Every year, history conferences feature panels about biography. These are not talks which offer a biography in the manner of A&E’s Biography Channel (which profiles Kirstie Alley tonight) but ones that consider biography as a genre. They come with titles like “Making a Case for Biography: New Methods in the History of X.”
Why do professional historians feel the need to defend biographies? They have never stopped writing them. Academic presses remain eager to publish them. Yet biography still carries a reputation of being popular at the expense of being deep, of being an intellectual lightweight in a world of hipper, higher-powered genres: micro-histories, cultural histories, comparative histories, and transnational histories. In the high-cultural universe of academic writing, biography is Gilligan’s Island.
There are many reasons for this, but one key reason is structure. Biography focuses on the role of individuals in shaping events. As such, it sails against the wind of modern scholarship which, for forty years or so, has located historical change in institutions, corporations, governments, and national cultures. And individuals? Go to Barnes and Noble.
Moreover, biography is tricky as a genre because it sometimes lures historians into thinking that they are really psychoanalysts, that they can interpret the thoughts and feelings of their subjects. If Freud couldn’t ferret out the real causes of his patients’ behavior, why do we think biographers will prove any better at it with people who are dead?
That said, I like biographies. If the genre has limitations, it also has spirit. Whether or not people offer a useful way to look at historical change, they are interesting to read about. And the way biographers choose to tell the stories of individuals is interesting too.
For example, Ed Gray’s biography of John Ledyard (which I reviewed here) gives a rich account of Ledyard’s travels. Yet Gray avoids the temptation to put him on the couch and Ledyard remains a mysterious figure, a shadow in the foreground of a brightly painted world.
By contrast, Tim Jeal is far freer with his psychological analysis of Henry Morton Stanley. This should probably make me uneasy. But Jeal builds his psychological hypothesizing on a solid foundation of evidence. He has done his homework on Stanley, a man who left an Africa-sized archive of primary source material.
Better yet, Jeal uses his analysis of Stanley to say interesting things. For example, he observes that Stanley inflated the number of Africans he killed on the island of Bumbireh in Lake Victoria, a strange boast given that it contributed to Stanley’s reputation as a cold-hearted killer.
Yet Jeal argues that Stanley’s actions make sense only if one understands his shame at being humiliated by the leader of Bumbireh weeks earlier, something that Stanley — abandoned by his parents and raised in a workhouse — was keenly sensitive to. Moreover, Jeal argues that Stanley misjudged his audience’s reaction to the Bumbireh story, thinking that Europeans and Americans would like stories of warfare in Africa, much as they liked “big kill” stories about the Indian wars of the American West.
Despite their very different styles, I recommend both books.
JAMES DELBOURGO and NICHOLAS DEW (eds.), Science and Empire in the Atlantic World. New York: Routledge, 2008. Pp. xiv + 365. ISBN 978-0-415-96127-1. £18.99 (paperback).
Maybe I shouldn’t read too much into titles, but Science and Empire in the Atlantic World caught my attention. At first glance, it seemed a strange choice of words since “science and empire” has become a common, almost clichéd, phrase in the history of science and in science and technology studies (STS). The phrase took hold in the 1970s, when Marxist scholarship revealed the exploitative functions of imperial science and drew inspiration from other critiques such as Edward Said’s Orientalism (1978).
By the 1980s, books and articles containing “science” and “empire” blossomed in the scholarly press. Yet the phrase has since witnessed a slow decline, as scholars have grown uneasy with portrayals of colonial science as a hegemonic expression of European power. Replacement terms tend to emphasize the reciprocal relationships in the production of science. Most notable among these is “Atlantic World,” a term that now races like a forest fire through history of science titles, probably due to Bernard Bailyn’s influential Seminar in the History of the Atlantic World, which he instituted at Harvard in 1995. Why, then, marry “Science and Empire” to “Atlantic World” in one title?
The answer comes from the function of “empire” within this edited collection. All twelve essays here challenge empire, or more precisely, an imperial top-down model of science in describing the Atlantic World. The “Empire” of the title, in other words, does not represent a historic process to be revealed, but a historiographic concept to be critiqued, a goal that Dew and Delbourgo accomplish with devastating efficiency. By focusing on famous “heroic narratives of discovery” (5), Delbourgo and Dew argue, studies of imperial science have missed the day-to-day activities which shaped the study of nature in the Atlantic World. In other words, historians of science (including me) have grown too comfortable thinking of Atlantic science through the image of a sextant-wielding Baron von Humboldt.
As Science and Empire demonstrates, knowledge of the Atlantic World depended upon the labors of far lesser-known figures: sailors, surgeon-barbers, Creole collectors, and diasporic Africans, among others. Most essays go beyond describing the actions of these invisible networks, connecting them with better-known ones.
Alison Sandman, for example, explains how pilots competed with learned cosmographers to control cartographic knowledge in early modern Spain. Júnia Ferreira Furtado’s essay, focused on Brazil, shows how Dutch surgeon-barbers “broke the monopoly of erudite knowledge enjoyed by doctors” (Furtado, 132), giving tropical medicine a pronounced, empirical tilt. Even well-known figures are not what they appear. Joyce Chaplin revisits Benjamin Franklin, poster-child of elite science, to show how he relied upon the reports of sailors and sea captains in describing the Atlantic “Gulph Stream.”
Taken together, the essays portray Atlantic science differently than the influential center-periphery model of science described by Bruno Latour in Science in Action (1987). Within Latour’s model, knowledge of the world starts and ends in the metropole where men of science provide the questions and instruments needed to understand nature at the edges of empire. While Latour’s system works well in describing many aspects of state-sponsored expeditions, it fails to explain other types of knowledge networks.
For one thing, Atlantic networks were unstable. As Neil Safier explains in tracing the work of French naturalist Joseph de Jussieu, acquiring and transmitting information was a precarious business. “The successful circulation of information from one point in the Atlantic to another was often dependent on circumstances that could just as easily go wrong as right” (Safier, 219). The networks developed by Spanish botanical expeditions, as described by Daniela Bleichmar, were of sturdier stuff. Yet Bleichmar points out other weaknesses in the Latourian model, specifically how “periphery” is a term ill-suited to describe botanical science in the Americas: “Circulation [of information] did not resemble the flight of a boomerang, always returning to the center, but rather a more reciprocal paddle game. Every letter or shipment from one side provoked a reply from the other.” (Bleichmar, 239). While European “centers” were important – no one disputes the asymmetries in power between mother country and colonies – they were dependent upon colonial peoples’ cooperation. This was not merely a question of finding Indians and Africans to collect things. As Susan Scott Parrish and Ralph Bauer point out in essays on diasporic Africans and Native American magic, respectively, Europeans adapted indigenous knowledge systems to make sense of an occult, magical nature. If London, Paris, and Madrid operated as hubs of scientific calculation, they were centers shaped by the world wheeling around them.
With such a strong theme linking all the essays, Science and Empire does not really need section headings. I found the four offered (“Networks of Circulation,” “Writing an American Book of Nature,” “Itineraries of Collection,” and “Contested Powers”) too vague to be useful. There are more fruitful subordinate themes that track across essays, such as the tension between theory and empiricism (Sandman, Bauer, Furtado, Barrera-Osorio) and environmental history and technology (Golinski, Dew, Delbourgo, and Regourd). Still, this is a minor quibble. Dew and Delbourgo have managed to square the circle of edited collections: bringing together a diverse set of essays to target an important historiographical issue.
This review will be published in the upcoming issue of the British Journal for the History of Science. My thanks to BJHS for permission to reprint it here.