Time to Eat the Dogs

On Science, History, and Exploration

Field Notes: Moscow

In the Moscow Metro

As someone who studies travel, and loves to travel, it still makes me feel self-conscious at times to BE a traveler. This is because the experience of travel rarely feels purely experiential to me: sensations of places, people, and things are always mediated by ideas of travel, by my awareness of the histories of exploration, contact, and encounter. In short, I rarely seem to find the “raw feed” of the travel experience. When it arrives, there is always a news ticker scrolling somewhere at the bottom of the screen.

I imagine that psychologists go through something like this when talking to their therapists. Does the shrink-on-the-couch recall her experiences as pure sensory feed, a chronicle of acts and feelings that follow in easy succession? I doubt it. She must be analyzing these acts as she recalls them, “shrinking” her own actions as she relates them to her therapist.

Chris McCandless

This is not unique. One does not have to be a historian of exploration or a therapist to weave third-person narratives out of one’s first-person encounters. Chris McCandless, the idealistic explorer of Jon Krakauer’s book Into the Wild, writes a journal about his travels as if they were happening to someone else, a someone else to whom McCandless even gives a name: “Alex Supertramp.”

In this conscious creation of an alternate self, McCandless is tipping his hand: travel is not really the subject of his drama; it is merely the stage for self-discovery, a platform upon which he (Chris/Alex) acts the part of heroic protagonist and omniscient narrator. Not all travelers are so dramatically inclined, but his case leads us to a broader question: can we ever experience the journey as a raw feed of new experience? Is it ever really possible to turn the news ticker off?

And if travel experiences are unavoidably hybrid and impure, jumbled together with our ideas about places and our ideas about ourselves, why do they still manage to affect us so deeply? These were the questions that took hold of me in Moscow as I prepared for my journey to Naryan-Mar, a small Arctic city near the Barents Sea in a region known as the Nenets Autonomous Okrug (NAO). I was heading there for the Arctic Perspectives XXI Conference, an international, interdisciplinary gathering of scholars talking about circumpolar issues in the far north.

By any literary measure, my 48 hours in Moscow were uneventful. I did not get arrested. I wasn’t poisoned or ransomed. I did not suffer from a temporary bout of amnesia. Nor did Moscow resemble the exotic communist fantasy of the Western imagination. The Moscow of 2010 is not Stalin’s Moscow or Gorbachev’s or even Yeltsin’s. It is a modern European capital ablaze with neon, populated by chic clubs and fast food restaurants.

House of the Embankment (Dom Naberezhnoy) during rush hour

And yet I found it amazingly, frighteningly, marvelously other. It was most dramatically other in language. I do not speak more than 50 words of Russian and can only slowly decipher Cyrillic script. But the experience of foreignness went deeper, attaching itself to little things rather than big ones.  Circular outlets. Underground crosswalks. Conventions of dress. Codes of conduct in the metro. On the train. In the airport. In the cafe. These are the quotidian marvels that every traveler experiences, the state of being a stranger in a strange land.

Outlet in my hostel room

Yet however small, they are pervasive and all-encompassing. At one level, the traveler sees Moscow as the Muscovite sees it: a landscape of imperial edifices and perpetual motion. For me, though, it was a landscape of little differences, some of them comprehensible, others not. More affecting still was the sense that these surface differences – in language, brand names, architectural style – were the thinnest film over a great well of difference lurking beyond the visible. Who do Russians watch at 11:30pm while Americans are watching the Late Show? When do they file their taxes? What are the Russian equivalents of the televangelist, the road trip, the visit to Burning Man? And if there are no equivalents, what are the Russian customs which defy cultural translation? What are the rhymes that parents sing to their children at night?

How can any traveler fathom the depths of such difference? How many years in Moscow would be required to apprehend it from the inside? And in this feeling of profound otherness, I suspect there is an answer – a partial answer at least – to the question of why we travel even as we are constantly trying to analyze and box what we experience. We adopt these analytical modes — seeing ourselves as characters, comparing experiences with other events, recalling background literature — because the experience of difference would otherwise overwhelm us. They give some order – a shabby, imperfect order – to the flood of unfamiliar sights and sounds.

Even those of us who enjoy the vertigo of travel – this feeling of incomprehensible otherness – still need this crutch, I think, a way of organizing what we see so we are capable of functioning. And perhaps it is more than just a crutch. Because in rendering the journey in familiar terms, we integrate it into the world back home, making it a part of us despite its unfathomable nature.

Next post: Field Notes: Naryan-Mar



The Secluded Parts of the Soul

Solitary Tree, Caspar David Friedrich, 1822

Tim Noakes has learned many things from his journeys, most of them personal rather than geographical. About humility, honesty, perseverance. Not all of the lessons have been easy. They “taught me a heightened degree of self-criticism and self-expectation.”

Surrounded by fellow travelers, Noakes noticed things about them too. He saw patterns of behavior similar to his own: “a love of privacy, an overwhelming desire for solitude, and an inability to relax or talk in company.”

Tim Noakes

They shared “mental behaviors that include daydreaming, absentmindedness, procrastination and … the eternal quest to understand the riddles of life.”

“The point is reached when fatigue drives us back into ourselves,” Noakes writes, “into those secluded parts of our souls that we discover only under times of such duress and from which we emerge with a clearer perspective of the people we truly are.”

These are interesting points, if ones commonly invoked in the literature of exploration and adventure. What makes Noakes’ observations particularly striking, though, is that he is not describing explorers or adventurers.

He is talking about runners.

“Runners have been shown to score higher on psychological scales that measure needs for thrill and adventure, and one study has suggested that running may be an important method for thrill and adventure seekers to acquire sufficient sensory input to keep their needs satisfied.”

As I read this in Noakes’ thousand-page book, The Lore of Running, I took notice. It not only spoke to my interests as a scholar, but profiled me precisely as a runner.

I’ve loved running since high school,  when I abandoned swimming for cross country. I didn’t think much of the switch at the time. Swimming was boring. In the blue nothingness of the pool, I felt like I was exercising in space, struggling to feel something  in a sensory deprivation tank.

Running cross country, by contrast, set me on fire. The world moved by so quickly, in a blur of forest, root, and field. And each race was a story, a chase up and down muddy paths. It hurt. It was exhausting. It filled me up.

After high school, I ran less. Or more accurately, I ran too much and, once hurt, stopped altogether. In college and during my years in Egypt, I turned to other pursuits, found other ways of reaching “the secluded parts of the soul.” And since then I’ve oscillated between lifestyles active and passive, running, resting, running again.

But over the last twelve months  I’ve come back to the road with new resolve. I’ve been offering the same justifications for it that I always have: It’s good for me. It calms me down.

Yet reading Noakes makes me realize that there are deeper motives too.

Thrill-seeking and risk taking were once a big — perhaps the biggest — part of my life, but it’s a part that I’ve had to alter to fit with my other roles as husband, father, and professor. This is a common experience, I know. We are led to believe that perilous experience, both physical and emotional, is a young person’s game, that age induces caution as if it were encoded into our DNA, as biologically determined as a receding hairline.

But is it?

Does the hunger for transcendent experience really fade? Or is it that our lives become more complicated, forcing this desire to become something else? Do we lose it, or simply transmute it into something less volatile, something that will fit within the structures of the middle-adult life without blowing it apart?

The explorer risks death. He spends months or years away from home. But the scholar of exploration sleeps at home at night, works inside, gives himself over to the experience of hypoxia and frostbite without trips to the hospital.

So too the act of running creates a world of thrills with comparatively few risks. Most of the time my runs are routine, but every few weeks there are moments which are special, even transcendent.

Last November, I went running in Phoenix at dawn. Mexican fan palms towered above me on each side of 9th Avenue. Even in the middle of the city, the mountains and sky seemed everywhere, the world so big and quiet that I felt, as Noakes would put it, “driven back into myself,” an experience of beauty and aloneness so profound I had to stop and give myself over to it. Maybe this was really the point, the true object of my perpetual motion. I thought about this for a minute. Then I kept running.


100,000

Time to Eat the Dogs logged its 100,000th view yesterday. It’s only a matter of time now before TBS offers me a late-night show on exploration, but I want you to know that I’m not going to let it go to my head.

Let’s face it: the biggest blogs rack up 100,000 views in a few hours. And the 100,000 number doesn’t say anything about the viewers who found their way here through some horrible accident (such as searching for “naked men at sea” or “cooking squirrel”).

Still, I’m happy. I started this blog because I was frustrated with the way people talk about exploration, or more precisely, the way they weren’t talking about it.

While scholars from many fields study exploration, most of their discussions tend to be tightly confined. Debates unfold in the pages of poorly subscribed, peer-reviewed journals. And while academic communities talk past each other, they are also talking past the public: the thousands of people who are fascinated by exploration, who read articles and buy books on the Apollo missions, Christopher Columbus, Ernest Shackleton, and Lewis and Clark.

Blogging here has put me in touch with people from all of these different communities. And their feedback has influenced my work. It has changed the scope of my projects and the way that I think about them.

So thanks for stopping by, offering suggestions, leaving comments.

Thanks too to the other bloggers who have given guidance and support: Ting & Tankar, Deep Sea News, Ether Wave Propaganda, World’s Fair, Dispersal of Darwin, The Beagle Project, The Renaissance Mathematicus, and others.

You’ve made me a better scholar in the process.

First, Fastest, Youngest

Edmund Hillary and Tenzing Norgay on Mt Everest, 1953

Firsts have always been important in exploration. This seems rather straightforward, even tautological, since being first is woven into the very definition of exploration. After all, traveling to unknown places is doing something that hasn’t been done before (or at least hasn’t been reported before). And this is how the history of exploration often appears to us in textbooks and timelines: as lists of expeditionary firsts from Erik the Red to Neil Armstrong.

In truth, though, firsts are fuzzy.

Some fuzziness comes from ignorance, our inability to compensate for the incompleteness of the historical record. This is a perennial problem in history in general and history of exploration in particular. (I call it a problem but it’s actually what makes me happy and keeps me employed).

Was Christopher Columbus the first European to reach America in 1492? Probably not, since evidence suggests that Norse colonies existed in North America five hundred years before he arrived. Was Robert Peary the first to reach the North Pole in 1909? It’s hard to say, since Frederick Cook claimed to be first in 1908 and it’s possible that neither man made it.

Simon Bolivar

Some fuzziness comes from the different meanings we give to “discovery.” The South American leader Simon Bolivar called Alexander von Humboldt “the true discoverer of America.” Bolivar did not mean this literally since Humboldt traveled through South America in 1800, 17 years after Bolivar himself was born there, 300 years after Columbus first arrived in the Bahamas, and about 16,000 years after Paleo-Indians arrived in America, approved of what they saw, and decided to stay.

But for Bolivar, Humboldt was the first person to see South America holistically: as a complex set of species, ecosystems, and human societies, held together by faltering colonial empires. Being first in exploration, Bolivar realized, meant more than planting a flag in the ground.

Neil Armstrong on the Moon, 1969

At first glance, we seem to have banished fuzziness from modern exploration. For example, there is little doubt that Neil Armstrong was the first human being to set foot on the moon, since the event was captured on film and audio recordings, transmitted by telemetry, and confirmed by material artifacts such as moon rocks. (Moon hoax believers, I’m sorry. I know this offends.) Were the Russians suddenly interested in challenging Armstrong’s claim to being first, they would have a tough time of it, since Armstrong could give the day and year of his arrival on the moon (20 July 1969) and even the exact hour, minute, and second when the lunar module touched down (20:17:40 Coordinated Universal Time).

But this growing precision of firsts has generated its own ambiguities. We have become more diligent about recording firsts precisely because geographical milestones have become more difficult to achieve. As a result, there has been a shift from firsts of place to firsts of method. As the forlorn, never-visited regions of the globe diminish in number, firsts are increasingly measured by the manner of reaching perilous places rather than by the places themselves.

Reinhold Messner

For example, Tenzing Norgay and Edmund Hillary were the first to ascend Mt. Everest in 1953, but Reinhold Messner and Peter Habeler were the first to climb the mountain without oxygen in 1978. In 1980, Messner achieved another first, by ascending Everest without oxygen or support.

Now as “firsts of difficulty” fall,  they are being replaced by “firsts of identity.” James Whittaker was the first American to summit Everest in 1963. Junko Tabei was the first woman (1975). Since then, Everest has spawned a growing brood of “identity first” summits including nationality (Brazil, Turkey, Mexico, Pakistan), disability (one-armed, blind, double-amputee) and novelty (snowboarding, married ascent, longest stay on summit).

It would be easy to dismiss this quest for firsts as a shallow one, a vainglorious way to achieve posterity through splitting hairs rather than new achievements. But I don’t think this is entirely fair. While climbing Everest or kayaking the Northwest Passage may have little in common with geographical firsts in exploration 200 years ago, this is not to say that identity firsts are meaningless acts. They may not contribute to an understanding of the globe, but they have become benchmarks of personal accomplishment, physical achievements — much like running a marathon — that have personal and symbolic value.

Temba Tsheri Sherpa

Still, I am disturbed by the rising number of “youngest” firsts. Temba Tsheri was 15 when he summited Everest on 22 May 2001. Jessica Watson was 16 last year when she left Sydney Harbour to attempt a 230-day solo circumnavigation of the globe. (She is currently 60 miles off Cape Horn.) Whatever risks follow adventurers who seek to be the oldest, fastest, or sickest to accomplish X, they are, at least, adults making decisions.

Jessica Watson

But children are different. We try to restrict activities that carry a high risk of injury for minors. In the U.S., for example, it is common to delay teaching kids how to throw a curve ball in baseball until they are 14 for fear of injuring ligaments in the arm. Similar concerns extend to American football and other contact sports.

So why do we continue to celebrate and popularize the pursuit of dangerous firsts by minors? What is beneficial in seeing if 16-year-olds can endure the hypoxia of Everest or the isolation of 230 days at sea? Temba Tsheri, current holder of the record for youngest climber on Everest, lost five fingers to frostbite.

We must remember that in praising “the youngest” within this new culture of firsts, we only set the bar higher (or younger, as it were) for the record to be broken again. In California, Jordan Romero is already training for his ascent of Everest in hopes of breaking Tsheri’s age record. He is thirteen.

Prester John

For Europeans in the 1450s, the Western Ocean (or the Atlantic as we now call it) was a frightening place. Unlike the cozy, well-mapped Mediterranean which was surrounded by three continents, the Western Ocean was unbounded, poorly understood, and filled with dangers.

The dangers were not the threat of sea monsters or falling off the edge of the world. Medieval sailors and geographers understood that the earth was spherical. (The idea that they thought it was flat is a fantasy conjured up by Washington Irving in his 1828 biography of Christopher Columbus.)

The Canary Current (part of the North Atlantic gyre)

Rather, the real threat was the ocean itself. Expeditions that followed the West African coast had revealed strong winds and currents that made travel south (with the current) easy, but return extremely difficult, especially with vessels that could not tack close to the wind. By the 1430s, Europeans had even identified a spot on the West African coast, Cape Bojador, as the point of no return.

Cape Bojador, West African Coast

And yet Europeans, led by the Portuguese, continued to push further south despite this risk. They developed trade factories off the west coast of Africa which exchanged European goods — horses, wool, iron — for gold, ivory, and slaves. And ultimately they followed the African coast around the Cape of Good Hope and into the Indian Ocean, reaching the Indies — the holy grail of luxury items — in 1498.

Route of Vasco da Gama, 1498

All of this makes European exploration seem logical and methodical, driven by the promise of riches. Yet Europeans were interested in more than slaves and spices. Africa attracted Europe’s attention because it was considered the most likely location of Prester John, legendary Christian king and potential ally in the fight against the Muslims who occupied the Holy Land.

Historians have long placed Prester John within the category of myth, and in so far as myths describe “traditional stories, usually concerning heroes or events, with or without a determinable basis of fact” I suppose Prester John qualifies.

But “myth” has subtler, darker meanings.  The world is filled with traditional stories  that have a tenuous relationship to observable facts:  the Gospels, the Koran, and the Torah are filled with them. Yet we describe these stories as “beliefs” out of faith or respect. We usually reserve the word “myth,” however, for those stories — unicorns, leprechauns, a living Elvis  — that we dismiss as untrue.

The point here is not to say that Prester John was real, but to say that in characterizing him as a mythic figure, historians have tended to discount his serious influence on European exploration and discovery.

This is a central argument of historian Michael Brooks in his excellent thesis, Prester John: A Reexamination and Compendium of the Mythical Figure Who Helped Spark European Expansion. Brooks shows that, while it might be clear in hindsight that Prester John was more fable than reality, it was not clear to Europeans in the 15th and 16th centuries, all of whom could point to multiple accounts of the Christian king from different, trustworthy sources. The Travels of Sir John Mandeville, one of the most popular books in late medieval Europe, even offers a first-hand account of Prester John’s palace:

He dwelleth commonly in the city of Susa. And there is his principal palace, that is so rich and so noble, that no man will trow it by estimation, but he had seen it. And above the chief tower of the palace be two round pommels of gold, and in everych of them be two carbuncles great and large, that shine full bright upon the night. And the principal gates of his palace be of precious stone that men clepe sardonyx, and the border and the bars be of ivory. [Mandeville quoted in Brooks, 87]

On the basis of these multiple, mutually supportive documents, Dom Henrique (Henry the Navigator) charged his explorers to bring back intelligence about the Indies and about the land of Prester John. This was not merely an addendum to their orders for geographical discovery. As Brooks argues:

Without the lure of making political connections with the supposed co-religionist Prester John in the struggle against the Islamic world, the European history of overseas expansions would likely have taken a different course [3].

This serious, sustained interest in Prester John helps explain the longevity of the legend well into the seventeenth century. I could not help seeing many similarities between Brooks’ account of Prester John and other stories of exploration. The one I have written about most, the theory of the open polar sea, has also been discounted by historians as “myth,” even though it was taken very seriously by scientists, explorers, and geographers in the nineteenth century, shaping the missions of numerous explorers.

Brooks’ thesis is available as a PDF here.

He also posts a number of articles and reviews on history and exploration on his blog, historymike.

Thanks Research Blogging!

Research Blogging Awards 2010

Time to Eat the Dogs has been named a 2010 finalist for best blog in Social Science and Anthropology by Research Blogging. The awards panel received four hundred nominations and then selected 5 to 10 of the best blogs in each field.

For those who don’t know about Research Blogging, it is a site for “identifying the best, most thoughtful blog posts about peer-reviewed research.” They have over 1,000 registered blogs and an archive of 950 research-based blog posts.

Registered bloggers at ResearchBlogging.org will begin voting for winners in each category on 4 March, so if you are a serious blogger and fan of Time to Eat the Dogs, please register and vote.

NASA and the Ghosts of Explorers Past

Shadow Figure, image credit: Sworddancer Krusnik @ DeviantArt.com

By Michael Robinson and Dan Lester

NASA has always stood at the fulcrum of the past and future. It is the inheritor of America’s expeditionary legacy, and it is the leading architect of its expeditionary path forward. Yet the agency has found it hard to keep its balance at this fulcrum. Too often, it has linked future projects to a simplistic notion of past events. It has reveled in, rather than learned from, earlier expeditionary milestones. As NASA considers its future without the Constellation program, it is time to reassess the lessons it has drawn from history.

Statue of the 1804 Corps of Discovery party, Kansas City, MO.

For example, when U.S. President George W. Bush unveiled the Vision for Space Exploration (VSE) in 2004, the administration and NASA were quick to link it to the 200th anniversary of the Lewis and Clark expedition, stating in the vision: “Just as Meriwether Lewis and William Clark could not have predicted the settlement of the American West within a hundred years of the start of their famous 19th century expedition, the total benefits of a single exploratory undertaking or discovery cannot be predicted in advance.” In Lewis and Clark, NASA saw a precedent for the Vision for Space Exploration: a bold mission that would offer incalculable benefits to the nation.

Yet this was a misreading of the expedition. The Lewis and Clark expedition did not leave a lasting imprint on Western exploration. The expedition succeeded in its goals, to be sure, but it failed to communicate its work to the nation. The explorers’ botanical collections were destroyed en route to the East Coast, their journals remained long unpublished, and the expedition was ignored by the press and public for almost a century. In 1809, 200 years ago last October, a despondent Lewis took his own life. NASA might do well to reflect on this somber anniversary in addition to the more positive one used to announce the Vision for Space Exploration in 2004. Doing exploration, Lewis reminds us, often proves easier than communicating its value or realizing its riches.

Robert Peary in Battle Harbor. George Grantham Bain Collection, Library of Congress.

NASA should also remember the anniversary of Robert Peary’s expedition to reach the North Pole, completed a century ago last September. Peary’s expedition, like the ones envisioned by the Vision for Space Exploration, was a vast and complicated enterprise involving cutting-edge technology (the reinforced steamer Roosevelt) and hundreds of personnel. Peary saw it as “the cap & climax of three hundred years of effort, loss of life, and expenditure of millions, by some of the best men of the civilized nations of the world; & it has been accomplished with a clean cut dash and spirit . . . characteristically American.”

Yet Peary’s race to the polar axis had little to offer besides “dash and spirit.” Focused on the attainment of the North Pole, his expedition spent little time on science. When the American Geographical Society (AGS) published its definitive work on polar research in 1928, Peary’s work received only the briefest mention. Indeed, the Augustine committee’s statement that human exploration “should begin with a choice about its goals – rather than a choice of possible destinations” applies as well to the race to the North Pole as it does to recent plans to race to the Moon.

Galileo's illustrations of the Moon, taken from telescopic observations, Sidereus Nuncius, 1610.

But the most important anniversary for NASA to consider is the recent 400th anniversary of Galileo’s publication of “Sidereus Nuncius” (“Starry Messenger”), a treatise in which he lays out his arguments for a Sun-centered solar system. Was Galileo an explorer in the traditional sense? Hardly. He based his findings upon observations rather than expeditions, specifically his study of the Moon, the stars, and the moons of Jupiter. Yet his telescopic work was a form of exploration, one that contributed more to geographical discovery than Henry Hudson’s ill-fated voyage to find the Northwest Passage, made during the same year. Galileo did not plant any flags in the soil of unknown lands, but he did something more important: he helped to topple Aristotle’s Earth-centered model of the universe.

As NASA lays the Constellation program to rest, the distinction between “expedition” and “exploration” remains relevant today. While new plans for human space flight will lead to any number of expeditions, it doesn’t follow that these will constitute the most promising forms of exploration. Given our technological expertise for virtual presence – an expertise that is advancing rapidly – exploration does not need to be the prime justification for human space flight anymore.

The Augustine committee has shown the courage to challenge the traditional view of astronauts as explorers in its “Flexible Path” proposal, a plan to send humans at first into deep space, perhaps doing surveillance work on deep gravity wells, while rovers conduct work on the ground. Critics have derisively called it the “Look But Don’t Touch” option, one that will extend scientific exploration even if it does not include any “Neil Armstrong moments.”

Yet perhaps 2010 is the year when we challenge the meaning of “exploration.” For too long, NASA has been cavalier about this word. Agency budget documents and strategic plans continue to use it indiscriminately as a catch-all term for any project that involves human space flight. Yet this was not always the case. The National Aeronautics and Space Act of 1958, the formal constitution of the agency, doesn’t mention the word in any of the eight objectives that define NASA’s policy and purpose. Rather, NASA’s first directive is “the expansion of human knowledge of the Earth and of phenomena in the atmosphere and space.”

Geysers in the southern hemisphere of Enceladus, photographed by Cassini spacecraft, 27 November 2005

Perhaps the best way forward, then, starts with a more careful look back. The world has changed since Lewis and Clark, with technology that would have stunned the young explorers. In the year of “Avatar,” we need to think differently about the teams who direct rovers across the Martian landscape, pilot spacecraft past the geysers of Enceladus, and slew telescopes across the sky. These technologies are not static in their capabilities, nor are the humans who control them. Their capabilities advance dramatically every year, and the public increasingly accepts them as extensions of our intellect, reach, and power. As Robert Peary’s quest for the North Pole illustrates, toes in the dirt (or in his case, ice) don’t necessarily yield new discoveries.

Of course robots and telescopes can’t do everything. A decision that representatives of the human species must, for reasons of species survival, leave this Earth and move to other places would make an irrefutable case for human space flight. But that need has never been an established mandate. It isn’t part of our national space policy. As NASA begins its sixth decade, do we have the courage to look beyond our simplistic notions of exploration’s past to find lasting value in the voyages of the future?

Michael Robinson is an assistant history professor at the University of Hartford’s Hillyer College in Connecticut. Dan Lester is an astronomer at the University of Texas, Austin.

This essay appears here courtesy of Space News where it was published on 8 February 2010.
