The town of Naryan-Mar sits at the apex of the Pechora River delta in the Russian European Arctic. It is the capital of the Nenets Autonomous Okrug (NAO), a Florida-sized region of tundra and taiga at 69° N.
In late June, I attended the Arctic Perspectives XXI conference in Naryan-Mar along with participants from Russia, Finland, Canada, and the U.S. We represented many fields: history, geography, anthropology, medicine, and Arctic studies.
Flying into Naryan-Mar reminded me of flying into Barrow, Alaska, at 71° N: both are set in flat landscapes, surrounded by green, marshy tundra that extends to the horizon. Thousands of lakes and ponds mark both regions, and in Naryan-Mar, the Pechora River and its tributaries coil and swirl their way north, like some vast design from the Book of Kells.
Once on the ground, though, it was easy to see the differences between the two towns. The population of Naryan-Mar is three times larger than Barrow's (which is just over 5,000). Naryan-Mar has trees, albeit small ones, while Barrow (at least the Barrow I remember visiting in 2004) has few life forms apart from dogs, residents, and tundra.
Both cities have benefited from the oil economy of the Arctic, but Naryan-Mar shows the flush of oil money: a new civic center, flat-screen public displays, a new cultural center, and a significant port center.
Naryan-Mar also feels more European. While 61% of Barrow residents are Inupiat, only 11% of the NAO's population are Nenets, the reindeer-herding people who have inhabited this region since the 12th century; some 67% of residents are ethnic Russians. (I could not find any demographics for Naryan-Mar itself, but it seemed to follow this ratio.)
The conference sessions presented some terrific work — on polar medicine, energy competition and independence, Arctic nationalism, ecology, and history — yet the differences among the papers made it difficult to find threads strong enough to hold them all together thematically.
Ultimately, though, the conference succeeded less as a place to exposit work than to explore ideas. I offered a paper on the methodology of studying Arctic exploration (which I will present in a later post), a subject that may have been relevant to, at most, one or two other participants. And I did not hear any papers that had a direct bearing on my current work.
But the papers – and especially the participants – made a deep impression on me. The feeling of otherness that I described in my last post on Moscow seemed to be reversed in Naryan-Mar. I was surrounded by scholars who love the Arctic, who come to it from different countries and different perspectives, and who desired to communicate their work (and something of themselves) to their peers.
I will never forget it.
As someone who studies travel, and loves to travel, it still makes me feel self-conscious at times to BE a traveler. This is because the experience of travel rarely feels purely experiential to me: sensations of places, people, and things are always mediated by ideas of travel, by my awareness of the histories of exploration, contact, and encounter. In short, I rarely seem to find the “raw feed” of the travel experience. When it arrives, there is always a news ticker scrolling somewhere at the bottom of the screen.
I imagine that psychologists go through something like this when talking to their therapists. Does the shrink-on-the-couch recall her experiences as pure sensory feed, a chronicle of acts and feelings that follow in easy succession? I doubt it. She must be analyzing these acts as she recalls them, “shrinking” her own actions as she relates them to her therapist.
This is not unique. One does not have to be a historian of exploration or a therapist to weave third-person narratives out of one's first-person encounters. Chris McCandless, the idealistic explorer of Jon Krakauer's book Into the Wild, writes a journal about his travels as if they were happening to someone else, a someone to whom McCandless even gives a name: "Alex Supertramp."
In this conscious creation of an alternate self, McCandless is tipping his hand: travel is not really the subject of his drama; it is merely the stage for self-discovery, a platform upon which he (Chris/Alex) acts the part of heroic protagonist and omniscient narrator. Not all travelers are so dramatically inclined, but this leads us to a broader question: can we ever experience the journey as a raw feed of new experience? Is it ever really possible to turn the news ticker off?
And if travel experiences are unavoidably hybrid and impure, jumbled together with ideas about places and our ideas about ourselves, why do they still manage to affect us so deeply? These were the questions that took hold of me in Moscow as I prepared for my journey to Naryan-Mar, a small Arctic city near the Barents Sea in a region known as the Nenets Autonomous Okrug (NAO). I was heading there for the Arctic Perspectives XXI Conference, an international, interdisciplinary gathering of scholars talking about circumpolar issues in the far north.
By any literary measure, my 48 hours in Moscow were uneventful. I did not get arrested. I wasn’t poisoned or ransomed. I did not suffer from a temporary bout of amnesia. Nor did Moscow resemble the exotic communist fantasy of the Western imagination. The Moscow of 2010 is not Stalin’s Moscow or Gorbachev’s or even Yeltsin’s. It is a modern European capital ablaze with neon, populated by chic clubs and fast food restaurants.
And yet I found it amazingly, frighteningly, marvelously other. It was most dramatically other in language. I do not speak more than 50 words of Russian and can only slowly decipher Cyrillic script. But the experience of foreignness went deeper, attaching itself to little things rather than big ones. Circular outlets. Underground crosswalks. Conventions of dress. Codes of conduct in the metro. On the train. In the airport. In the cafe. These are the quotidian marvels that every traveler experiences, the state of being a stranger in a strange land.
Yet however small, they are pervasive and all-encompassing. At one level, the traveler sees Moscow as the Muscovite sees it: a landscape of imperial edifices and perpetual motion. Yet for me, it was a landscape of little differences, some of them comprehensible, others not. Yet more affecting was the sense that these surface differences – in language, brand names, architectural style – were the thinnest film over a great well of difference lurking beyond the visible. Who do Russians watch at 11:30pm while Americans are watching the Late Show? When do they file their taxes? What are the Russian equivalents of the televangelist, the road trip, the visit to Burning Man? And if there are no equivalents, what are the Russian customs which defy cultural translation? What are the rhymes that parents sing to their children at night?
How can any traveler fathom the depths of such difference? How many years in Moscow would be required to apprehend it from the inside? And in this feeling of profound otherness, I suspect there is an answer – a partial answer at least – to the question of why we travel even as we are constantly trying to analyze and box what we experience. We adopt these analytical modes — seeing oneself as a character, comparing experiences with other events, recalling background literature — because the experience of difference would otherwise overwhelm us. They give some order – a shabby, imperfect order – to the flood of unfamiliar sights and sounds.
Even those of us who enjoy the vertigo of travel, this feeling of incomprehensible otherness, still need this crutch, I think: a way of organizing what we see so that we are capable of functioning. And perhaps it is more than just a crutch. Because in rendering it in familiar terms, the travel experience becomes integrated into the world back home, a part of us despite its unfathomable nature.
Next post: Field Notes: Naryan-Mar
Tim Noakes has learned many things from his journeys, most of them personal rather than geographical. About humility, honesty, perseverance. Not all of the lessons have been easy. They “taught me a heightened degree of self-criticism and self-expectation.”
Surrounded by fellow travelers, Noakes noticed things about them too. He saw patterns of behavior similar to his own: "a love of privacy, an overwhelming desire for solitude, and an inability to relax or talk in company."
They shared “mental behaviors that include daydreaming, absentmindedness, procrastination and … the eternal quest to understand the riddles of life.”
"The point is reached when fatigue drives us back into ourselves," Noakes writes, "into those secluded parts of our souls that we discover only under times of such duress and from which we emerge with a clearer perspective of the people we truly are."
These are interesting points, if ones commonly invoked in the literature of exploration and adventure. What makes Noakes' points particularly interesting, though, is that he is not describing explorers or adventurers.
He is talking about runners.
“Runners have been shown to score higher on psychological scales that measure needs for thrill and adventure, and one study has suggested that running may be an important method for thrill and adventure seekers to acquire sufficient sensory input to keep their needs satisfied.”
As I read this in Noakes’ thousand-page book, The Lore of Running, I took notice. It not only spoke to my interests as a scholar, but profiled me precisely as a runner.
I’ve loved running since high school, when I abandoned swimming for cross country. I didn’t think much of the switch at the time. Swimming was boring. In the blue nothingness of the pool, I felt like I was exercising in space, struggling to feel something in a sensory deprivation tank.
Running cross country, by contrast, set me on fire. The world moved by so quickly, in a blur of forest, root, and field. And each race was a story, a chase up and down muddy paths. It hurt. It was exhausting. It filled me up.
After high school, I ran less. Or more accurately, I ran too much and, once hurt, stopped altogether. In college and during my years in Egypt, I turned to other pursuits, found other ways of reaching "the secluded parts of the soul." And since then I've oscillated between lifestyles active and passive, running, resting, running again.
But over the last twelve months I’ve come back to the road with new resolve. I’ve been offering the same justifications for it that I always have: It’s good for me. It calms me down.
Yet reading Noakes makes me realize that there are deeper motives too.
Thrill-seeking and risk taking were once a big — perhaps the biggest — part of my life, but it’s a part that I’ve had to alter to fit with my other roles as husband, father, and professor. This is a common experience, I know. We are led to believe that perilous experience, both physical and emotional, is a young person’s game, that age induces caution as if it were encoded into our DNA, as biologically determined as a receding hairline.
But is it?
Does the hunger for transcendent experience really fade? Or is it that our lives become more complicated, forcing this desire to become something else? Do we lose it, or simply transmute it into something less volatile, something that will fit within the structures of the middle-adult life without blowing it apart?
The explorer risks death. He spends months or years away from home. But the scholar of exploration sleeps at home at night, works inside, gives himself over to the experience of hypoxia and frostbite without trips to the hospital.
So too the act of running creates a world of thrills with comparatively few risks. Most of the time my runs are routine, but every few weeks there are moments which are special, even transcendent.
Last November, I went running in Phoenix at dawn. Mexican fan palms towered above me on each side of 9th Avenue. Even in the middle of the city, the mountains and sky seemed everywhere, the world so big and quiet that I felt, as Noakes would put it, "driven back into myself," an experience of beauty and aloneness so profound I had to stop and give myself over to it. Maybe this was really the point, the true object of my perpetual motion. I thought about this for a minute. Then I kept running.
Time to Eat the Dogs logged its 100,000th view yesterday. It’s only a matter of time now before TBS offers me a late-night show on exploration, but I want you to know that I’m not going to let it go to my head.
Let’s face it: the biggest blogs rack up 100,000 views in a few hours. And the 100,000 number doesn’t say anything about the viewers who found their way here through some horrible accident (such as searching for “naked men at sea” or “cooking squirrel”).
Still, I’m happy. I started this blog because I was frustrated with the way people talk about exploration, or more precisely, the way they weren’t talking about it.
While scholars from many fields study exploration, most of their discussions tend to be tightly confined. Debates unfold in the pages of poorly subscribed, peer-reviewed journals. And while academic communities talk past each other, they are also talking past the public: the thousands of people who are fascinated by exploration, who read articles and buy books on the Apollo missions, Christopher Columbus, Ernest Shackleton, and Lewis and Clark.
Blogging here has put me in touch with people from all of these different communities. And their feedback has influenced my work. It has changed the scope of my projects and the way that I think about them.
So thanks for stopping by, offering suggestions, leaving comments.
Thanks too to the other bloggers who have given guidance and support: Ting & Tankar, Deep Sea News, Ether Wave Propaganda, World’s Fair, Dispersal of Darwin, The Beagle Project, The Renaissance Mathematicus, and others.
You’ve made me a better scholar in the process.
Firsts have always been important in exploration. This seems rather straightforward, even tautological, to say since being first is woven into the definition of exploration. After all, traveling to unknown places is doing something that hasn’t been done before (or at least hasn’t been reported before). And this is how the history of exploration often appears to us in textbooks and timelines: as lists of expeditionary firsts from Erik the Red to Neil Armstrong.
In truth, though, firsts are fuzzy.
Some fuzziness comes from ignorance, our inability to compensate for the incompleteness of the historical record. This is a perennial problem in history in general and history of exploration in particular. (I call it a problem but it’s actually what makes me happy and keeps me employed).
Was Christopher Columbus the first European to reach America in 1492? Probably not, since evidence suggests that Norse colonies existed in North America five hundred years before he arrived. Was Robert Peary the first to reach the North Pole in 1909? It's hard to say, since Frederick Cook claimed to be first in 1908 and it's possible that neither man made it.
Some fuzziness comes from the different meanings we give to “discovery.” The South American leader Simon Bolivar called Alexander von Humboldt “the true discoverer of America.” Bolivar did not mean this literally since Humboldt traveled through South America in 1800, 17 years after Bolivar himself was born there, 300 years after Columbus first arrived in the Bahamas, and about 16,000 years after Paleo-Indians arrived in America, approved of what they saw, and decided to stay.
But for Bolivar, Humboldt was the first person to see South America holistically: as a complex set of species, ecosystems, and human societies, held together by faltering colonial empires. Being first in exploration, Bolivar realized, meant more than planting a flag in the ground.
At first glance, we seem to have banished fuzziness from modern exploration. For example, there is little doubt that Neil Armstrong was the first human being to set foot on the moon, since the event was captured on film and audio recordings, transmitted by telemetry, and confirmed by material artifacts such as moon rocks. (Moon hoax believers, I'm sorry. I know this offends.) Were the Russians suddenly interested in challenging Armstrong's claim to being first, they would have a tough time making their case, since Armstrong could give the day and year of his arrival on the moon (20 July 1969) and even the exact hour, minute, and second when his boot touched the lunar surface (20:17:40 Coordinated Universal Time).
But this growing precision of firsts has generated its own ambiguities. We have become more diligent about recording firsts precisely because geographical milestones have become more difficult to achieve. As a result, there has been a shift from firsts of place to firsts of method. As the forlorn, never-visited regions of the globe diminish in number, firsts are increasingly measured by the manner of reaching perilous places rather than the places themselves.
For example, Tenzing Norgay and Edmund Hillary were the first to ascend Mt. Everest in 1953, but Reinhold Messner and Peter Habeler were the first to climb the mountain without oxygen in 1978. In 1980, Messner achieved another first, by ascending Everest without oxygen or support.
Now as “firsts of difficulty” fall, they are being replaced by “firsts of identity.” James Whittaker was the first American to summit Everest in 1963. Junko Tabei was the first woman (1975). Since then, Everest has spawned a growing brood of “identity first” summits including nationality (Brazil, Turkey, Mexico, Pakistan), disability (one-armed, blind, double-amputee) and novelty (snowboarding, married ascent, longest stay on summit).
It would be easy to dismiss this quest for firsts as a shallow one, a vainglorious way to achieve posterity through splitting hairs rather than new achievements. But I don’t think this is entirely fair. While climbing Everest or kayaking the Northwest Passage may have little in common with geographical firsts in exploration 200 years ago, this is not to say that identity firsts are meaningless acts. They may not contribute to an understanding of the globe, but they have become benchmarks of personal accomplishment, physical achievements — much like running a marathon — that have personal and symbolic value.
Still, I am disturbed by the rising number of "youngest" firsts. Temba Tsheri was 15 when he summited Everest on 22 May 2001. Jessica Watson was 16 last year when she left Sydney Harbor to attempt a 230-day solo circumnavigation of the globe. (She is currently 60 miles off Cape Horn.) Whatever risks follow adventurers who seek to be the oldest, fastest, or sickest to accomplish X, they are, at least, adults making decisions.
But children are different. We try to restrict activities that have a high risk of injury for minors. In the U.S. for example, it is common to delay teaching kids how to throw a curve ball in baseball until they are 14 for fear of injuring ligaments in the arm. Similar concerns extend to American football and other contact sports.
So why do we continue to celebrate and popularize the pursuit of dangerous firsts by minors? What is beneficial in seeing if 16-year-olds can endure the hypoxia of Everest or the isolation of 230 days at sea? Temba Tsheri, current holder of the record for youngest climber on Everest, lost five fingers to frostbite.
We must remember that in praising "the youngest" within this new culture of firsts, we only set the bar higher (or younger, as it were) for the record to be broken again. In California, Jordan Romero is already training for his ascent of Everest in hopes of breaking Tsheri's age record. He is thirteen.
For Europeans in the 1450s, the Western Ocean (or the Atlantic, as we now call it) was a frightening place. Unlike the cozy, well-mapped Mediterranean, which was surrounded by three continents, the Western Ocean was unbounded, poorly understood, and filled with dangers.
The dangers were not the threat of sea monsters or falling off the edge of the world. Medieval sailors and geographers understood that the earth was spherical. (The idea that they thought it was flat is a fantasy conjured up by Washington Irving in his 1828 biography of Christopher Columbus.)
Rather, the real threat was the ocean itself. Expeditions that followed the West African coast had revealed strong winds and currents that made travel south (with the current) easy, but return extremely difficult, especially with vessels that could not tack close to the wind. By the 1430s, Europeans had even identified a spot on the West African coast, Cape Bojador, as the point of no return.
And yet Europeans, led by the Portuguese, continued to push further south despite this risk. They developed trade factories off the west coast of Africa which exchanged European goods, including horses, wool, and iron, for gold, ivory, and slaves. And ultimately they followed the African coast around the Cape of Good Hope and into the Indian Ocean, reaching the Indies, the holy grail of luxury items, in 1498.
All of this makes European exploration seem logical and methodical, driven by the promise of riches. Yet Europeans were interested in more than slaves and spices. Africa attracted Europe’s attention because it was considered the most likely location of Prester John, legendary Christian king and potential ally in the fight against the Muslims who occupied the Holy Land.
Historians have long placed Prester John within the category of myth, and in so far as myths describe “traditional stories, usually concerning heroes or events, with or without a determinable basis of fact” I suppose Prester John qualifies.
But “myth” has subtler, darker meanings. The world is filled with traditional stories that have a tenuous relationship to observable facts: the Gospels, the Koran, and the Torah are filled with them. Yet we describe these stories as “beliefs” out of faith or respect. We usually reserve the word “myth,” however, for those stories — unicorns, leprechauns, a living Elvis — that we dismiss as untrue.
The point here is not to say that Prester John was real, but to say that in characterizing him as a mythic figure, historians have tended to discount his serious influence on European exploration and discovery.
This is a central argument of historian Michael Brooks in his excellent thesis, Prester John: A Reexamination and Compendium of the Mythical Figure Who Helped Spark European Expansion. Brooks shows that, while it might be clear in hindsight that Prester John was more fable than reality, it was not clear to Europeans in the 15th and 16th centuries, all of whom could point to multiple accounts of the Christian king from different, trustworthy sources. The Travels of Sir John Mandeville, one of the most popular books in late medieval Europe, even offers a first-hand account of Prester John’s palace:
He dwelleth commonly in the city of Susa. And there is his principal palace, that is so rich and so noble, that no man will trow it by estimation, but he had seen it. And above the chief tower of the palace be two round pommels of gold, and in everych of them be two carbuncles great and large, that shine full bright upon the night. And the principal gates of his palace be of precious stone that men clepe sardonyx, and the border and the bars be of ivory. [Mandeville quoted in Brooks, 87]
On the basis of these multiple, mutually supportive documents, Dom Henrique (Henry the Navigator) charged his explorers to bring back intelligence about the Indies and of the land of Prester John. This was not merely an addendum to their orders for geographical discovery. Argues Brooks:
Without the lure of making political connections with the supposed co-religionist Prester John in the struggle against the Islamic world, the European history of overseas expansions would likely have taken a different course.
This serious, sustained interest in Prester John helps explain the longevity of the legend well into the seventeenth century. I could not help seeing many similarities between Brooks' account of Prester John and other stories of exploration. The one I have written the most about, the theory of the open polar sea, has also been discounted by historians as "myth" even though it was taken very seriously by scientists, explorers, and geographers in the nineteenth century, shaping the missions of numerous explorers.
Brooks’ thesis is available in pdf here.
He also posts a number of articles and reviews on history and exploration on his blog, historymike.
Time to Eat the Dogs has been named a 2010 finalist for best blog in Social Science and Anthropology by Research Blogging. The awards panel received four hundred nominations and then selected 5 to 10 of the best blogs in each field.
For those who don't know about Research Blogging, it is a site for "identifying the best, most thoughtful blog posts about peer-reviewed research." They have over 1,000 registered blogs and an archive of 950 research-based blog posts.
Registered bloggers at ResearchBlogging.org will begin voting for winners in each category on 4 March, so if you are a serious blogger and a fan of Time to Eat the Dogs, please register and vote.