Nutritional Anthropology 3/3: Agriculture and Longevity

In this third and final post of the series, I examine how humans moved from hunting and gathering to agriculture, and then from agriculture to pastoralism.  Although some communities have largely abandoned agriculture, trade relations with neighbouring cities and communities continue to provide pastoralists with certain food commodities, notably tea.  Along the way, I also note the Daoist advice to avoid the five grains if one is to achieve the lifespan of an immortal.  This post, then, functions as a counterpoint to the idea that all post-agricultural societies are doomed to chronic disease.

Evenki Reindeer Herder.  Photo by Chris Linder of Seattle, WA.

(III)

Once populations undertook agriculture, there was no going back, and we suffer the consequences to the present day. Discuss.

Although Cordain’s assertion that, “We have wandered down a path toward absolute dependence upon cereal grains, a path for which there is no return,” may seem overwrought, the development of agriculture has set the structures of society and culture in the civilised world for the past several thousand years.  Agricultural practices have altered human biology and health, human behaviour, and the material culture and social networks which stem from those behaviours.  To test the claim that, once agricultural practices were adopted, a return to previous modes of subsistence was not possible, I will look at three groups which seem to have made the shift away from a reliance on agriculture.  First, reliance on cereal grains was recognised early in China and advised against by Daoist hermits.  Such reliance was partially broken by nomadic pastoralists throughout Eurasia; my second example therefore looks at pastoralists on the high plateau of Tibet, and the third, perhaps most successful, example is provided by the reindeer herders of Siberia.  Despite the success of individual Daoists and small groups of pastoralists, elimination of cereal reliance was rarely if ever complete at the societal level.  The set point established by cereal agriculture therefore seems open to manipulation, so that cereal grains cease to be a primary source of subsistence, but their cultivation is unlikely to be eliminated entirely.  Such manipulation, however, would likely not only come in response to environmental pressures, but would also involve new and additional shifts in human behaviour, material culture, and social networks.

Agriculture can be distinguished from unadulterated gathering and from small-scale horticulture by agriculturalists’ reliance on the raising of domesticated plants as the primary source of food (Harris 2004 [1996]:4).  The term ‘agricultural practices’, rather than ‘agriculture’, is meant to emphasise the shift in individual behavioural and societal patterns resulting from a reliance on crop production.  Bar-Yosef (1998) examines the emergence of the Levantine Natufian culture, one of the first groups to employ agricultural practices, and argues that the adoption of crop production was ‘the optimal strategy for semi-sedentary and sedentary hunter gatherers’ given the particular environmental changes of the late Pleistocene.  “The emergence of farming communities is seen as a response to the effects of the Younger Dryas on the Late Natufian culture in the Levantine Corridor. The beginning of intentional widespread cultivation was the only solution for a population for whom cereals had become a staple food. Domestication of a suite of founder crops came as the unintentional, unconscious result of this process.”  (Bar-Yosef 1998:174) When the Younger Dryas ended around 10,000 BP, cultivation of crops continued in larger villages, within which, Bar-Yosef writes, “the need for social cohesion motivated the maintenance of public ceremonies in addition to domestic rituals, the building of shrines, and the keeping of space for public activities.”  (Bar-Yosef 1998:174)  As these groups succeeded, they moved into neighbouring areas, and Bar-Yosef cites the ‘multiplier effect’ which subsequently drove technological innovations and set the recorded history of the Fertile Crescent into motion.

The extent to which agriculturalists colonised surrounding areas, or to which neighbouring hunter-gatherer groups adopted agricultural practices, is unknown.  The former case would simply represent the natural selection of a structure which ensures the greatest viability for a particular population under a given set of environmental conditions: agricultural groups are more successful at large-scale population distribution and survival.  The question of whether agricultural groups can make a subsequent shift away from agriculture therefore also carries the implication of maintaining those levels of population distribution, density, and survival.

Societal changes were not the only result of agricultural practices.  Comparative analysis of skeletal remains has demonstrated several biological changes resulting from the adoption of agricultural practices.  A decrease in facial size, with concomitant crowding of the teeth, and a corresponding trend towards smaller tooth size are two morphological changes still present (Larsen 1995:196 – 197).  Other morphological changes are due to practices, rather than diet per se, and include lower bone resilience, but also lower rates of osteoarthritis among agricultural populations as compared to hunter-gatherers (Larsen 1995:200).  Genetic change as a result of agriculture has occurred in the Middle East, with respect to the ability to digest wheat; however, this genetic change decreases in frequency the further a population is located from the Levant (Cordain 1999:50).  Overall, despite an increase in fertility and population density, health seems to have decreased with the adoption of agricultural practices.

To test whether there was any going back after the adoption of agriculture, groups or individuals who decried patterns of urbanisation and poor health, or who opted out of sedentism, perhaps by moving into new climatic zones, should be examined.  Among those groups were medieval Daoists, who prescribed various practices to increase longevity and tranquillity.  Agriculture not only caused shifts in societal organisation; it also altered the ideological concepts attached to settled life.  Daoism countered these ideologies with its own set of ‘naturalistic’ emphases.  Daoists are often portrayed as taking an anti-societal view, and legends abound about Immortals and hermits who sought to leave society behind in the cultivation of virtue.  The opening chapter of the Zhuangzi relates,

“In the mountains of far-off Ku-yi there lives a daemonic [shen; spirit] man, whose skin and flesh are like ice and snow, who is gentle as a virgin.  He does not eat the five grains but sucks in the wind and drinks the dew; he rides the vapour of the clouds, yokes flying dragons to his chariot, and roams beyond the four seas…”  (Englehardt 2000:110)

As Englehardt notes, this is the classic description of an immortal, “a being with a purified body, who uses a special diet without grains and has the ability to fly, to roam afar, and to heal.”  (ibid.)

Avoidance of the five grains wasn’t only advocated in legends.  Among the medical manuscripts found in the Mawangdui tombs and dated before 168 BCE is the Quegu shiqi (The Rejection of Grains and Absorption of Qi).  “It deals mainly with techniques of eliminating grains and ordinary foodstuffs from the diet and replacing them with medicinal herbs and qi through special breathing exercises.”  (Englehardt 2000:86; see also 102)  The implication is that these techniques were actually practised in the Han dynasty, if only on a small scale, or only by the elite (Csikszentmihalyi 2000:65; Penny 2000:126).  Such dietary recommendations persisted into the Tang dynasty (Kohn and Kirkland 2000:353), and are even repeated in Daoist works today (Schipper 1993:167).  Although abstention from grain was thought to weaken the three worms which ate away at one’s life, “it also is related to the immortals’ rejection of a settled agricultural life and its interrelations and various social duties in favour of an eremitic, mountain-dwelling, more floating existence.”  (Penny 2000:126)

Despite the manuscript evidence from China, Daoist grain avoidance practices never seem to have led to a full scale societal departure from crop-raising.  Pastoralism, however, provides insight into how a group can move away from agricultural practices while maintaining a larger social network than the Daoist hermits and immortals evinced.

Pastoralism seems to have emerged subsequent to the development of plant cultivation.  As with agriculture, Harris (2004 [1996]:4) describes a spectrum of pastoral behaviours ranging from predation (i.e. hunting) through protection (taming and ‘free range’ management of herds) to domestication.  Domestication includes both animal husbandry by agriculturalists (who also grow crops to feed their animals) and transhumance or nomadic pastoralism.  Harris theorises that caprine pastoralism may have initially served as a buffer in case of crop failure (Harris 2004 [1996]:556).  Although the adoption of nomadic pastoralism allowed the exploitation of environments previously closed to agriculturalists, nomadic pastoralists, at least in Eurasia, never seem to have entirely left behind their reliance on cereals produced by settled communities.

Goldstein and Beall’s research on the Tibetan plateau emphasises that pastoralists there have remained in their indigenous homeland, and have never been pushed to more marginal areas by agriculturalists (Goldstein and Beall 2002).  Yet even these groups trade for barley and tea.  Although their herds of yak, goats, sheep, and horses provide these pastoralists with both meat and milk, “roughly 50% of the nomads’ dietary calories derive from grains they secure from farming areas located 15 – 20 days’ walk to the southeast.”  (Goldstein and Beall 2002:133)  Thus, they remain tied to agricultural communities for their survival.

The group which has come closest to leaving agriculture behind altogether is the Evenki and other reindeer herders of Siberia.  Whether to classify these groups as pastoralists or as hunter-gatherers who manage animals is open to debate, since domesticated reindeer can easily return to the wild.  For Ingold, the difference between hunters and pastoralists lies in how the notion of property is applied to animals:  for pastoralists, living animals constitute their wealth; for hunters, dead ones (Ingold 1986:5 – 6).  Reindeer are both sources of labour and economic resources for the family, providing milk for domestic consumption and antlers traded as medicine to Koreans and Chinese.  The Soviet Union even briefly considered setting up a reindeer dairy industry (Fondahl 1989).

Ingold (1986), summarising his earlier work, advances the theory that reindeer herding arose when pastoralists already familiar with the domestication of animals moved into southern Siberia.  This theory is supported by Uerpmann (2004 [1996]), who hypothesises that sheep and goats were domesticated first, and that other animals were domesticated by those already familiar with the concept of animal domestication.  The movement, then, is from agriculture to pastoralism, and then to an even more extreme pastoralism.

If the Evenki are accepted as pastoralists, their diet should be examined for evidence of agricultural reliance.  In their examination of contemporary Evenki growth patterns, Leonard et al (1994) indicate that while Evenki stature and weight are low compared to US populations, this does not seem due to limited food availability.  However, their herds provide about 30% of total dietary energy, and “the diet is supplemented with nonlocal foods (e.g. flour, noodles, sugar, tea) that are brought in by helicopter from regional centers.”  (Leonard et al 1994:346)  Previously, the Evenki traded venison with coastal peoples to obtain additional foods, mostly marine mammal meat and fats (Kozlov et al 2007: chapter 5).  Traditionally, the Nenets, another reindeer-herding people of the Russian north, “lived mainly by using what they could produce from their own herds:  venison, fat and edible intestines.  In addition, they added to their diet foods from hunting, river or lake fishing, gathering and sometimes, marine mammal hunting.”  (Kozlov et al 2007: chapter 5)  Because of their mobility, nomadic reindeer herders did not store food; thus the primary benefit conferred by grains – their ability to be stored up for times of need – would have functioned as a drawback rather than an advantage.

In the sixteenth century, contacts with Slavic Russians led to the importation of flour, sugar, tea, and alcohol, but these were voluntary adoptions.  During the Soviet period, however, an agriculturally based ‘Soviet diet’ was imposed fairly uniformly throughout Siberia.  Traditional ways of preparing food were condemned, while the Soviet reliance on grains, poultry, and cattle products was praised and highly valued (Kozlov et al 2007: chapter 5).  Following the collapse of the Soviet system, and the consequent difficulty of importing foods from regional centres, previous culinary – and property – traditions have resurfaced (Leonard et al 2002:232).  Thus, reindeer herders seem to have been able to shift back to a hunting and exclusively pastoralist lifestyle.

The experience of the Evenki illustrates the second shift which occurred in food production during the late nineteenth and twentieth centuries.  This shift involved the commodification, mechanisation, and mass production of foods (and the consequent demise of artisan, local, and found or foraged food practices); the emergence of inner cities, suburbs, and migrant slums; and the restriction of autonomous responsibility and local knowledge of survival from the land.  The Soviet diet introduced to reindeer herders in the twentieth century is a good example of this new, ‘industrialised’ diet, while the land-use policies which Beijing is imposing on the TAR are an example of restricting autonomous responsibility for land use and provisioning (Goldstein and Beall 2002).  The global trade in food and agriculturalist concerns over food security (derived in part from the concept of precious resources to be exploited, traded, or taken) support the continued existence of industrialised food systems.  However, as the Evenki have demonstrated, so long as a people still maintains a knowledge of how to survive locally, a reverse shift away from agro-industrial dependence is possible if such networks begin to fail, as they did for the Evenki after the fall of the Soviet Union.

Summary

Agricultural practices seem to have been adopted in response to climate stress and a pre-existing semi-sedentary lifestyle.  The adoption of agricultural practices caused lasting shifts not only in population health, but also in the ways those societies organised themselves.  These changes may have allowed certain groups of agriculturalists to colonise and successfully supplant previous groups of hunter-gatherers.  Agricultural practices thus represent a successful survival strategy, although biological intolerance of grains persisted in many populations which subsequently adopted the cultivation of cereals.  Yet because the origins of agriculture appear tied to climatic change, subsequent severe climate change may force yet another shift in how humans provide for their nutritional and caloric needs.

In medieval China, certain Daoist-oriented groups developed a social and medical ideology which discouraged the consumption of grain, and advocated a near government-less, spontaneous existence.  These groups supported a return to pre-agricultural organisational and societal patterns.   Legends developed around several individuals who left society and lived in the wild, having returned to a hunter-gatherer lifestyle, neither keeping animals nor cultivating plants.  Some of these individuals are still revered as Immortals in Daoist folk religion today.

Larger groups were also able to lessen their reliance on agricultural practices.  Subsequent to the ‘invention’ of agriculture, various animals were domesticated, perhaps as a buffer food source in the event of crop failure.  The herding and taming of animals allowed some groups to detach themselves from a sedentary lifestyle and once again enjoy the benefits of increased mobility.  This mobility allowed the exploitation of habitats unsuitable for agriculture, such as the high plateau of Tibet and the forests of Siberia.  However, pastoralism only decreased, and did not eliminate, reliance on agricultural food products.  Before the Soviet period, the reindeer herders of Siberia were perhaps the most successful group in Eurasia at eliminating a reliance on cereal crops, supplementing their diets instead with marine products obtained through trade with coastal peoples.  Although the dietary regimes and societal organisation of both Tibet and Siberia were affected by government policies during their respective communist eras, the reindeer herders seem to have been able to return successfully to their more locally subsistent patterns of living after government influence declined.

Therefore, while agriculture has broadly supported civilisation as we know it through cities and settled life, it does seem possible for individuals and small groups to move away from an exclusive reliance on cereal products for survival.  Such a shift seems more likely to occur in combination with other factors, such as climate change, political decisions, and medical and market forces aware of consumers’ biological intolerance to certain grain products.

References

Bar-Yosef, O (1998).  “The Natufian culture in the Levant, threshold to the origins of agriculture.”  Evolutionary Anthropology 6:159.

Cordain, L (1999).  “Cereal grains:  humanity’s double-edged sword” in Simopoulos, A.P. (ed).  Evolutionary Aspects of Nutrition and Health:  Diet, Exercise, Genetics and Chronic Disease.  World Review of Nutrition and Dietetics vol 84.

Csikszentmihalyi, M (2000).  “Han Cosmology and Mantic Practices” in Kohn, L (ed).  Daoism Handbook.  Leiden:  Brill, 2000:53.

Englehardt, U (2000).  “Longevity Techniques and Chinese Medicine” in Kohn, L (ed).  Daoism Handbook.  Leiden:  Brill, 2000:74.

Fondahl, G (1989).  “Reindeer dairying in the Soviet Union.”  Polar Record 25 (155):285 – 294.

Goldstein, M and Beall, C (2002).  “Changing pattern of Tibetan nomadic pastoralism” in Leonard, W and Crawford, M (eds).  Human Biology of Pastoral Populations.  Cambridge University Press, 2002:131.

Harris, D (2004 [1996]).  The Origins and Spread of Agriculture and Pastoralism in Eurasia.  UCL Press.

Ingold, T (1986).  “Reindeer Economies and the Origins of Pastoralism.”  Anthropology Today, vol 2, no 4:5 – 10.

Kohn, L and Kirkland, R (2000).  “Daoism in the Tang (618 – 907)” in Kohn, L (ed).  Daoism Handbook.  Leiden:  Brill, 2000:339.

Kozlov, A; Vershubsky, G; Kozlova, M (2007).  “Indigenous Peoples of Northern Russia:  Anthropology and Health.”  International Association of Circumpolar Health, 2007.  Vol 66, no 5:462.

Larsen, C S (1995).  “Biological changes in human populations with agriculture.”  Annual Review of Anthropology 24:185 – 213.

Leonard, W; Galloway, V; Ivakine, E; Osipova, L; Kazakovtseva, M (2002).  “Ecology, health and lifestyle change among the Evenki herders of Siberia” in Leonard, W and Crawford, M (eds).  Human Biology of Pastoral Populations.  Cambridge University Press, 2002:206.

Leonard, W; Katzmarzyk, P; Comuzzie, A; Crawford, M; Sukernik, R (1994).  “Evenki Growth Patterns.”  American Journal of Human Biology 6:339 – 350.

Minnegal M, and Dwyer PD (2007).  “Foragers, farmers and fishers:  responses to environmental perturbation.”  Journal of Political Ecology 14:34-57.

Penny, B (2000).  “Immortality and Transcendence” in Kohn, L (ed).  Daoism Handbook.  Leiden:  Brill, 2000:109.

Rosegrant, MW; Leach, N; Gerpacio, RV (1999).  “Alternative futures for world cereal and meat consumption.”  Proceedings of the Nutrition Society 58:219 – 234.

Schipper, Kristofer (1993).  The Taoist Body.  University of California Press.

Uerpmann, HP (1996).  “Animal domestication – accident or intention?”  in Harris, D.  The Origins and Spread of Agriculture and Pastoralism in Eurasia.  UCL Press, 2004 [1996]:227.


Nutritional Anthropology 2/3: Wage Earning Diets and Chronic Disease

Continuing my posts on nutritional anthropology and health, this second essay concentrates on the relationship between ‘disease transitions’ and agricultural-industrial changes in society.  The key point I would like to draw attention to is that chronic disease rose dramatically when diet became linked to wage earning at the expense of self-sufficiency through gardens, hunting, or foraging.

A Pima Woman and her baskets

(II)

Do current global trends in non-infectious disease fit with established frameworks of historical ‘disease transitions’ (e.g., McMichael 2001)? Why or why not?

When the established frameworks of historical ‘disease transitions’ are understood from the ecological perspective of human-environmental equilibrium, current global trends in non-infectious diseases such as rising rates of type 2 diabetes, heart disease, and obesity not only reflect a current disequilibrium between humans and their environmental conditions (diet, technology, social structures, and materials) but also raise additional questions of health sustainability. Evidence from ethnographies of Native American and Aboriginal populations not only highlights this process, but also shows that current global trends are both reversible and often preventable. Finally, the existence of ‘Blue Zones’ (areas with a high density of centenarians) demonstrates that these positive health changes are achievable in industrialised countries today.

McMichael (2001) notes three, broad-based historical infectious disease transitions, book-ended in the past by an initial emigration out of Africa, and in the contemporary world by a theorised ‘fourth transition.’ As McMichael writes,
“These three great historical transitions were processes of equilibration between, first, humans and animal species and, later, between regional human populations. As new ecological niches were created by changes in human cultural practices, microbes exploited those niches. As new contacts were made between previously isolated civilizations, infectious diseases were pooled.” (McMichael 2001:111)

According to his reading, the first disequilibrium occurred when humans began to create agriculturally and pastorally based economies with the concomitant rise of towns about 10,000 years ago. At that point, a favourable environment for the transmission of zoonotic bacteria and viruses to human hosts was created, and the age of infectious diseases was born. Because these population centres were spread out, over time a co-evolution occurred between humans and microbes in relatively contained regions which allowed human civilisation (in the sense of cities) to continue without major disruption; an equilibrium point between technology, social structure, and the microbial environment had been reached. However, with the expansion of these regions through trade and empire building, during which time previously isolated regions came into contact with one another, the second and third disease transitions came about leading to the spread of global epidemic disease, first throughout Eurasia, and then to the Americas.

The question examined by McMichael (2001) is whether the recent resurgence of infectious diseases, combined with the discovery of many new diseases in the past quarter century, is adequate evidence of disequilibration to posit a fourth great transition, in which infectious diseases become resurgent. As evidence, McMichael offers the increased opportunities for the global transmission of disease as well as the niches created by new technologies in food processing and pharmaceuticals (McMichael 2001:112-13). Additionally, he mentions changes in human behaviour brought about by shifting patterns of urbanisation, both with regard to sexual behaviour and to new medical interventions. Finally, he notes the role which climate change and ecological disruption (including large-scale clearing of land and the loss of species biodiversity) due to human technologies continues to play in effecting this ‘fourth’ disease transition.

Olshansky (1986), in contrast, advances a different set of determinants for what he terms epidemiologic transitions. Drawing on Omran (1971), he writes, “The epidemiologic transition theory… was designed to provide a general picture of the major determinants of death that prevailed during several distinct periods in our epidemiologic history.” (Olshansky 1986:356) As such, the focus of these transition periods shifts away from ecological equilibrium factors and towards factors which preclude longevity and ‘natural death’. In his reconceptualisation of the transitional periods, Olshansky breaks down the components examined in each period into three distinct groups: cause of death (e.g. parasitic, infectious, degenerative); age and sex of the deceased; and the effects on survival which a transition from one set of causes to another might have. In this last component, we have a concern not only with the ecological factors which partially overlap with McMichael’s model, but also with the fundamental question of “who benefits the most from mortality transitions in terms of gains in life expectancy?” (Olshansky 1986:356) This fundamental question raises concerns about the sustainability of any given health transition, and provides a link between his epidemiological approach and critical anthropology’s deconstruction of power and exploitation in capitalist world systems of health and sickness.

The common thread underlying each disease transition in both models seems to be disequilibrium caused by social change (including technology and hygiene) and nutrition, particularly in terms of its impact on both mortality and longevity. In the first period (McMichael’s first through third stages), the disequilibrium resulted from urbanisation, agriculture, and trade; likewise, health gains can be attributed to the same causes. In the second period (overlapping only partially with McMichael’s fourth stage), the shift was caused by improved nutrition and changing social policies towards public sanitation. The final stages, moving towards a primary concern with chronic disease, once again evince a shift in social patterns. These shifts can be seen when we look at ethnographies of indigenous populations which have recently become Westernised, as well as case studies of areas in which longevity and natural death exist in greater concentration than in other areas of the world.

The limitation of a disease transition framework is that it relies on very broad and generalised historical trends, and is therefore at the mercy of historical records (textual, archaeological, and secondary sources), which, as surviving records become more plentiful, serve to telescope the periods of transition into shorter and shorter time frames. With regard to Native Americans and Australians, however, the disease transition processes set forth above have occurred in an equally telescoped period of time. Thus, ethnographies of Native Americans and Australians are particularly helpful in testing the model of disease transitions, since these populations encountered several disease transitions within the past two or three centuries, and we can see how those populations were able to find new equilibrium points – or continued disruption – during this period of time.

The Pima nation, located in present-day Arizona, has been studied by epidemiologists because of the high rates of type 2 diabetes (around 50% of adults between 30 and 64) emerging in that community since the 1950s. While previous studies had focused on a theoretical ‘thrifty genotype’ assumed among Native populations, Benyshek et al (2001) takes a closer look at the historical emergence of type 2 diabetes among the Pima, and ultimately advances a position which favours the intrauterine environment as a predisposing factor to development of type 2 diabetes later in life. For our purposes, however, the historical element is most relevant.

From at least the 1600s through the nineteenth century, the Pima were an agricultural, settled population. Through contacts with the Spanish in the 17th century, they incorporated wheat into their diet, although they continued to cultivate the indigenous maize, beans and squash which formed the earlier backbone of their diet. They supplemented these foods with cattle, wild game, fish, and foraged plant foods. By the mid-nineteenth century, crops were plentiful enough that the Pima conducted trade with Anglo-American settlers. Far from being subsistence farmers, they engaged in “a flourishing commercial agriculture based on sales.” (Benyshek et al 2001:39) While they may have initially been susceptible to the epidemic diseases brought by the Spaniards (Benyshek et al does not examine this), it is obvious that the population had reached some sort of epidemiological equilibrium point by the 1800s. That equilibrium began to break down after 1870, when environmental disruption in the form of droughts and irrigation projects begun by Anglo and Mexican-American settlers led to crop failures by the turn of the century. By the early 20th century, most of the Pima had turned to “woodcutting and wages” to support themselves. Thus, following an environmental disruption, the Pima’s economic equilibrium point also shifted from self-sufficiency to dependence on employment by others. By the interwar period, the Pima were largely employed in government-funded water projects. It was during this time, after reliance on locally produced indigenous foods shifted to purchased wheat flour, animal fats, coffee, and sugar, that diabetes first began to be diagnosed in increasing numbers among the Pima. (Benyshek et al 2001:40) Today, with government programmes, the Pima continue to find their livelihood through wage-earning work, rather than through returning to commercial small-scale agriculture, and their diabetes rates hover around 50% of the adult population.

In contrast to the economic and environmental disruption(s) experienced by the Pima, the Dogrib and Aleut peoples further north were able to maintain a pace of exchange which has allowed them not to suffer from the emergence of type 2 diabetes within their populations. After contact with European traders, the Dogrib “developed what Helm (1981) calls a ‘contact-traditional’ lifeway in which these people supplemented their traditional diet of game and fish by trading furs for food and other goods at trading posts (where they enjoyed credit). Due to their geographic isolation, this adaptation persisted into the 1950s,” after which time they began to be settled into permanent housing by the Canadian government. (Benyshek et al 2001:43) The Dogrib traditional diet had not been disrupted by trade in foodstuffs (e.g. coffee with sugar, butter), but had instead been augmented through their incorporation. Even a half-century after settling into permanent communities, the diabetes rate remains low. Unfortunately, Benyshek et al does not detail what type of work is currently practised by the Dogrib.

A similar pattern of maintained equilibrium was found among the Aleut and Eskimo of Alaska. Like the Dogrib, they also seem to have developed a contact-traditional way of life, adding traded foodstuffs to their traditional diets of fish and game. Interestingly, “this reliance upon traditional food-getting activities (especially among Alaskan Eskimo) as late as the 1960s and 1970s was often due to high unemployment and poverty rates (Chance 1984).” In other words, unlike the Pima, they did not make a complete shift from traditional means of survival to wage earning; or rather, they did not lose the environmental resources for maintaining that knowledge. (Benyshek et al notes that the environment had been rich enough to provide for not just Aleut, but also Russian and Euro-American traders.) By the 1950s, however, local diets had begun to give way to imported foods, including candy and soft drinks. Although diabetes rates have remained low, obesity (based presumably on a Euro-American body-type standard) had increased. (Benyshek et al 2001:45)

Benyshek et al argues that the Dogrib and Alaskans did not suffer “from extended periods of chronic protein malnutrition followed by a rapid transition to a Western diet and lifestyle.” (Benyshek et al 2001:45) My argument, however, is that the northern peoples did not experience either the environmental or the economic disruption that the Pima did. Because they were able to preserve an ecological equilibrium status, they have not suffered the development of those chronic degenerative diseases, such as type 2 diabetes, which are increasing throughout the world.

A similar pattern can be found among Aborigines in Australia. O’Dea (1991) notes that prior to westernisation, the Aborigines led a mostly hunter-gatherer, nomadic lifestyle. They were lean and fit, and “had no evidence of the chronic diseases that occur in epidemic proportions in Westernized Aboriginal communities today.” (O’Dea 1991:233) Although O’Dea points to eating behaviour and food preferences as the key component in promoting obesity among the Aborigines today, I would argue that the initial culprit was an economic disequilibrium that began when Aborigines were employed as stockmen in the Australian outback. Rather than receiving wages with which to purchase government-provided supplies (in contrast to the shift in Pima work patterns), stockmen were paid in rations of meat, flour, sugar, tea, and tobacco. However, because the focus of O’Dea’s article is on the nutritional-chemical components of a traditional Aboriginal diet, this aspect of economic-dietary disequilibrium is not explored in full. Why did Aborigines work as stockmen? Were they pressed into service, or did they volunteer willingly? Why did they switch to western rations instead of incorporating them into a traditional diet, as did the Dogrib or Aleut?

One of the most telling points O’Dea makes is that when Aborigines reverted to their traditional lifestyle, including both its diet and the steady exercise exerted by nomadic hunting and gathering, the markers of several degenerative diseases also declined. Excess weight was lost, diabetic abnormalities began to reverse, and major heart disease risk factors were reduced. (O’Dea 1991:79) That traditional lifestyle was the most recent ‘set point’ of ecological equilibrium. I would caution, however, against adding ‘for them’, and pose instead the question of whether chronic degenerative diseases among industrialised Eurasian populations are not also the result of an ecological disequilibrium present in those societies today. The existence of Blue Zones, areas of the world with high concentrations of longevity in which chronic degenerative diseases are not the norm, offers a case of equilibrium points within industrialised societies which were either maintained while the rest of society changed, or which were created in contrast to it.

The Blue Zones of the east-central mountains of Sardinia, the Seventh-Day Adventist community of Loma Linda in California, and groups on the main island of Okinawa all serve as examples of an industrial-age equilibrium. Mostly untouched by degenerative diseases, these groups of people share four characteristics: they practise mild exercise embedded in daily life (kneading dough, walking up and down stairs in multi-level houses), eat plant-based diets, have a sense of faith or purpose about their lives, and have close ties to family and friends. (Buettner 2010) These characteristics contrast with popular discourse about the lifestyles found in much of Euro-America, with their disruptive high-stress, cardio-emphasised exercise programmes, go-go-go work styles with consequent fast or quick food diets, and periodic waves of existentialist angst.

Both McMichael and Olshansky note that disease transitions occur as a result of disruptions to previous disease equilibrium states. These disruptions can come about through environmental change, but are equally likely to be tied to social practice (diet, trade, work) and technological development. The experiences of Native Americans and Aboriginal Australians, whose lifeways have been visibly disrupted by ‘Westernisation’ provide evidence that chronic diseases, like the epidemic diseases which preceded them, are also diseases of ecological disequilibrium.

In light of these ethnographies, an interesting health shift seems to occur when societies move from being self-sustaining to being wage earners. The Pima thus stand in stark contrast to the fur-trading peoples of Canada and Alaska, who managed to maintain a dual livelihood. This particular insight also raises questions about the first disease transition: rather than looking only at the rise of agriculture as a turning point, why not also examine the rise of employment by others? It likewise raises questions about Western society’s primary economic paradigm, namely its reliance on wage earning, and about the role workplace culture plays in promoting the health (dis-)equilibrium which we ultimately see manifesting as chronic degenerative diseases.

The weakness in my argument is that non-infectious diseases have always been present in the population, perhaps previously masked by the visibility of high-mortality infectious disease. However, I would counter by observing that disease transition periods are achieved only after an equilibrium point has been found. Thus, the equilibrium state for chronic degenerative diseases has only been found in a few areas of the world, which are being identified as ‘longevity hot spots’ or ‘blue zones’.

References:
Benyshek, D; Martin, J; Johnston, C (2001). “A reconsideration of the origins of the type 2 diabetes epidemic among native Americans and the implications for intervention policy.” Medical Anthropology 20(1): 25 – 64.

Buettner, D (2010). Video lecture at http://www.bluezones.com/about/dan-buettner/

Daar, A, et al (2007). “Grand challenges in chronic non-communicable diseases: The top 20 policy and research priorities for conditions such as diabetes, stroke and heart disease.” Nature 450: 494 – 496.

McMichael, A. J., Smith, K. R., & Corvalan, C. F. (2000). “The sustainability transition: a new challenge.” Bulletin of the World Health Organization 78: 1067.

McMichael, A. J. (2001). “Human Culture, Ecological Change, and Infectious Disease: Are We Experiencing History’s Fourth Great Transition?” Ecosystem Health 7(2), June 2001.

O’Dea, 1991. “Traditional Diet and Food Preferences of Australian Aboriginal Hunter-Gatherers [and Discussion]” Philosophical Transactions of the Royal Society B. 334:233-241

Nutritional Anthropology 1/3: Optimising Diet

Having finished my degree in medical anthropology, I thought I would post three relatively unedited essays which link nutrition, social structures, and health.  I am not posting the essays in chronological order; rather, I want to present the essays according to a more thematic progression.  The first essay therefore looks at how pre-industrial human societies create their diets not simply on the basis of what foods are local, but also around social structures and alliances.

A !Kung Community in the process of resettlement

(I)
What evidence is there that pre-industrial human societies might naturally optimise their diets to maximise both health and reproduction?

While a certain amount of evidence exists that pre-industrial societies optimise their diets to maximise both health and reproduction, that optimisation is neither spontaneous (i.e. ‘natural’) nor immune to influences from social, market, and power relations. This is particularly the case if ‘pre-industrial societies’ is understood not simply as an academic term for contemporary hunter-gatherer communities, but also includes the more colloquial meaning of the pre-seventeenth-century world in general. In this essay, I hope to demonstrate that the health benefits experienced by some of these societies, particularly with regard to longevity and causes of mortality, are tied less to any natural optimisation of diet than to egalitarian social structures and the freedom to catch or cultivate food resources of one’s own choice.

The Optimal Foraging Model is used to assess the cost-benefit ratio of choosing to hunt certain game over others. The theory is that communities will choose those animals which give the best energetic return for caloric investment. As Hawkes et al (1985) note, the model is used to simplify complex data in order to identify “the factors that most significantly shape subsistence behaviour,” but such models “are not suited to describe the interaction of all, or even a large number of the variables that might affect subsistence-related behaviour.” (Hawkes et al 1985:401) In using the model, however, Hawkes et al collected extensive data on the diets of the Ache of Paraguay (Hawkes et al 1982) and the !Kung of the Kalahari (Hawkes et al 1985). Both groups happen to have an egalitarian social structure. A subsequent study by Cordain et al (2002), drawing on the data gathered by Lee (1968) and Hawkes et al (1982, 1985), noted the low rates of cardiovascular disease among the !Kung and Ache despite the high percentage of animal-based food sources in their diets, an observation at odds with the epidemiological data for industrialised countries. Cordain et al hoped that by examining the diets of contemporary hunter-gatherer societies, some insight into a purported singular and essentialised ‘paleolithic diet’ could be gained for the purposes of designing therapeutic diets to counteract disease patterns seen in industrialised countries. Although in their concluding remarks Cordain et al noted that the diets of modern hunter-gatherer societies “may have operated synergistically with other lifestyle characteristics (more exercise, less stress [sic], and no smoking) to further deter the development of CVD” (Cordain et al 2002:S49), they do not go into detail about these potential synergistic effects. In particular, they do not describe how the Ache and !Kung experience less stress, leaving one to wonder if their work is underpinned by twenty-first-century remnants of a bon sauvage approach.
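
The prey-choice logic at the heart of the model can be made concrete with a small worked sketch. The Python snippet below is purely illustrative: the resource names, calorie values, handling times, and encounter rates are invented numbers, not data from Hawkes et al, Cordain et al, or any other study cited here. It simply encodes the standard decision rule: rank resources by post-encounter return rate (energy gained per hour of handling) and include each one in the diet only if that rate exceeds the overall return rate already achievable from the higher-ranked resources.

```python
# Toy illustration of the prey-choice (diet-breadth) logic behind the
# Optimal Foraging Model. All numbers are invented for illustration only.

# Each resource: (name, kcal per item, handling hours per item,
#                 encounters per hour of search)
resources = [
    ("large game", 15000, 4.0, 0.05),
    ("small game", 2000, 0.5, 0.4),
    ("tubers", 800, 0.3, 1.5),
    ("berries", 200, 0.25, 2.0),
]

# Rank resources by post-encounter return rate (kcal per handling hour).
ranked = sorted(resources, key=lambda r: r[1] / r[2], reverse=True)

diet = []
total_energy = 0.0    # expected kcal per foraging bout
total_handling = 0.0  # expected handling hours per foraging bout
search_hours = 1.0    # evaluate one hour of search time

for name, energy, handling, encounter_rate in ranked:
    # Overall return rate of the diet built from higher-ranked resources only.
    current_rate = total_energy / (search_hours + total_handling)
    # Include a resource only if handling it pays better than continuing to
    # search for (and handle) the higher-ranked resources alone.
    if energy / handling > current_rate:
        diet.append(name)
        total_energy += encounter_rate * search_hours * energy
        total_handling += encounter_rate * search_hours * handling

print("Predicted diet:", diet)
print("Overall return rate (kcal per foraging hour):",
      round(total_energy / (search_hours + total_handling), 1))
```

On this rule, whether a low-ranked resource enters the predicted diet depends on how often the higher-ranked resources are encountered, not on the low-ranked resource’s own abundance, which is one reason the model is attractive for simplifying complex subsistence data, and also why it says nothing about the social uses to which a catch is put once it is carried home.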

The Ache and !Kung are not the only groups to have been studied in relation to the ‘diseases of modern life’, however. Looking at health from the opposite end of the spectrum, the presence of modern diseases among groups which have transitioned from hunter-gatherer food acquisition practices to industrialised, wage-dependent diets has been shown in studies of Australian Aborigines (O’Dea 1991) and the Pima nation of the American Southwest (Benyshek 2001). (The Pima, however, were an agricultural people; more on this below.) Both groups have high incidences of type 2 diabetes, cardiovascular disease, and obesity. In contrast, the Aleut, Inuit, and Dogrib, all of whom incorporated ‘western’ foods into their traditional diets without shifting to a western lifestyle or type of work, have managed to stave off the chronic diseases manifested among the Pima and Aborigines (Benyshek 2001, drawing on Helm 1981 and Chance 1984).

The natural conclusion from these studies is that something about the pre-industrial diet exerts a protective health benefit. However, I would venture an additional hypothesis: something about how these societies structured social relations and food acquisition practices was key to their health. To support this claim, I will first look at how the Optimal Foraging Model (OFM) can be applied to agricultural groups, so that we can see how the same predictive factors of nutritional expenditure and return continue to operate even after a transition away from foraging. Second, I will assess a critique of the OFM through a study of the Etolo of Papua New Guinea, which demonstrates how social relations can provide the impetus for both hunting and the subsequent distribution of resources. Finally, I will briefly examine three cases, one from Neolithic China, the second from medieval Europe, and the third from present-day Kenya, which corroborate how diet is susceptible to social, and ultimately economic, relations.

Keegan (1986) examined whether the Optimal Foraging Model could be applied to the choices horticultural communities make about what crops to grow. Horticultural communities are presented by him as a transitional point between agricultural subsistence and hunter-gatherer modes of food acquisition. As such, they provide an example of a society in the process of selecting which food items, and how much of each, to cultivate and which to disregard. Although their particular environment may never pressure them to adopt agricultural practices exclusively, and in fact may constrain such a development (Keegan 1986:102), it is assumed that most agricultural communities went through a period similar to what modern horticulturalists continue to practice. Keegan particularly looked at the diet of the Machiguenga of Eastern Peru, because the data were available, and compared it with Hawkes et al’s analysis of Ache hunting choices. The results indicated that choices about which food resources to exploit varied by resource availability as compared to all available options, and that the seasonal change in diet reflected which food resources provided the most net gain at that particular time (Keegan 1986:103). The choice of which crops to grow, therefore, was predictable using the OFM.

Postcard of Etoro or Etolo from Papua New Guinea

The Optimal Foraging Model, due to its simplicity, can easily be critiqued. The model principally examines energetic expenditure and returns; it does not take into account what happens to the food after it is obtained or how that food is distributed among the community, nor does it look at the motives which inspire a hunter to search for game on a particular day. Additionally, Robson and Kaplan, though not directly critiquing the OFM, note that young hunters are not necessarily the most capable, and expend a greater amount of energy learning their craft with the expectation of higher returns in later years. The simple numbers, therefore, may not be evenly applicable to all hunters. One study which does examine the motives for going hunting and the choice of certain hunting companions over others was conducted by Dwyer among the Etolo of Papua New Guinea (Dwyer 1985). In this particular case, social relations within the community had been disrupted by a series of accidents involving the exchange of marriage-wealth. The resulting conflict pitted several houses against one another. As Keegan summarises,

Three events, or sets of events, appear as primary in motivating m2 to hunt. (1) Gifts of meat his family had received from m3 and f3 encouraged him to reciprocate. (2) The marriage of his half-sister f1 encouraged him to make a prestation through her to her (and his) new kin. (3) He had been unsuccessful in an earlier attempt at easing tension between the residents of Houses III and IV and a joint hunting venture would provide another means towards that end. m2, as a key sponsor of f1’s marriage arrangements, and m4, as injured party, were focal to the difficulties that had arisen between the two households.

The choice to go hunting, in this particular instance at least, was therefore susceptible to social relations. The effect social and economic relations have on diet is also seen in other societies, ancient, medieval and modern.

Ancient China provides interesting evidence that pre-industrial health optimisation is negatively impacted by changes in social stratification. Pechenkina et al (2002) examined skeletal remains from two cultures in Neolithic China, the older Yangshao and the more recent Longshan. The Yangshao diet consisted of some agricultural products, notably millet, supplemented by game and fish. Because little evidence of social stratification has been forthcoming, scholars assume the Yangshao lived in an egalitarian society. Anemia and carious lesions have rarely been found among their remains. The shift from Yangshao to Longshan culture coincided with climatic change, and it may be theorised that this climatic shift forced the evolution of agricultural techniques if the communities were to survive. As Pechenkina et al write, “Further agricultural intensification in response to proposed climate change increased the caloric base, which permitted rapid population growth at the Yangshao-Longshan transition. As people aggregated into larger centers, their access to wild food resources became even more limited and their diet narrowed.” (Pechenkina et al 2002:16) The “chiefdom-like society” of the Longshan, although it intensified pre-existing agricultural practices, paradoxically experienced poorer health, as evidenced by osteoarthritis in the jaw, more extensive indications of anemia, reduced stature (particularly evident in sexual dimorphism, perhaps indicating that females suffered more frequently from nutritional deprivation), and growth disruption lines in tooth enamel, exacerbated by poorer hygiene from crowded living conditions. Interestingly, the Longshan period is also characterised by advances in technology, and is considered to have laid the foundations for the Shang and Zhou dynasties (Pechenkina et al 2002). The two principal changes I note are shifting social structures (egalitarian to chiefdom/patriarchal) and the reduction of dietary breadth.

While ancient China offers circumstantial evidence of how social structures affect dietary practices, medieval Europe offers more concrete examples. Pearson, a historian, notes that Charlemagne required a bishop to provide the palace with two cartloads of soft-rind cheese per year after Charlemagne had sampled the cheese at the bishop’s table (Pearson 1997:10). This is a clear example of how food functions to preserve or augment social relations. Other examples of how social practices influence dietary choices drawn from medieval history are the rations permitted to monks, nuns, and workers in monastic communities (Pearson 1997:16). Finally, the revenues which feudal lords required of their tenant farmers supply evidence that what the tenant farmers chose to grow was not always their own choice (ibid:20).

Finally, the ethnography by Fujita et al (2004) examines how a shift from pastoralism to sedentarisation causes nutritional deficits along wealth lines. Historical work by Adas, however, indicates that pastoralism usually comes about after a shift from hunting and gathering to agriculture, because agricultural societies provide the springboard from which pastoralism is able to develop (Adas 2001:75). The Kenyan example may therefore not be the best one to use here. Nevertheless, it does indicate that the relation of nutrition to social distinction(s) seems to be exacerbated under certain conditions, and that sedentarisation is one of them.

In the context of social inequality and decision making, it is interesting to note studies which have examined the impact of social relations on brain size (Aiello and Wheeler 1995:208). In contrast, Robson and Kaplan (2003; Kaplan and Robson 2002) associate brain size with an increase in intelligence in their examination of the co-evolution of brain size and longevity. Although their association of brain size with intelligence may be problematic, when their study is combined with Aiello and Wheeler’s, a triad of relations emerges: social relations foster greater brain size, which depends on higher nutrient acquisition, which itself relies on social relations, which are in turn benefited by the greater (post-fertility) longevity that allows the acquisition of complex foraging and hunting skills. As Robson and Kaplan note, “the economies of hunter-gatherers rely on skill-intensive food production strategies that would not be viable without massive intergenerational resource flows and exceptional adult life expectancy.” Robson and Kaplan also argue that, because resource flows between generations continue even after the period of fertility in human women ends, models built around such flows may be more useful for understanding health optimisation than models which simply count numbers of offspring (Robson and Kaplan 2003:157, 164).

If health is measured by an absence of chronic disease markers while alive and by its absence as a cause of mortality at death, and fertility is measured not by number of offspring but by the longevity which ensures reproductive success through longer intergenerational resource flows to offspring, then hunter-gatherer communities may not be the most optimised communities to study, despite their low levels of death from neoplasms or chronic diseases (Gurven and Kaplan 2007). In their cross-cultural survey of longevity among hunter-gatherer societies, Gurven and Kaplan (2007) note that “the average modal age of adult death for hunter-gatherers is 72 with a range of 68 – 78 years. This range appears to be the closest functional equivalent of an ‘adaptive’ human lifespan.” They go on to point out that illnesses such as infectious and gastrointestinal diseases (less than half due to contact-related diseases) account for 70 percent of all deaths in their sample.

The chronic disease-free situation of hunter-gatherers could be fruitfully compared with ‘blue zones’, areas of the world in which a high density of centenarians live disease-free lives. These communities could be seen as continuing the evolutionary path which led to greater longevity in hominids in the first place. In particular, an examination of resource flows through the generations, and an evaluation of the egalitarianism (or lack of it) experienced in the work and social lives of the centenarians, may provide additional insight into the factors which can promote longevity and health in industrialised societies, in places like Okinawa, eastern Sardinia, Ikaria, and Loma Linda in the USA. Future research might ask what other social practices are shared between contemporary hunter-gatherers and blue zone communities.

While some groups seem to optimise health ‘naturally’, the reasons allowing this seem more related to egalitarian social practices than to pre-industrial status as such. As Woodburn (1982) writes about hunter-gatherer societies,

“These societies, which have economies based on immediate rather than delayed return, are assertively egalitarian. Equality is achieved through direct, individual access to resources; through direct, individual access to means of coercion and means of mobility which limit the imposition of control; through procedures which prevent saving and accumulation and impose sharing; through mechanisms which allow goods to circulate without making people dependent upon one another… The value systems of non-competitive, egalitarian hunter-gatherers limit the development of agriculture because rules of sharing restrict the investment and savings necessary for agriculture; they may limit the care provided for the incapacitated because of the controls on dependency; they may in principle, extend equality to all mankind.”

How people eat and manipulate their diets is susceptible to structural impacts. These impacts include social relations, market forces, medicine, and social stratification, as seen in the skeletal and medieval documentary evidence discussed above. Bodily investment, mood manipulation, and the codification of medical manipulations of health through the use of food also impact food choices. Of the pre-industrial societies mentioned above, those which are egalitarian seem better able to cultivate health (as represented by freedom from chronic disease) through diet optimisation. In contrast, hierarchisation, government directives, and labour commodification appear to lead to ill health through the narrowing of the crops produced and the curtailment of local foraging practices. When pre-industrial peoples are left to develop their own hunting and food-growing practices, without interference from a centralised state’s directives to produce certain crops and not others, greater health seems to result.

 

References

Adas, Michael (2001). Agricultural and Pastoral Societies in the Ancient and Classical World. Philadelphia: Temple University Press.

Aiello, Leslie C. and Wheeler, Peter (1995). The Expensive-Tissue Hypothesis: The Brain and the Digestive System in Human and Primate Evolution. Current Anthropology 36(2): 199-221.

Benyshek, D. C. et al. (2001). A reconsideration of the origins of the type 2 diabetes epidemic among Native Americans and the implications for intervention policy. Medical Anthropology 20(1): 25-64.

Cordain, L. et al (2002). The paradoxical nature of hunter-gatherer diets: meat-based, yet non-atherogenic. European Journal of Clinical Nutrition 56: Supplement 1, S1-S11.

Dwyer, Peter D. (1985). A Hunt in New Guinea: Some Difficulties for Optimal Foraging Theory. Man, New Series 20(2): 243-253.

Fujita, Masako, Roth, Eric A., Nathan, Martha A. and Fratkin, Elliot. (2004) Sedentism, seasonality, and economic status: A multivariate analysis of maternal dietary and health statuses between pastoral and agricultural Ariaal and Rendille communities in northern Kenya. American Journal of Physical Anthropology 123:277-291.

Gurven, Michael and Kaplan, Hillard (2007). Longevity Among Hunter-Gatherers: A Cross-Cultural Examination. Population and Development Review 33(2): 321-365.

Hawkes et al (1982). Why hunters gather: optimal foraging and the Ache of eastern Paraguay. American Ethnologist 9: 379-398.

Hawkes, Kristen, O’Connell, James F. (1985) Optimal Foraging Models and the Case of the !Kung

Kaplan and Robson (2002). The emergence of humans: The coevolution of intelligence and longevity with intergenerational transfers. Proceedings of the National Academy of Sciences of the United States of America.

Keegan, William F. (1986). The Optimal Foraging Analysis of Horticultural Production. American Anthropologist, New Series 88(1): 92-107.

O’Dea, K. (1984). Marked improvement in carbohydrate and lipid metabolism in diabetic Australian Aborigines after temporary reversion to traditional lifestyle. Diabetes 33: 596-603.

O’Dea, K. (1991). Traditional Diet and Food Preferences of Australian Aboriginal Hunter-Gatherers [and Discussion]. Philosophical Transactions of the Royal Society B 334: 233-241.

Pearson, Kathy L. (1997). Nutrition and the Early-Medieval Diet. Speculum 72(1): 1-32.

Pechenkina, Ekaterina A., Benfer, Robert A. and Zhijun, Wang (2002). Diet and health changes at the end of the Chinese neolithic: The Yangshao/Longshan transition in Shaanxi province. American Journal of Physical Anthropology 117: 15-36.

Prasad, C. (1998) Food, mood and health: a neurobiologic outlook. Brazilian Journal of Medical and Biological Research 31:1517-1527.

Robson and Kaplan (2003). The Evolution of Human Life Expectancy and Intelligence in Hunter-Gatherer Economies. The American Economic Review.

Woodburn, James (1982). Egalitarian Societies. Man, New Series 17(3): 431-451.

Traditional Mongolian Medicine: Three Perspectives

During the August break last year, I had the chance to travel to Mongolia, where I met with three practitioners of Mongolian Traditional Medicine. MTM can be loosely divided into three distinct styles, based on the type of practitioner: shamans, Buddhist monk-doctors, or “secular” physicians practising in the local, Western- (i.e. Russian-) style hospitals. The three types of practitioner seem to interact only rarely, if at all. The Buddhist monks and the Western physicians seem most likely to exchange views and information; neither works with shamans without a certain degree of friction.

Shamanic Medicine:

My first meeting was with Shagdarjav Sukhbat at the Mongolian shamanism “Golomt” Centre. An author of several books on shamanic medicine (none of which have been translated from Mongolian), he has worked closely with several shamans still found in northwest Mongolia, among the reindeer-herding peoples of Khövsgöl, as well as with Western and Japanese anthropologists. He was knowledgeable about current practices among the shamans, which are invariably influenced by the global transmission of other forms of east Asian medicine, and I was not always able to parse out how long certain practices have been used by shamans in Mongolia. (Mongolian massage, for example, incorporates techniques from Chinese TuiNa, Shiatsu, and Northern Thai massage, and uses the thumb-point, palm, flat palm, and three-finger techniques.) My meeting with him was brief but interesting.

Shamans, he explained to me, can be called for any illness. Their procedure is to take the pulse (of which Mongolians recognise 400 varieties, all similar to the Chinese pulse qualities), analyse the patient’s urine, examine the colour of the face, and diagnose via the person’s shen. One can diagnose a child from the mother’s pulse and vice versa, and the pulse can even be used to divine important aspects of their relationship. For example, Shagdarjav told me that once, when he was a child, he asked a monk-doctor when his mother would be returning from a journey, and the monk determined the time and date from the boy’s pulse.

Urinalysis follows Tibetan practice, and colour, clarity, and smell are all assessed. Healthy urine is a clear, pale yellow, like the colour of butter. (Of course, Mongolia has incredibly fresh and undyed butter…) Diagnosing by facial colour follows the Chinese five element paradigm, with red hues indicative of the heart, yellow of the spleen, and so on. Diagnosing the shen involves looking at the eyes, teeth and tongue colour, the tip of the nose, and around the navel, and can include touch diagnosis. On infants, the ears are also examined.

Despite its similarities to TCM, Mongolian medicine has developed its own unique features due to several factors. The most frequently cited factors were climate and diet: China is a hot country of low elevation, where the people eat spicy food. Mongolia, in contrast, is high altitude, with variable and extreme weather (ranging from freezing temperatures at night to very high temperatures during the day), where the people eat lots of meat (mostly mutton) and are nomadic. Mongolian medicine therefore focuses more on lifestyle adjustments, and features a greater use of moxa, though the shamans seem to favour other modalities.

Given this emphasis on climate, it was no surprise to be told that illness comes from three sources: the sky (Tengr), underground, and the animal/human world. Weakness in the body is caused by sleeping late (or general laziness), by food (especially eating between 4am and noon, which negatively impacts the Stomach), and by sexual desires, which injure the heart when the mind is occupied by them constantly. These diseases are considered part of the human and animal world, and are treated by changing one’s lifestyle.

Those illnesses from the ground are viral in nature, stemming from dirty water that poisons the grasses animals feed on, or which contains parasites that multiply in the water people and animals drink. 1,616 types of illness come from this source, of which 404 are genetic (it was not clear whether they take advantage of a constitutional weakness, induce genetic changes, or act as signal transducers triggering the appearance of illnesses latent in the genes); 404 are “hidden,” or, more accurately, kept transient by a healthy lifestyle; 404 are treatable by a shaman’s soul journey or by mantras, but not by (herbal) pills; and 404 can be treated by a shamanic pill. Just as in Tibetan medicine, so also in Mongolia it is said that 12 illnesses will appear in the future and be very dangerous. Sukhbat theorised that AIDS/HIV, avian flu, and enterovirus could be among those 12 illnesses, but stressed this was just his own idea.

From the sky come the five different types of northern lights, embodying the five colours; these lights have their greatest effect from cock-crow (around 4am) to noon. Ironically, to live healthily, one should wake at 4am and go to bed at sunset, awake with the coming of the universe’s light. (From a TCM perspective, waking activates the wei qi, and thus opening one’s eyes before these lights arrive allows the body to defend itself more adequately, but this reasoning was not offered to me at the time.) Black northern lights signify a viral epidemic. Red indicates that blood diseases (epistaxis, blood in the stool, hematuria, etc.) will appear, while green gives power to viruses in the grasses; after 5-12 years, illness arrives. Yellow light foretells jaundice, possibly with edema; or urine diseases (“gold water” was the phrase used, similar it seems to the jinye fluids, jade fluid); or bone diseases. Blue, which relates to the liver (“qing”, the colour related to the liver in TCM, is better translated as “cyan” rather than “green”: it is the colour grass appears from horseback, the colour of perennial herbs as they shoot up in the springtime; thus “blue” is meant here in the sense of “qing”), is also the colour of water and is good for water; the blue light cleans and purifies the water, but harms the liver. For the shamans of Mongolia, blue is the colour of the world: grasses, lakes, and sky.

Also mentioned in this context was an offhand comment that white flowers bloom in the summer, and that they have the greatest impact on the Lungs.

The diseases which come from the sky can be treated in one of five ways, first and foremost by mantras. Either the shaman will prescribe a mantra meditation, or the shaman, who has recited a specific mantra 100,000 times, can take a cup of vodka or water, circle it in front of him or herself, blow on it three times, take a mouthful, and blow it as a spray on the patient at Ren 6, GV4, and PC8, using a separate mouthful of liquid for each point. The body is like light: where there are holes, illnesses (especially those originating from the sky or northern lights) can enter. The vodka spray helps close those holes.

The other methods of treatment are by shamanic soul journey; by magical herbs made into pills; by massage; and by pure water from its source. (Think of the Bach flower remedy, “rock water.”)

Ultimately, from a shamanic point of view, to live healthily, the sky must be clear, the ground (and its water) be clean, and the food pure. These three elements were present in his prescription for losing weight: every day, early in the morning (around 4 or 5am), wake and drink cold water with honey; at sunset drink fresh yogurt.

Buddhist Medicine:

Buddhist doctors are called “Otoch” in Mongolian, and for my second interview I was taken to the Otoch Manramba Institute of Traditional Mongolian Medicine, which functions at once as a Buddhist temple, clinic, and college. At the gate of the college is a small pagoda of prayer wheels, and one of the first images that greets the visitor (or patient) walking into the clinic is a huge icon of the Future (Maidara) Buddha; below that icon were statues and thangkas of the Medicine Buddha, more prayer wheels, oil lamps, and (if I recall) also a statue of Tara, the Bodhisattva of compassion. Off to the side of the main hall were two low desks, on which were arranged quills, paper, patient records, herbal powders, and a few religious implements. Behind each desk were two colourfully robed Otoch, and in front of them sat several patients, waiting to have their pulses read and medicine dispensed.

Upstairs, I passed a sunlit, glassed-in hall where piles of wild-harvested herbs were being dried, each in its own carefully labelled pile. In the back offices, which overlook the main hall of the temple, I spoke to one of the directors of the College.

The director explained to me that Tibetan and Mongolian traditional medicine have their roots in Indian Buddhist medicine. She was very careful to emphasise that those roots are not in Ayurveda, which is Hindu, but in the Buddhist medicine of South Asia. Unique features of Mongolian Buddhist medicine include the ingredients in patent medicines, which are wild-harvested from the untouched Mongolian countryside (the Mongolians all seemed to be very proud of the pristine nature of their land), massage, and a number of other differences “too many to enumerate.”

Moxa is an important aspect of Mongolian medicine, since Mongolia has a very cold climate; I recalled that the Su Wen notes moxibustion therapy originating in the North. Instead of mugwort, edelweiss forms the primary herb, although additional materials (including woods, metals, wool, and oils) might be added to alter its therapeutic uses. (Edelweiss was also used as firestarter or kindling in the countryside.) Moxa is an ideal medium to treat the most commonly seen diseases in Mongolia, which include cold in the Kidneys, Heart, and Stomach.

Acupuncture is also used, primarily based on Chinese models, although certain extra points are also included – but when I asked to be shown an extra point, I was told we were not in a class.

The college itself currently enrols 216 students for six years of study. 70% are women, and only five of the total student body are monks. Ordinarily, monks study only Buddhist philosophy and chanting, and doctors study only the Buddhist medical tradition; a few, however, study both, in this case only five. All students begin the day with the meditation to the Medicine Buddha, which is very similar to the sutra recited in Traditional Tibetan Medicine.

The school’s curriculum devotes 70% of its time to traditional medicine and 30% to western medicine (morphology, pharmacology, physiology). The school year lasts from 1 September to 1 July; during the two-month break, students practise herbal medicine or nursing, or assist doctors, especially students in their 2nd and 4th years. Classes begin at 8.30 and finish by 3.30 in the afternoon, but both students and teachers stay in school until 8pm or longer, continually reviewing what they know and learning through repetition. Courses include Latin, English, traditional Mongolian script, and Tibetan (Mongol doctors in the past wrote their books on meditation and medicine in Tibetan, which was the scholarly Buddhist language at the time. Many of these works are only now being translated into Mongolian. As a side note, the largest collection of Buddhist scriptures is, in fact, in the National Library in Ulaan Baatar.) Social medicine, medical ethics, medical philosophy, computers, Buddhist philosophy, medical astrology, traditional morphology, western morphology, traditional physiology, western physiology, febrile diseases, pediatrics, genetics, Chinese acupuncture, surgery, plant physiology, and pharmacology are among the other courses students are expected to master. Herbal medicine is learned by studying formulas only; the functions of single herbs are learned in context. Formulas are learned according to the diseases they treat; thus a course in febrile diseases or cold diseases would feature formulas from (if this were a Chinese formulary) the Warm Disease School or the Shang Han Lun.

The most important thing to know about Mongolian Buddhist medicine, however, is that one must do the meditation to the Medicine Buddha every day. Meditation on pulse diagnosis must also be done every day (exactly what this entails was not explained to me), for traditional diagnosis works not only by theory and practice (as with the pulses) but also by meditation. (Although the pulse is very important, I was told.) Why is meditation so important? Because at its most basic level, disease is suffering. Suffering comes from bad karma. Traditional doctors must have good karma, good energy, and a good mentality if they are to treat disease. To achieve this state of purity, one must do meditation every day; otherwise, when the doctor sees patients he (or she) won’t cure the disease. (Which might explain the failure of western medicine to heal some patients.) The students at the college, the director told me, go into the countryside every morning and evening to perform the meditation to the Medicine Buddha.

Secular Medicine:

To interview a doctor practising traditional secular medicine, I was taken to the acupuncture and massage therapy wing of the main hospital in Ulaan Baatar. The woman I interviewed had first been a gastroenterologist (having studied in Russia), then specialised in pediatrics, and only later went into acupuncture and medical massage, which she has been practising for the past ten years.

The medicine practised at the hospital is not connected at all with shamanism, but it does share roots with the Buddhist medicine taught at the Otoch college and, like the other two styles, makes use of special Mongol techniques. However, whereas the Otoch rely first on the pulse and then on herbs, here the physicians do not work with herbs, since, being part of a larger institution, they can direct patients to other specialists. The focus therefore is first on acupuncture, second on cupping, third on moxa, then bloodletting, and finally massage. Acupuncture is used to reconnect the qi when its flow has been interrupted for some reason, and 26-28 gauge seemed to be the normal size of the needles used. Mongolian cupping differs from current Chinese practice (i.e. suction, which the Mongolian doctors believed was not as effective): the doctors in Mongolia use full fire, lighting strips of newsprint and then putting the four-inch cup on the person with the flame still burning inside. Bloodletting is typically viewed as emergency medicine, although it is also used to tonify qi.

I was able to participate in the intake of one patient, a man in his late 40s or early 50s, who had recently had surgery on his right ear. Subsequently, he developed Bell’s palsy on the right side of his face. His left lower leg trembled constantly, and the skin on the left side of his face appeared scarred from burns. His pulses were wiry and full on the left, thin and deficient on the right; the LU and SP positions were weakest. Patients bring their medical files with them, including x-rays, in a clear plastic folder, and we were able to see which cervical vertebrae had also moved out of place. Treatment was cupping for 10 minutes on DaZhu, Shenshu, Feishu, and Jianshu (three cups across at the level of DaZhu, bilaterally). This was followed by needling ST2, 3, 4, 5; LI 4, 11, 19 (or 20); GB14 or YuYao, GB2, 3; Ren 24, 19, xiyan, SP10, ST36, and SP9.

The pulse is taken more proximally and medially than the Chinese pulse; in fact, the first position is below where we take the third position, pushing up against the wrist flexor muscles. (The doctor actually moved my hand from the Japanese position to place it in the correct Mongolian position. I explained to her why I was taking the pulse so distally, and she then had to explain to the patient the discrepancy in pulse-taking styles.)

Two techniques are used in Mongolian acupuncture, the doctor told me. First, from the beginning the teacher teaches students exact point location; then together they check the patient and determine the appropriate points to needle. Points are referred to by their Chinese names, which are left untranslated. Everything is taught within one year. Although thick-gauge needles are used, the patients do not make any noises of complaint, and the previously mentioned patient flinched only when LI20 and ST2 were needled. Needles (imported from China and Korea) are kept in test tubes but used only once.

Root treatments are done on the first three days, and then branch or local treatments subsequently. Point energetics rather than meridian therapy is used.

The most important thing in this style of medicine, I was told, is diagnosis. Proper diagnosis and then practice. (I was also informed that to become an excellent doctor one should study first in a European university.)