British Science Week: 3
All the biological sciences, wonderful though they are, are in a primitive state – and, relative to the intricacies of Nature, they always will be. In the end, for a whole hierarchy of reasons, Nature is beyond our ken. Thus in the all-important science of human nutrition it is still impossible to improve very much on the common-sense advice – to eat “plenty of plants, not much meat, and maximum variety”. As for livestock – the general notion that their diets should approximate as closely as possible to what they would eat in the wild is probably as good as it will ever get. In both cases, the underlying biological theory that really counts is not that of nutrition, or of biochemistry, but of evolution. All of us, whether we have two legs or four, will surely do best on the kinds of foods that our ancestors evolved to cope with and make use of.
Notably, we have hardly begun even to recognize what is surely a vast array of micro-micro nutrients: agents (usually organic) present in natural food in quantities sometimes too minute to measure, and far too numerous to analyse exhaustively, that nonetheless have enormous effects on all our lives. These micro-micro agents act half as nutrients, and half as drugs – not exactly nourishing us, but in a thousand different ways (where “a thousand” is very definitely an underestimate) lubricating the wheels. Already they are of commercial significance. In pharmaceutical circles they are known as “nutraceuticals” – including the plant sterols that seem to lower blood cholesterol. The food industry prefers to call them “functional foods”, notably including an array of live yoghurts. Some years ago I coined the expression “meta-nutrients” but now I prefer to call them “cryptonutrients”.
So what’s the truth of the matter? Do terms like “nutraceutical” or “functional food” really mean anything substantive? Should we recognize “cryptonutrients” as a discrete category of foods, to rank alongside the known shortlist of vitamins and essential minerals, and the three classes of macronutrient (carbohydrate, protein, fat)? If so, what exactly are cryptonutrients? Where do they come from? How come there are such things?
For my part, I reckon that cryptonutrients are a hugely important concept with enormous explanatory power and practical significance, for both human and animal health. Thus it is often claimed, for example, that organically grown crops not only taste better than those that are fertilized and protected by factory-made chemical agents but also have added health benefits – for which mainstream nutritionists can find no evidence. Perhaps the point is that the organically grown crops are better for us because they contain more cryptonutrients – which the conventional nutritionists do not detect because they are not looking for them; and/or, if they did detect them, they would not appreciate their significance. I wrote this piece some time ago — it is based on a lecture I gave in 1999 at the Royal Society no less, at the Caroline Walker Trust award ceremony (where it won an award). But I reckon it is still highly pertinent.
What exactly are “cryptonutrients”?
“Cryptonutrients” (including “functional foods” aka “nutraceuticals”) form a notional category of foods that provide various agents that are presumed to bring some pharmacological benefit, independent of or in addition to their value as a source of nutrients or of energy. They are halfway between drugs and nutrients.
But such a notion evokes incredulity. It sounds like yet another foray into the netherworld of mumbo-jumbo and snake-oil. Quite simply, the idea does not seem to make sense. We know that our bodies need proteins and fats (not least because they are largely made of proteins and fats) plus carbohydrate that in part is structural but is mainly burnt for energy, plus a mixed bag of “vitamins” and minerals that seem to encompass about half the periodic table. But why should we need any more than that, unless we happen to be ill, and require some specific drug as an antidote? Scientists always insist on evidence, but you don’t find evidence unless you look for it, and there is no point in looking for evidence unless the hypothesis you want to follow up seems likely to lead somewhere. No-one in their right mind would bother to gather evidence for an idea that doesn’t seem to make sense in the first place; and if serious biologists feel that functional foods really are in the snake-oil category, then they will shake their heads and keep well away.
But I reckon that when we apply a little evolutionary thinking, we find that the idea behind “functional foods” makes perfect sense. There is very good reason to suppose that our bodies and minds might indeed benefit from a whole range of materials — found in flowering plants, fungi, and microbes — that do not fall easily into the conventional categories of protein, fat, carbohydrate, and “vitamin”. The great Russian-American evolutionary biologist Theodosius Dobzhansky commented in 1973 — it was the title of an essay — that “Nothing in biology makes sense except in the light of evolution”; and this adage applies abundantly to nutritional theory in general, and absolutely to the specific category of “functional foods”. Until we apply evolutionary thinking, the idea that we might derive some specific benefit from the many odd and various agents contained in various foods makes no sense at all. But when we do apply evolutionary thinking — about the modus operandi of natural selection in general, and specifically the evolutionary history of Homo sapiens — we find that the idea of functional food makes very good sense. In fact, we begin to see that the implications might spread through all of medicine and into agriculture and cookery, and indeed could influence a great deal more besides — including our attitude to what are now called ‘hard drugs’.
The notion of co-adaptation
The first premise is that human bodies have been co-evolving alongside other organisms, and adapting to their presence, since long before our ancestors were human. Indeed we should see our physiology — and minds! — as the outcome of three and a half billion years of adaptation to life on Earth, which includes co-adaptation with other organisms. That is, our bodies do not simply reflect the fact that we choose to call ourselves “human”. Before we were human we were non-human primates; before we were primates we were non-human mammals; before that we were non-mammalian vertebrates; and before that, we were non-vertebrate animals; and so on all the way back to the first fairly random collection of molecules that came together to form the first rudiments of life. Indeed, in our genes, we can still see the tracks of ancestors that were still microbial – not literally like the modern prokaryotes (bacteria and archaeans) but comparable with the existing types in their grade of organisation. We carry all this genetic and physiological baggage around with us.
Specifically, and most importantly — over the past billion years or so we and our ancestors co-evolved in the presence of plants, fungi, and prokaryotes; and for the past 150 million years the prevailing plants have been flowering plants, alias angiosperms. Many of those organisms produce agents for their own benefit that we, as consumers, simply cash in on. Thus we make use of the proteins, fats, and carbohydrates that plants produce in seeds for the benefit of their own embryos. Some vitamins, too, were clearly produced originally by plants for their own direct benefit — probably including ascorbic acid, which is one of nature’s principal anti-oxidants. Most animals make their own ascorbic acid but we, human beings, do not, and we need to acquire it ready-made from our food — which for most people means from plants. Then we call it vitamin C. (I say “most people” because, apparently, Inuit living traditional lives got much or most of their vitamin C from fresh seal meat).
But plants, fungi, and bacteria also produce a range of materials that are specifically intended to be toxins — agents meant to deter us and other animals from eating them. Some of these toxins are merely repellent, like tannins; others are frankly poisonous. I suggest that the animals in turn have evolved mechanisms that help them — including us — not merely to cope with the toxins but eventually to make use of them; that those erstwhile toxins now form at least part of the miscellaneous list of agents that we acknowledge as “vitamins”; and — the thesis of this talk — that they include the materials that I like to call cryptonutrients.
So how does a chemical agent that was originally produced to poison would-be consumers including us evolve to become an important component of their diet? The point is most clearly illustrated by reference not to a plant toxin but to the gas that every schoolchild knows is essential for minute-by-minute survival: oxygen.
How we came to need oxygen
When our ancestors were still microbes, about two billion years ago, they were suddenly subjected to the rudest possible shock. The atmosphere that previously had contained benign and easy-to-cope-with gases like ammonia and carbon dioxide, and probably hydrogen cyanide, suddenly began to be laced with oxygen. It was produced by the bacteria that had invented a new form of photosynthesis: roughly the same variety of photosynthesis that is still practiced by cyanobacteria (and many other bacteria) and by seaweeds and plants. Oxygen is extremely lively stuff, and very difficult for living organisms to handle. There would be virtually no free oxygen in the atmosphere if it were not for the photosynthetic plants and bacteria that pump it out in such vast amounts. “Free” oxygen gas would simply react with other gases and with the continental rocks and so effectively disappear.
So what did the ancient organisms do when first assailed with oxygen? Many simply went extinct. Others, or their descendants — the kind now known as “anaerobes” — still exist in marshes and hot springs and other sequestered places. But they are killed if exposed to oxygen. Some, however, developed mechanisms for coping with oxygen: essentially, detoxifying mechanisms. Some of these latter organisms (or their descendants) remain with us as “microaerophiles”. They are not poisoned by small amounts of oxygen, because they can detoxify it. They put molecules in its way which are then oxidised away harmlessly.
But the organisms that became the most successful, and went on to inherit the Earth, are the aerobes. They went beyond mere detoxification. For evolution is wonderfully opportunist. Natural selection can make use only of structures, mechanisms, and behaviours that already exist; and (there are thousands of examples) it spectacularly does make use of what it is confronted with, converting mechanisms that evolved in the face of one particular set of pressures into mechanisms that come to serve quite different functions. The way that the superfluous jaw-bones of synapsid reptiles evolved into the middle-ear bones of mammals is a classic example. So once natural selection produced a mechanism that could cope with rogue oxygen, it was always likely to go one step further and turn the detoxification process to more constructive use. Thus one way to detoxify oxygen is to sacrifice a few surplus carbohydrates, which in creatures like us mainly means sugars: allow the oxygen to react with them, so that they oxidise away into relatively innocuous carbon dioxide (which in simple organisms then dissolves away in the surroundings) and a little water. This diverts the oxygen away from more essential components of the body — notably the structural fats and proteins, which are extremely vulnerable (especially the fats). But the oxidation of sugars releases energy – and the aerobes — including our own distant ancestors — evolved means to harness that energy. Thus the essentially negative process of detoxification has evolved to become the Earth’s most efficient mechanism of respiration, for aerobic respiration releases more energy per gram of fuel — where fuel means sugar — than any of the various anaerobic mechanisms.
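The energetic pay-off described above can be made concrete. The equations below are the standard textbook chemistry, not figures from the original lecture: the complete aerobic oxidation of glucose, compared with one common anaerobic route (fermentation to lactate), showing why the aerobes win so decisively on energy per gram of sugar.

```latex
% Aerobic respiration: complete oxidation of one glucose molecule
\mathrm{C_6H_{12}O_6} + 6\,\mathrm{O_2} \;\longrightarrow\; 6\,\mathrm{CO_2} + 6\,\mathrm{H_2O}
\qquad (\approx 30\text{--}32\ \text{ATP per glucose})

% Anaerobic glycolysis to lactate, by contrast:
\mathrm{C_6H_{12}O_6} \;\longrightarrow\; 2\,\mathrm{CH_3CH(OH)COOH}
\qquad (\text{only } 2\ \text{ATP per glucose})
```

On these standard figures, harnessing the erstwhile toxin multiplies the energy yield of the same fuel roughly fifteen-fold.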
Yet aerobic organisms like us, which actually need oxygen to survive, still possess mechanisms whose function is to cope with rogue oxygen; and some of the anti-oxidising materials involved are acquired ready-made, in the form of vitamins. Vitamin C is one; folic acid and uric acid evidently serve as anti-oxidants too.
So it was that oxygen was first produced as ‘twere by accident — a by-product of photosynthesis; and when it first appeared in the atmosphere, it surely was the mother of all pollutants. But then some creatures evolved the means to cope with it and others went one step further and began to use the erstwhile toxin in a much-improved form of respiration.
And, I suggest, the same general kind of co-adaptive processes that enabled our own ancestors to evolve the mechanisms of aerobic respiration out of mechanisms that originally developed to detoxify oxygen, can be seen in the ways in which we cope with other toxins. Agents that plants, fungi, and bacteria produced as noxious by-products — or specifically to stop us from eating them — have now become important or essential to us. First we evolved the means to detoxify those noxious agents and then — out of the detoxification mechanisms and the toxins themselves — we evolved mechanisms that have become an essential, or at least highly desirable, component of our physiology.
The de-tox principle in action
Most leaves of wild plants are toxic, at least to some extent. They just don’t want to be eaten. The animals that manage to live on the leaves of tropical trees commonly possess spectacular detoxifying mechanisms: they include the South American hoatzin, the leaf-eating monkeys, and the koala, which lives exclusively on eucalyptus leaves — pure poison, apart from the fibre, which is like barbed wire. The livers of dogs are also remarkably good at detoxifying the many foul agents produced by bacteria. They cope marvellously with putrescence.
More broadly, we should envisage an arms race between the plants and fungi that want to avoid being eaten, and the animals — including us — that want to eat them. Arms-race is an extremely important concept in evolution. But of course, the arms race is not finished. Arms-races never are. Some of the agents produced by plants, fungi, and microbes still poison us. Many plants, fungi, and microbes are notoriously toxic, from deadly nightshade through honey agaric to Clostridium botulinum. In these instances the plants et al are winning. Some such agents are not particularly good for us, but we or our pre-human ancestors have evolved means to cope with them. In such cases, the arms-race has reached a stalemate.
But, I suggest, we have adapted to a great many of these once toxic agents in the same way we have adapted to oxygen. We have moved a step beyond mere tolerance. We have put the erstwhile toxins to good use and now we positively need the things that the plants once produced to keep us at arm’s length. Some at least of the miscellaneous class of materials known as ‘vitamins’, I suggest, are precisely of this kind. They are materials produced by plants for their own purposes (probably including the repulsion of animals) which animals first had to evolve means of coping with — and then came to rely upon. Why not? These chemically complex agents have all kinds of possibilities. They might in principle be put to all kinds of uses apart from the ones that the plant (or fungus or microbe) first produced them for. It really would be surprising if the creatures that ate the plants did not explore some of these possibilities.
By the same token, the rag-bag of agents known as vitamins makes no sense at all unless and until we envisage the scenario of co-adaptation and of arms-race. Vitamins demonstrate Dobzhansky’s point exactly: “Nothing in biology makes sense except in the light of evolution”. The only conceptual difference between vitamins and cryptonutrients is that vitamins really are vital. If any one of them is missing or deficient then things start to go seriously wrong, and the result may indeed be fatal. Cryptonutrients, I suggest, are like quasi-vitamins: our bodies and perhaps our minds are compromised if they are not there, but their influence is subtle. So it is easy to see when people are short of vitamin C — they are irritable and tired and their gums swell and bleed, and eventually they die of scurvy. Cryptonutrient deficiency does not have such dramatic effects, and so is far harder to diagnose, but it is compromising nonetheless.
In short, if we want to realise our full potential — our genetic potential — then we might well be advised to eat strange things – things that contain vitamins or recondite minerals or cryptonutrients. Physiologists these past 100 years or so have conscientiously and very cleverly observed that human beings do need a range of vitamins, and have asked how we can best obtain them. But I have never seen a serious discussion in a book of nutrition or physiology as to why we need vitamins in the first place — why we should have become hooked on particular recondite products from what are sometimes rather recondite organisms. It simply does not make sense: not until we start looking at the generalities of evolution, and the particularities of human evolution.
“Pharmacological Impoverishment”
If this notion is correct — that human beings have evolved a need for a host of recondite plant and microbial products, just as we have evolved a need for oxygen — then the implications could be huge. For it is extremely unlikely that biologists, through their efforts this past 200 years (which is roughly how long the science of biochemistry has existed), have identified all the agents that plants, fungi, and microbes produce, to which we have adapted; the ones that poison us, the ones that we can cope with, and the ones for which the various mechanisms of our bodies have acquired a definite need.
The agents that would be most difficult to pin down are those that are neither outstandingly toxic, nor absolutely vital for survival. These are the agents that might do us good — that is, are required for optimal functioning — but we can nonetheless get along without. It is difficult to identify any such agent because the effects are liable to be subtle, and variable. It is made doubly difficult by “the cocktail effect”: physiologically potent agents acting in combinations do not have the same effect as they would on their own. It would be extremely difficult to identify all of them because the list is liable to be long and complicated, and there hasn’t been time yet.
But also — and even more importantly — very few people have looked methodically for such agents for the simple reason that it did not seem worthwhile. Scientists pursue specific hypotheses; and the particular hypothesis which says that such agents ought to exist, and ought to be worth seeking out, has not to my knowledge been proposed in a plausible form. That is, it has not been spelled out in evolutionary terms. Instead, modern nutritional theory has grown up with the idea that there are five main categories of food that human beings have to eat: carbohydrates, proteins, fats, the rag-bag of vitamins, and various minerals. Anything that does not fall obviously into those categories has been assumed effectively to be ‘non-food’ — although some of the agents in the ‘non-food’ category have been shown to have specific pharmacological action, and these have been categorised variously as toxins or drugs (with considerable overlap between the two).
In general — as usual — westerners have taken an ambiguous and indeed contradictory view of these non-foods. On the one hand, Puritanism has prevailed: the notion that unless the things that we ingest fall neatly into the carbohydrate-fat-protein-vitamin-mineral category, then we should probably do without them. In such a puritanical vein western law in general forbids all psychotropic agents — marijuana, opiates, cocaine, mescaline etc — apart from those that slipped through the net before the law got its act together, such as alcohol, caffeine, and nicotine (though they have all been banned at various times by various cults and societies).
On the other hand, we add — cavalierly I would say — a huge range of chemical agents to food that are loosely categorised as “permitted additives”, which are intended to compensate for the fact that food manufacturers have managed to produce food without flavour, and to make it easier to distribute that food to the far corners of the globe. In other words, on the one hand we have knee-jerk puritanism — the assumption that bodies are better off when they are not assailed with materials that do not readily fall into the conventional categories of “food”; but on the other we give commercial organizations carte blanche to lace our food with whatever they choose, provided it has not been shown to cause cancer in mice. This is the age of science, but societies in general are led by commerce.
But I will leave others to talk about additives. I want instead to explore the idea that we actually have some need for, or would in subtle ways benefit from, a whole range of chemical agents, produced by plants, fungi, and microbes, that so far are simply unexplored. Our present diet is liable to be deficient in such agents for a range of reasons:
1: Our present diet is based on a narrow range of crops (and livestock), evolved and developed from the kinds that our Upper Palaeolithic and Neolithic ancestors happened to have available. Hunting-gathering people have commonly been found to make regular use of scores of different wild plants from a wide range of plant families (and different plant families tend to be pharmacologically distinct; chemistry runs in families) whereas the range of crops regularly eaten by most people in the western world is rather small.
2: The post-Neolithic diet is largely grain-based; and grains, being the seeds of grasses, are pharmacologically rather bland. Contrast the seeds of, say, legumes (peas, beans, and so on), or of umbellifers — sweet cicely, caraway, and celery among them – plus of course the roots of carrots and parsnips and the rest, and the stems of angelica. (In fact the erstwhile family Leguminosae is now called Fabaceae and the former Umbelliferae is now called Apiaceae, but the old-style vernacular, including “legumes” and “umbellifers”, lives on).
3: Modern crops have been conscientiously selected and bred, over many centuries, largely for yield and for lack of toxicity. Yield is itself largely incompatible with pharmacological variety, since it takes a great deal of energy to produce secondary metabolites (alkaloids etc), and in high-yield crops this energy is diverted into starch and cellulose (which are the principal source of dry mass: ie, effective yield). Lack of toxicity is obviously necessary: eg the wild ancestors of today’s solanaceous crops (tomatoes, potatoes, capsicums) and today’s leguminous crops were often highly toxic; and wild parsnip too is vicious stuff, like many a wild umbellifer. And so on. But in reducing toxicity we also reduce the pharmacological variety and impact in general. The items in the supermarket seem extremely varied but they are all based on a rather narrow range of plants, each of which (apart from the herbs and spices) is pharmacologically blander than its wild counterpart. Variety of brand-name does not imply variety of underlying chemistry.
Putting all the above thoughts together, I reckon (ie, it is at least a worthwhile hypothesis) that human beings and our pet animals and farm livestock might well be suffering from pharmacological impoverishment (an expression to which I lay claim). Our puritanism tells us that if what we eat does not fall into the easy categories of carbohydrate-fat-protein-vitamins, then it is ipso facto bad (permitted additives aside, of course!). But there is no reason to assume that knee-jerk puritanism is intelligent; that it is truly based upon understanding. Perhaps, instead, our puritanism is depriving us of a whole range of materials to which our bodies have adapted over the past three billion years: the results of co-evolution between our ancestors and the creatures they grew up amongst.
Suppose, now, that this general hypothesis is right: that the many mechanisms of our bodies are indeed adapted to a range — possibly a huge range — of chemical agents produced by plants, fungi, and microbes that they can survive without (ie, we are not talking simply about the recognised vitamins) but would nonetheless benefit from. Suppose it is the case too — and it undoubtedly is the case — that science has hardly begun to explore this range of agents, at least not in an orderly way, partly because formal investigation would be extremely difficult and often would lead nowhere, but largely too because it never occurred to anyone to look. After all, this hypothesis springs from evolutionary biology; and biochemists and nutritionists are not, for the most part, evolutionary theorists. If the notion of pharmacological impoverishment is broadly correct, what would be the implications?
Drugs, cows, and the concept of metanutrition
The implication that I happened to think of first is in the field of psychotropic agents: marijuana, opiates, and the rest. In puritanical vein — that is, largely for historical reasons — we assume such agents must be bad for us. Puritanism is an emotional — a moral — stance, but the question of whether drugs are good or bad for us is, in the end, a matter for science. In this instance, however, the emotion leads. Because our moral history tells us that drugs are bad, we do not even look to see whether and to what extent that is the case, and whether in fact the yearning for drugs that some people develop is a manifestation of pharmacological impoverishment. That is, the specific hypothesis is that our nervous systems evolved in the presence of a range of peculiar materials produced by plants, and function better in their presence — just as our bodies in general have evolved to function best in the presence of oxygen. Our nervous systems are now deprived of many essential or quasi-essential agents, largely because our diet is no longer based on a range of wild plants and we make less use of fermented foods.
If we followed this hypothesis through, then instead of waging a “war against drugs” — a war which, of course, is already lost, and serves only to ensure that the Mafiosi are among the world’s richest businessmen — then we might begin to gather some sound data about the drugs that are now banned out of hand; and on the basis of that data set the pharmacologists to work, to see if they can tweak the molecules, and produce agents that truly benefit our nervous systems, with a minimum of undesirable side effects. I don’t want to pursue this issue in this context. I just want to point out that our present attitude to psychotropic drugs is at the very least simplistic; that it takes little or no account of what could be the important biological realities; and that policies based on such naivety seem bound to fail, as indeed they are doing, in spectacular style.
More broadly, I suggest that this broad, evolutionary idea of pharmacological impoverishment might lead us to feel that present-day nutritional science, taken in the round, should be seen simply as a first approximation. Yes, it was extremely clever to perceive that human beings (and other animals) need carbohydrates, fats, proteins, and a rag-bag of vitamins, and this wonderful insight has been extremely fruitful. But life is more complicated than that — precisely because it is evolved; and evolved systems are full of loose ends which would not be the case if they had been designed by engineers from Sony or Ferrari. Of course, living systems are far more intricate and wonderful than anything that any human designer could produce, but because they are evolved, they are also quirky. You cannot second-guess nature. You cannot assume that nature necessarily does the things that human beings might, with their designer’s hats on, think are logically necessary. You have to speculate — ask what might be going on — but you also have to do the natural history: observe what is actually going on. Observe and admire. But don’t second-guess. And don’t presume to override nature with adages based on 17th century puritanism and on 19th and 20th century biochemistry. That just won’t do.
In fact I want to suggest that beyond the first approximation of 20th century nutritional theory, we should envisage what might be called “meta-nutrition”: a far more thorough and exhaustive exploration of how the body actually works, and how it actually interacts with all the many thousands of agents that we put down our throats in the form of food and drink — and indeed of all the consequences that might ensue from not consuming all the things we actually need; all the consequences, that is, of pharmacological impoverishment. Perhaps it would be premature to set up a formal university Department of Metanutrition, although if anybody wanted to do so I would be happy to cheer them on. But in the first instance, I do think that an evolutionary approach to nutrition could well help to make sense of a whole range of anecdotes and philosophies and odd pieces of research that now are somewhat in the air; and it is easy to think up a whole range of research projects — though no easy ones! — that ought to be carried out.
For example, it has long seemed to me that animals in the wild probably make far more use of fermented food than is generally appreciated. Michael Crawford at the Zoological Society of London suggested years ago that lions, when they first make a kill, attack the guts and offal of their prey — guts that are far richer, pharmacologically, than the red meat of the muscles. Is this really so? If it is, what do they get out of it?
Similarly, I suggest that animals in a state of nature make far more use of natural fermentations than we are generally aware of. So we ply our dogs with Pal and Pedigree Chum, hygienic and carefully balanced, but when I had a dog I found that if you gave him a bone he almost invariably, at some stage, buried it. Was this simply a form of storage? I suspect not. I suspect that fermented bones, exposed to the microbes of the soil, contain a range of recondite agents – including cryptonutrients — that fresh meat does not. Insects in autumn often look dozy not because it’s cool (autumn can be very warm) but, perhaps, because they are drunk on the fumes of rotting fruit. Indeed, animals of all kinds seem quickly to develop a liking for alcohol – perhaps because it is, in truth, a cryptonutrient: one that makes us (and presumably other animals) feel better in the short term and, in modest quantities, may actually be good for us. So it seems that laboratory rats prefer their water with a drop of vodka, and elephants notoriously raid the factories where forest people, out of the gaze of the authorities, attempt to enhance their lives with a little illicit poteen. Elephants easily become alcoholics. Incidentally, too, I was told in India by a reliable source that Asian elephants commonly eat carrion. They even dig up the bodies of dead people. To me this suggests, not ghoulishness, but a predilection for concentrated food in general — it saves eating all that vegetation! — and especially for fermented foods, with all their pharmacological richness.
You might argue, of course, that cows and other grazing ruminants clearly do make use of fermentation, and that — as grazers — they mainly eat grass, by definition. Grass is pharmacologically fairly innocent, as leaves go. Most leaves of most flowering plants are toxic, but grass leaves on the whole are not. Yet grass is among the most successful of modern plants. How come? Well, broadleaf (dicot) flowering plants carry their growing tissue at the tips, so when they are grazed or browsed they lose the bit that really matters. But monocots like grasses grow from the base, so that when the tops are eaten the growing tissue stays safe, close to the ground. So grass can allow animals to eat it with impunity (provided they don’t overgraze). In fact, if grass is ungrazed then in warm, well-watered places it tends to give way to woodland, and in cold wet places it is replaced by sphagnum moss. Over the past 50 million years or so since grass first appeared — and particularly since the Miocene, which began around 25 million years ago — grass and the grazing animals have co-evolved, each needing the other; and a brilliantly successful co-evolution it has been, too. The only proviso that grass makes is to fill its leaves with silica, which regulates the rate of grazing. There are limits, even for grass. Modern domestic grasses, custom-bred to raise cattle quickly, contain much less silica.
But, you might say, if the recondite pharmacological agents produced by plants are so important to the animals that eat them – undoubtedly abetted by the products of fermentation – how come some of the most successful animals on Earth (the ruminants and other grazers) subsist mainly on grass, which is pharmacologically the blandest form of wild vegetation? Answer: they don’t. Ruminants have huge fore-stomachs known as rumens, which are stuffed with bacteria and protozoans that ferment the vegetation. They eat grass, but what they actually absorb – what nourishes them – is largely the product of microbial fermentation. Hind-gut digesters, like elephants, horses, rabbits, and geese, do a similar job in their colons. It is because of this propensity that it is possible, up to a point, to nourish cattle on human sewage. Aesthetically, of course, this is vile, and it shows contempt for the animals, but it works because the microbes of the rumen are able to steal the nitrogen from the effluent and use it to make amino acids and nucleic acids. Indeed, it is conventional and acceptable practice to feed cattle on straw supplemented with urea.
Many animals, too, naturally practise coprophagia. Rabbits and gorillas, for instance, recycle their own dung. Gorillas can be seen to do this in zoos, but they also do it in the wild; the behaviour is not innately aberrant. There is specific evidence that they need vitamin B12, which is produced by their gut bacteria. Dogs eat the dung of other species – one reason that human beings have encouraged them to hang around their camps over the past 100,000 years or so. They keep the place clean. Relatively speaking. My point is, though, that ruminants and other herbivores, even more than us, rely on the products of microbial action; and I do not believe that the chemistry of that action has been exhaustively analysed, nor the effects of that chemistry upon the animal. Here is a rich field for investigation (one that is being pursued, not least, by Prof Tim Spector at King’s College London).
In a comparable vein, Darwin suggested in a general way that humans do nothing that is not to some extent precedented by other animals; and I would find it very surprising indeed if the human predilection for fermented foods – alcohol, cheese and other fermented milks, and pickles of all kinds, sweet and savoury – did not have deep evolutionary roots, and did not to a significant extent reflect nutritional need. It is simply very unlikely that human beings would have evolved such a predilection de novo over the past two million years or so of specifically human evolution. I think we like chutney in the same way, and for the same general reasons, that wasps get drunk and dogs like their bones well buried. Our bodies need the products of microbes, and our minds know, deep down, that this is the case.
In this context, too, we might look again at the many anecdotes which suggest that animals in the wild self-dose: that when they are feeling poorly, they seek out herbs. Many zoos and some farms, for example, provide gardens of parsley, coriander and so on, which animals of all kinds, including monkeys and cats, seem to seek out when there is other evidence of tummy upset. We should look again, too, at the many instances of animals strangely extending what we take to be their normal dietary range: at the elephants that eat corpses; at the red deer on the Scottish island of Rhum which need extra calcium for their antlers and bite the heads off seabird chicks; and so on.
I suspect, for these and many other reasons, that the nutrition of wild animals is not as we think it is; and that we can learn from this. Ah, you will say, but if people living as hunter-gatherers were less impoverished, pharmacologically, than we are, why didn’t they live longer? One reason, presumably, is that they could not cope with infections and were very likely to get injured. But another possible reason is that the arms race with plants and microbes is not over. Our human ancestors would have eaten a great many things that their bodies were truly adapted to – and which we would benefit from – but they also, perforce, consumed more toxins. As for fermentation, there is a very thin line to be trodden between delectation and putrescence, as aficionados of game birds and of exotic cheeses can attest. Indeed, people of all cultures enjoy games of Russian roulette: who can eat the hottest curry, who can tolerate the rottenest pheasant, and so on. The challenge for the metanutritionists, then, is to differentiate between the components in which the plants and the microbes still have the upper hand – and so are toxic – and those to which we are already adapted, and from which, indeed, we would benefit.
More broadly, the concepts of cryptonutrients and of metanutrition, with the subsidiary concept of pharmacological impoverishment, suggest that we should take a new and broader look at whole areas of human endeavour which, at present, western science tends to treat in cavalier fashion. Herbal medicine is one such. Whether or not all the claims traditionally made for nettles or camomile or whatever are justified, it seems to me extremely likely on evolutionary grounds that human beings would benefit in many ways from many of the agents contained in such recondite plants, which at present we disregard. It’s clear, too, that western science should take tonics more seriously. Medicines are what people take when they are obviously ill; but people take tonics even when they already feel quite well, in the hope of feeling even better. Western medicine has largely looked down upon tonics: they are widely perceived to be quackery. Partly, I suppose, this is because many alleged tonics are quackery – in the snake-oil category. But puritanism has also crept in: the largely unexamined and, I suggest, extremely naïve notion that asceticism is innately good, and that the human body is bound to function best when the diet is at its simplest, with nothing included that is not obviously a nutrient.
Finally, of course, westerners have tended to look down on the idea of the tonic partly because our medicine is science-based, meaning that it demands hard evidence. Yet it is always likely to be hard to demonstrate, critically, that intake of some recondite agent produces some subtle but nonetheless worthwhile improvement in general wellbeing. But tonics might be good for us anyway. The notion that many of the strange organic molecules that our fellow creatures produce might be beneficial seems to me highly plausible; and ideas that are plausible and potentially important surely deserve to be explored (an observation that does not apply solely to science).
So where do we go from here?
If the notions of cryptonutrients and metanutrition and pharmacological impoverishment did catch on, what difference would it make? Well, here are five possibilities:
1: First, to be very specific, these ideas could and should bring about significant changes in our attitude to psychotropic drugs, and to the legal and medical ways in which we deal with them. Right now drugs are a horrendous problem, but they needn’t be, if only our attitudes were different.
2: Within a few decades we could see less rigid lines drawn between western, “orthodox” medicine and various forms of traditional medicine, including the many forms of herbal medicine. Future medicine might be far more eclectic, and in general less cocksure, than at present. Specifically, many forms of traditional medicine are far more concerned with tonics and palliatives than with specific therapies. There surely is vast scope for the apothecary.
3: I would hope and expect that the recent trends in the western diet – increasing reliance on just a few plant types, increasing blandness, all tricked out with a crude pharmacopoeia of additives – might be reversed. We might indeed revert to a more ‘primitive’ diet – meaning, in particular, one in which we treat ourselves to a far greater range of the kinds of materials that nature has to offer, the kinds our ancestors evolved to cope with. In detail: more spices, more herbs, more plants in general – but also more emphasis on offal and less on prime steaks and chicken breasts. Future plant breeders might accordingly become more subtle, extending their range of desiderata beyond the present obsession with yield, appearance, and shelf-life. We need pharmacological variety without frank toxicity.
4: Cryptonutrients could also explain why organic farmers and growers commonly claim that organically raised food is healthier – and why scientists cannot find what they consider to be a significant difference. The scientists do not look for cryptonutrients; and even if they did find them, they might not recognize their significance. “Hard” evidence will always be elusive, but that does not mean that the claims are invalid.
5: To develop the science of metanutrition, and reap its putative benefits, will require science and technology of a very high order. But we need, as a society – indeed as members of the human species – to ensure that the necessary science and technology do not simply become the property of particular companies, making fortunes for particular groups of shareholders. One of the outstanding tasks for the 21st century is to ensure that science and technology in all contexts become what the philosopher Ivan Illich called “tools for conviviality”.
Science as it stands is already wondrous. But as always in science the best is yet to come.