Dorian Q. Fuller, Contrasting Patterns in Crop Domestication and Domestication Rates: Recent Archaeobotanical Insights from the Old World, Annals of Botany, Volume 100, Issue 5, October 2007, Pages 903–924, https://doi.org/10.1093/aob/mcm048
Abstract
Archaeobotany, the study of plant remains from sites of ancient human activity, provides data for studying the initial evolution of domesticated plants. An important background to this is defining the domestication syndrome, those traits by which domesticated plants differ from wild relatives. These traits include features that have been selected under the conditions of cultivation. From archaeological remains the easiest traits to study are seed size and, in cereal crops, the loss of natural seed dispersal.
The rate and order in which these features evolved can now be documented for a few crops of Asia and Africa. This paper explores this in einkorn wheat (Triticum monococcum) and barley (Hordeum vulgare) from the Near East, rice (Oryza sativa) from China, mung (Vigna radiata) and urd (Vigna mungo) beans from India, and pearl millet (Pennisetum glaucum) from west Africa. Brief reference is made to similar data on lentils (Lens culinaris), peas (Pisum sativum), soybean (Glycine max) and adzuki bean (Vigna angularis). Available quantitative data from archaeological finds are compiled to explore changes with domestication. The disjunction in cereals between seed size increase and dispersal is explored, and the rates at which these features evolved are estimated from archaeobotanical data. Contrasts between crops, especially between cereals and pulses, are examined.
These data suggest that in domesticated grasses, changes in grain size and shape evolved prior to non-shattering ears or panicles. Initial grain size increases may have evolved during the first centuries of cultivation, within perhaps 500–1000 years. Non-shattering infructescences were much slower, becoming fixed about 1000–2000 years later. This suggests a need to reconsider the role of sickle harvesting in domestication. Pulses, by contrast, do not show evidence for seed size increase in relation to the earliest cultivation, and seed size increase may be delayed by 2000–4000 years. This implies that conditions that were sufficient to select for larger seed size in Poaceae were not sufficient in Fabaceae. It is proposed that animal-drawn ploughs (or ards) provided the selection pressure for larger seeds in legumes. This implies different thresholds of selective pressure, for example in relation to differing seed ontogenetics and underlying genetic architecture in these families. Pearl millet (Pennisetum glaucum) may show some similarities to the pulses in terms of a lag-time before truly larger-grained forms evolved.
INTRODUCTION
This paper will consider archaeobotanical evidence for the evolutionary stages of domestication and the rates of evolution of the domestication syndrome for a select number of the best archaeologically documented crops in the Old World. The quantitative increase in archaeobotanical data in recent years indicates that the origins of crop cultivation were a dynamic, multi-stage evolutionary process that occurred independently in numerous world regions and involved numerous crops. Systematic sampling for archaeobotanical remains, using the technique of flotation in the field, began slowly in the 1960s and 1970s, and led to the collection of much larger and more complete data sets (Pearsall, 2000; Fuller, 2002). In subsequent years, increasing numbers of sites were sampled and more practitioners of archaeobotany joined the field. In the past two decades, refinements in identification, analysis and modelling have occurred. Taken together, these developments mean that just in the past few years it has become possible to offer new insights into plant domestication and the multiple origins of crops.
As it has been recognized that domestication is a multi-stage process (e.g. Ford, 1985; Harris, 1989, 1996), models have summarized the expected stages and their archaeological visibility. A classic model is that of Harris (1989), who distinguishes four general stages: (1) wild plant food procurement (true hunting and gathering), (2) wild plant food production (the very beginnings of cultivation), (3) systematic cultivation (of morphologically wild plants) and finally (4) agriculture based on domesticated plants. Domestication results from the earlier stages of wild plant food production and systematic cultivation, making crops more dependent on humans for survival but also more productive. Through all of these stages people put increasing labour effort into a single unit of land and a single field of crops; in other words, this tracks an intensification of production. But the reward is increased productivity of that land, and the ability to produce larger surpluses, to feed more people, or to accumulate wealth. An updated version of the Harris chart is given here (Fig. 1), in which I have modified domestication to indicate that it is a protracted quantitative process of change and that rates may vary for different aspects of that change. Two points should be highlighted: first, that there is necessarily a stage of production (cultivation) that precedes morphological domestication, and archaeobotanists have been developing means to recognize this. Second, this transition represents an important change in human behaviour in terms of the nature of efficiency. While for foragers time is an important measure of efficiency (transport times, processing time, etc.) (see Kelly, 1995), for farmers, who have invested labour in smaller parcels of land for delayed returns, space becomes a crucial metric of efficiency, i.e. yield per unit area (Hillman and Davies, 1990: 69–70; Bar-Yosef, 1998). This cultural change is probably crucial to the change in selective pressures on plants that evolve into domesticates.

Fig. 1. An evolutionary model from foraging to agriculture, with archaeobotanical expectations indicated at the bottom (modified from Harris, 1989). The stages of pre-domestication cultivation are shaded. In this version, domestication is represented as a process of gradual frequency change, with an earlier, more rapid ‘semi-domestication’ and a later, slower fixation of full domestication. The gap in time elapsed between these two can be taken as a minimal estimate of domestication rate (d.r.).
AGRICULTURE EVOLVING: THE INTERPLAY OF CULTURAL BEHAVIOUR AND PLANT GENETICS
The origins and evolution of agriculture involve considering how human food-procurement behaviours have changed strategically and how plant populations have evolved in response. The beginnings of food production represent a strategic shift in human behaviour, towards manipulation of the soil environment (clearance, tillage) and of the composition of plant populations in that soil (via preferential seeding and tending of one or a few species). From the vantage point of present-day plant populations, it is possible to identify the closest wild relatives of crops, where and in what habitat they occur, and how these modern domesticates differ from their presumed wild ancestors (e.g. Zohary, 1969; Harlan, 1992). The transition from the ancient wild forms to the modern domesticated forms was an evolutionary process, and it is necessary to ask whether this can be broken down into distinct stages, relating to different selective pressures and human behaviours, and at what rate this evolutionary process took place. Archaeobotany, the study of plant remains from sites of ancient human activity, provides data for studying this process of crop evolution, data which are directly correlated through their stratigraphic context with the archaeological record of human behaviour. Archaeobotany, however, has limits imposed by preservation, and as most remains are preserved by being charred in a fire, we are usually limited to clear morphological distinctions and metrical shifts.
Before considering the archaeobotanical data, we must have in mind a framework for assessing changes. The ‘domestication syndrome’ provides such a framework by highlighting a set of characters that differ between domesticated crops and their wild ancestors (Harlan et al., 1973; Hawkes, 1983; Zohary and Hopf, 2000; Gepts, 2004). These characters can be related to different aspects of cultivation in terms of what causes them to evolve. It should be noted that the domestication syndrome differs for different kinds of crop-plants. Thus, fruit trees, vines and tubers are not ‘domesticated’ in the same way as grain crops such as cereals and pulses, in as much as they tend to be reproduced vegetatively and to represent the human choice of favourable mutants (Zohary and Spiegel-Roy, 1975; Hather, 1996; Kislev et al., 2006). This paper will consider only the domestication of seed-propagated herbaceous annuals.
There has been much discussion about how these domestications evolved. Important contributions come from field ecological studies of wild progenitors (e.g. Zohary, 1969; Hillman, 2000), experimental harvesting of wild progenitors using different methods (Harlan, 1967; Ladizinsky, 1987; Unger-Hamilton, 1989; Hillman and Davies, 1990; Anderson, 1992; Lu, 1998; Kislev et al., 2004) and experimental cultivation of wild progenitors (e.g. Oka and Morishima, 1971; Zohary, 1989; Hillman and Davies, 1990; Willcox, 1992, 1999). Nevertheless there remain important areas of debate about the nature of the changes in human practices between collecting and cultivating, about what motivated those cultural shifts, and about the resulting adaptive changes in the plants. While these changes require sowing of seeds stored from a previous harvest, other selection pressures are involved, which can be related to harvesting methods, soil conditions (created through tillage, etc.) and perhaps crop-processing methods. The domestication syndrome in grain crops usually includes the following six criteria.
1. Elimination/reduction of natural seed dispersal
For example, a non-shattering rachis in cereals, or a non-dehiscent pod in pulses. This is often regarded as the single most important domestication trait (‘domestication’ sensu stricto). It makes a species dependent upon the human farmer for survival. It means that instead of shedding seeds when they are mature, a plant retains them. Those seeds must instead be separated by the addition of human labour (threshing and winnowing), and then dispersed by the farmer. Higher yields can be produced because the farmer can now wait until all, or most, of the grains on a plant have matured. This trait can only evolve under conditions of particular kinds of intensive harvesting that favour plants that retain their seeds, followed by sowing from the harvested seeds. Not all forms of cultivation will select for this trait. While the evolution of this trait has often been attributed to the early use of sickle tools (e.g. Wilke et al., 1972; Hillman and Davies, 1990; Bar-Yosef, 1998), current evidence suggests that this need not be the case, as discussed below.
2. Reduction in seed dispersal aids
This is connected to the last trait but is selected for in a different way. Plants often have a range of structures that aid seed dispersal, including hairs, barbs, awns and even the general shape of the spikelet in grasses. Thus, domesticated wheat spikelets are less hairy, have shorter or no awns and are plump, whereas in the wild they are heavily haired, barbed and aerodynamic in shape (see Hillman and Davies, 1990). All of these tend to be greatly reduced in the domesticated form. This can be considered to have come about through the removal of natural selection for effective dispersal; once that selection is removed, metabolic ‘expenditure’ on these structures is reduced. This trait might evolve under initial cultivation and be regarded as part of ‘semi-domestication’.
3. Trends towards increasing seed/fruit size
This is likely to be selected for in open environments in which larger seedlings have advantages, such as surviving deeper burial within disturbed soils; this trait should therefore be selected for by tillage and cultivation generally (Harlan et al., 1973). Larger seeds are strongly correlated with larger seedlings in many species, including cereal and legume crops (Baskin and Baskin, 2001: 214; Krishnasamy and Seshu, 1989). Comparative ecology indicates that larger seeds generally have competitive advantages over smaller seeds under certain kinds of competition, including deeper burial (Maranon and Grubb, 1993; Westoby et al., 1996). Although there may be some exceptions, this is demonstrated in cereal relatives (e.g. Panicum, Lolium, Avena and Aegilops among grasses; Baskin and Baskin, 2001: 212–213), and in Mediterranean habitats (Maranon and Grubb, 1993). Experimental cultivation of rice found some increase in average grain weight within just five generations (Oka and Morishima, 1971), suggesting this trait can indeed evolve quickly. This trait is the key archaeological indicator of ‘semi-domestication’ in cereals.
4. Loss of germination inhibition
In the wild many seeds will only germinate after certain conditions have passed: conditions of daylength or temperature, or physical damage to the seed coat. Crops tend to germinate as soon as they are wet and planted. This change is often signalled by changes in the seed, such as thinner seed coats. It is also selected for simply by cultivation and sowing from the harvested yield, as those seeds that do not readily germinate will not contribute to the harvest. This has provided a particularly important character for the study of New World Chenopodium domestications (e.g. Smith, 1989; Bruno and Whitehead, 2003). So far it has been less useful for Old World crops: although in principle it can be applied to pulses (Butler, 1989; Plitman and Kislev, 1989), seed coats are rarely preserved in charred archaeological specimens.
5. Synchronous tillering and ripening
This sometimes includes a shift from perennial to annual. Planting at one time and harvesting at one time will favour plants that grow in synchronization.
6. More compact growth habit
For example, reduction in branching, denser spikes or seed heads (the ‘sunflower effect’; Harlan, 1995: 199), or a shift from a climbing to a self-standing habit. Harvesting methods, like those that select for non-shattering types (no. 1, above), will also favour plants with single, compact parts to be harvested.
Of particular importance to the archaeobotanist are those changes that can be identified in archaeological material. This is likely to include nos. 1–4, although no. 4 is only preserved in certain kinds of seeds, and no. 2 may be difficult to recognize because hairs are often destroyed by carbonization. For this reason, especially for most cereals, it is criteria 1 and 3 that archaeologists look at. Grain size (no. 3) is complicated by the potentially wide range of variation in modern populations and by the effects of charring (which causes shrinkage and sometimes distortion). If preserved, remains of the cereal ear rachis or spikelet base can provide clear evidence for the mode of shattering (no. 1). In wild types there should be a smooth scar, indicating normal abscission, whereas in domesticated (but also in very immature) plants the scar will be rough because the ear has been broken apart by threshing.
The contrast between shattering and grain size is significant. Tough (or non-shattering) rachis ears occur as a rare, deleterious genetic mutation in most wild grass populations, as has been demonstrated in wild barley (Kislev, 1997; Kislev et al., 2004). If wild cereals were harvested simply by passing through stands and shaking or beating ears to knock seeds into a basket, then the shattering, wild-type ears would be the ones to predominate in the next year's crop. Ethnographically this is the most common method documented for wild grass and forb seed gathering (e.g. Harris, 1984; Harlan, 1989). In addition, harvesting experiments in wild wheat and wild Setaria millet indicate that this method is significantly more efficient in terms of return per unit of labour time (Hillman and Davies, 1990; Lu, 1998). By contrast, if people harvested with a sickle, cutting the entire ear, plucking individual ears, or pulling plants up by the roots, this would tend to disperse the seeds of shattering plants and retain all non-shattering mutants. These could then be replanted the following year and over time would come to dominate the population at the expense of wild, shattering types (Hillman and Davies, 1990). Under ideal circumstances of new soils being sown, without a seed bank of the wild form, sickle harvesting of mature/near-mature plants, and self-pollination or sufficient distance from wild stands, the tough rachis genotype could evolve very rapidly, with estimates of 20–100 years suggested by Hillman and Davies (1990). As will be seen below, however, the archaeobotanical evidence appears to indicate a much slower process.
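The population-genetic logic behind these rate estimates can be illustrated with a back-of-the-envelope simulation. The sketch below is mine, not a model published by Hillman and Davies (1990); it assumes a fully self-pollinating annual resown each year from the harvested seed, a rare starting frequency for the tough-rachis genotype, and a selection strength s representing the fraction of wild-type (shattering) seed lost to the harvester relative to non-shattering plants. All numbers are illustrative.

```python
def generations_to_fixation(p0=1e-6, s=0.6, threshold=0.99):
    """Generations until the tough-rachis genotype exceeds `threshold`.

    p0: starting frequency of the non-shattering genotype (assumed rare).
    s:  fraction of shattering-type seed lost before/during harvest,
        relative to non-shattering plants (an assumed selection strength).
    Assumes selfing, so genotype frequencies track seed-pool frequencies.
    """
    p, gens = p0, 0
    while p < threshold:
        # Harvested seed pool: mutants contribute all their seed,
        # wild types only the fraction (1 - s) that did not shatter away.
        p = p / (p + (1.0 - p) * (1.0 - s))
        gens += 1
    return gens

# Strong selection (sickle harvest of near-mature stands): fixation
# within a few tens of generations, matching the 20-100 year estimates.
print(generations_to_fixation(s=0.6))
# Weak selection: fixation takes on the order of a thousand or more
# generations, closer to the archaeobotanical record reviewed below.
print(generations_to_fixation(s=0.01))
```

On this simple model the outcome is acutely sensitive to s, which is one way of framing the argument developed below: the slow rates observed archaeologically imply that early harvesting practices exerted only weak selection for non-shattering types.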
SCOPE
This paper will attempt to identify differing modal trajectories in crop domestication, and rates of domestication, based on dated archaeological evidence. Of particular significance is the distinction between different aspects of the domestication syndrome. First there is the evolution of grain size, or the loss of dispersal appendages such as hairs and awns, which can be called ‘semi-domestication’. Second there is the loss of wild seed dispersal (non-shattering), i.e. full ‘domestication’. The time between these two developments can be taken to define a domestication rate (‘d.r.’ in Fig. 1). This provides an archaeological estimate of the time it took for fully domesticated types to evolve in population genetics terms, i.e. to come to dominate cultivated populations and be genetically fixed. Archaeological evidence will be reviewed as it relates to the evolution of different aspects of the domestication syndrome. A small selection of Old World crops is explored, representing those with the best quantitative archaeobotanical evidence to date, drawn from four different centres of domestication in Asia and Africa.
This paper will begin by exploring a typical cereal model for domestication, reviewing current evidence for the evolution of the domestication syndrome in some of the main cereals of the Old World: wheat and barley in the Near East, and rice in southern China. Although the available archaeobotanical evidence is quite different, current indications are that similar evolutionary processes took place in each case, with evolution of grain shape and size preceding the loss of wild-type seed dispersal.
This paper will also explore a pulse model of domestication, as the trajectory followed in the evolution of the full domestication syndrome appears to have differed in legumes. In particular, there appears to be a long lag-time between early cultivation and domestication (in terms of seed dispersal and germination inhibition) and seed size increase. In pulses, the seed size increase must be considered a form of cultivar advancement, occurring much later, after the initial development of agriculture (Fuller and Harvey, 2006: 257). This can be related to changes in agricultural techniques. This will be explored through the archaeobotanical evidence for two closely related pulses domesticated in India, the mungbean (Vigna radiata) and the urdbean (Vigna mungo). A similar model seems to apply to Near Eastern pulses such as lentils (Lens culinaris) and peas (Pisum sativum), and possibly to the east Asian legumes, adzuki bean (Vigna angularis) and soybean (Glycine max). Finally, West African pearl millet (Pennisetum glaucum) is explored as a case in which a cereal shows some similarities to the pulse domestication model. A brief overview of the current state of archaeobotanical research in each region is provided (the Near East, China, India and sub-Saharan Africa).
Archaeobotanical evidence is most often preserved by carbonization (charring), a process that affects the size and shape of seeds. In general, carbonization leads to shrinkage. Based on a range of experimental studies (e.g. Lone et al., 1993; Willcox, 2004; Braadbaart and van Bergen, 2005; Nesbitt, 2006: 21), most carbonized seeds appear to shrink by between 10 and 20 %, with a slight bias towards higher shrinkage in the longest dimension, i.e. a tendency to become more spherical. Therefore, in comparing carbonized seeds with modern seeds some correction factor is needed. In this paper I use a 10 % shrinkage estimate for cereals and a higher 20 % shrinkage estimate for pulses. The higher estimate for pulses should serve to over-emphasize potential seed enlargement in pulses. The most important evidence, however, remains comparisons between archaeological, carbonized assemblages of different sites and periods.
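As a concrete illustration of this correction, the following sketch (mine, not code from the paper) inflates carbonized seed measurements by the stated shrinkage factors before comparison with modern material; the example grain dimensions are invented.

```python
# Shrinkage estimates used in this paper: 10 % for cereals, 20 % for pulses.
SHRINKAGE = {"cereal": 0.10, "pulse": 0.20}

def correct_for_charring(length_mm, width_mm, kind):
    """Estimate pre-charring seed dimensions from a carbonized specimen.

    Applies a uniform linear correction; in reality shrinkage is slightly
    biased towards the longest dimension (see text), so this is a
    simplification.
    """
    factor = 1.0 / (1.0 - SHRINKAGE[kind])
    return length_mm * factor, width_mm * factor

# e.g. a charred barley grain measuring 5.4 x 2.8 mm (hypothetical):
print(correct_for_charring(5.4, 2.8, "cereal"))  # ~ (6.0, 3.11) mm
```

Because pulses are corrected by the larger factor, any apparent size increase in archaeological pulses is, if anything, over-stated, which strengthens the negative findings reported below.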
Note on ages
Throughout this paper ages are referred to in calendar years BC or AD, based on the calibration of radiocarbon ages. For measured radiocarbon ages back to 11 000 BP, calibration is made against tree-ring data, while older dates are calibrated by atmospheric 14C estimated from coral (Reimer et al., 2004). This was done with OxCal v.3·9 software. For this process and software, see the Oxford University Radiocarbon Laboratory webpage: http://c14.arch.ox.ac.uk/.
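For readers unfamiliar with calibration, the schematic sketch below conveys the basic idea: a measured radiocarbon age is mapped onto calendar years via a calibration curve built from tree-ring or coral data. This is an illustration only; the curve points are invented, and real calibration (as in the OxCal software cited above) works with full probability distributions rather than single interpolated values.

```python
import numpy as np

# (radiocarbon age BP, calendar age cal BP): hypothetical curve points,
# standing in for an IntCal-style calibration data set.
curve_c14 = np.array([9000.0, 9500.0, 10000.0])
curve_cal = np.array([10200.0, 10750.0, 11500.0])

def calibrate(c14_age_bp):
    """Map a single radiocarbon age onto the calendar scale by
    linear interpolation along the calibration curve."""
    return float(np.interp(c14_age_bp, curve_c14, curve_cal))

print(calibrate(9700.0))  # ~11050 cal BP under these made-up points
```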
THE CEREAL MODEL FOR DOMESTICATION: WHEAT AND BARLEY
Near Eastern agricultural origins: general background
The Near Eastern centre of crop domestication is often called the ‘fertile crescent’, and, as is well known, a number of crop progenitors can be found in a restricted set of adjacent vegetation zones from the Mediterranean oak woodlands through the open grassland steppe (Fig. 2). The wild wheats (Triticum spp.) and barley (Hordeum vulgare) occur in the slightly drier, more open parkland steppe with dispersed shrubs, wild almond trees and oaks, while the wild pulses of south-west Asia (Lens culinaris, Pisum sativum, Cicer arietinum, Lathyrus sativus, Vicia spp.) occur in the clearings of nearby woodlands and on rocky talus slopes (Zohary and Hopf, 2000). That hunter-gatherers had long been exploiting wild cereals, when available, without cultivation is established from finds at Ohalo II (21 000–18 500 BC) of wild emmer (Triticum dicoccoides) and barley (Hordeum spontaneum) together with several other small-seeded grasses, fruits and acorns (Kislev et al., 1992; Weiss et al., 2004). Climatic changes at the end of the Pleistocene are regarded as important for their impact on the availability of these wild progenitors and on human subsistence, and they underpin the favoured explanations for why cultivation began. Of particular importance is the Younger Dryas dry (cold) episode from approx. 11 500 to 9800 BC (Bar-Yosef, 1998, 2003; Harris, 1998; Hillman et al., 2001; Byrd, 2005). Prior to this event the climate was favourable and dense populations of hunter-gatherers (the Late Epipalaeolithic phase, 12 500–9700 BC) were settled in territories that included some possible year-round settlements. The Younger Dryas brought an end to this, with most sites being abandoned, and it is argued that a few groups may have resorted to cultivation during this period. Village populations reappeared throughout the area during the Pre-Pottery Neolithic A (PPNA) period (9700–8700 BC), and many of them appear to have been cultivators. Finds of domesticated plants are generally widespread in the subsequent Pre-Pottery Neolithic B (PPNB; 8700–6200 BC), and it is by this period that they began to spread beyond the domestication zone into central Turkey, Cyprus, Crete and southern Greece. Domesticated animals first occur about 8200 BC, at the start of the Middle PPNB (Garrard, 2000; Bar-Yosef, 2003; Colledge et al., 2004; Byrd, 2005).

Fig. 2. Map of south-west Asia, showing the locations of sites with archaeobotanical evidence that contributes to understanding the origins and spread of agriculture. Sites are differentiated on the basis of whether they provide evidence for pre-domestication cultivation, enlarged grains, mixed or predominantly domestic-type rachis data. Note that these sites represent a range of periods, and many sites have multiple phases of use, in which case the earliest phase with significant archaeobotanical data is represented. Shaded areas indicate the general distribution of wild progenitors (based on Zohary and Hopf, 2000, with some refinements from Willcox, 2005). It should be noted that wild emmer (Triticum dicoccoides) occurs over a sub-set of the wild barley zone, and mainly in the western part of the crescent.
In recent years two important advances have occurred in archaeobotany: the recognition of pre-domestication cultivation and evidence for different sub-centres of crop domestication within this region. Evidence for pre-domestication cultivation has been recognized for nearly 10 years through the statistical composition of wild seed assemblages (Colledge, 1998, 2001, 2002; Harris, 1998; Willcox, 1999, 2002; Hillman, 2000; Hillman et al., 2001). As is well known from later agricultural periods, archaeobotanical assemblages are made up predominantly of crops and weeds, together with some gathered fruits and nuts (Jones, 1985). This pattern is already recognizable in the PPNA and in some Late Epipalaeolithic sites (Fig. 2), in samples dominated by wild cereals together with seeds of herbaceous taxa that flourish in disturbed soils and are best known today as arable weeds. This includes taxa such as small legumes (Trifolieae), Boraginaceae, small-seeded grasses (Hordeum murinum, Bromus sp., Lolium sp., Lepturus pubescens), Polygonaceae, Silene (Caryophyllaceae), Galium (Rubiaceae) and Centaurea (Asteraceae). Modelling of the impact of Younger Dryas climate change on species availability indicates that only cereals and these weeds show frequency increases against the grain of the expected climate trends through the sequence at Abu Hureyra, suggesting their persistence in a new habitat: the arable field (Hillman et al., 2001). This is also suggested by the continued association of these same weeds with domesticated cereals in later periods (from the PPNB, 8700–6200 BC, onwards).
A number of advances in recent years have made it increasingly clear that separate histories need to be traced for different sub-regions, which were separate in as much as the beginnings of agriculture relied on different combinations of species. Some suggestive evidence comes from genetics. For example, in barley (Hordeum vulgare) two variant genes control whether or not the ear shatters. A recessive mutation at either locus leads to the domesticated condition, while the dominant variant at either locus confers wild-type shattering ears (Zohary and Hopf, 2000: 59–60). The existence of these two variants argues for two domestications of barley. Recent work on emmer wheat has identified two different lineages of a gluten gene which are so different that they are estimated to have diverged hundreds of thousands of years ago, and thus amongst wild emmer wheat, long before domestication. Such evidence implies two separate domestications of emmer (Allaby et al., 1999: 305; Brown, 1999; reviewed in Jones and Brown, 2000). Another source of evidence for multiple domestications of the ‘same’ (or similar) crops comes from refinements in archaeobotanical identification criteria. Thus, for example, it is possible on the basis of grain shape to distinguish einkorn wheat (Triticum monococcum) with single-grained spikelets (from wild Triticum boeoticum subsp. aegilopoides) from einkorn with two-grained spikelets (from wild T. boeoticum subsp. thaoudar or T. urartu). Modern domesticated einkorn (T. monococcum) is normally only one-grained, but archaeobotanical evidence indicates the presence of the two-grained form as a wild cereal from the late Pleistocene in Syria (Hillman, 2000; Willcox, 2002, 2005), and later as a domesticated cereal in Syria, Turkey and Neolithic Europe (Kreuz and Boenke, 2002). It persists in parts of Europe as late as the Iron Age, after 1000 BC (Kreuz and Boenke, 2002; Köhler-Schneider, 2003), and disappears in its Syrian homeland during the Chalcolithic, approx. 5000 BC (Van Zeist, 1999; Kreuz and Boenke, 2002). This implies an additional, two-grained einkorn domestication, although this crop went extinct in prehistory. Similarly, there is now evidence for an extinct emmer-like wheat, with distinctive glume architecture not yet paralleled in any studied modern wild or cultivated landrace but with some similarities to Triticum timopheevi (Jones et al., 2000; Köhler-Schneider, 2003). It is known archaeologically from the eastern half of Europe, Turkey and Turkmenistan, and persists in parts of Europe as late as the Bronze Age (approx. 1000 BC) (Köhler-Schneider, 2003). In addition, early sites in Syria appear to have cultivated a local form of rye (Secale cf. montanum), but rye did not become a major crop of the Neolithic Near East despite occasional later finds (Hillman, 2000: 392), and was probably a different species from the later European rye (Secale cereale), domesticated from a field weed in Late Bronze Age or Iron Age times (approx. 1000 BC) (Küster, 2000). Taken together, the archaeobotanical morphotypes and genetics suggest a minimum of seven domestications of wheat and barley in the Near Eastern Fertile Crescent region, and there is no reason to attribute them all to a single micro-region or a single process of agricultural origins; rather, they point to at least two or perhaps three (Willcox, 2005).
Cereal grain size increase
There is a growing morphometric database for wheat and barley from the Near East (Colledge, 2001, 2004; Willcox, 2004). This indicates that wheat and barley grains increased in size starting in the PPNA and earliest PPNB, before clear and widespread evidence for tough rachises and the loss of natural seed dispersal. It is well known that wild and domesticated cereal grains differ in size, and this has been used to infer the domesticated status of cereals as early as the PPNA and earliest PPNB, at sites in the Jordan Valley (e.g. Fig. 3), on the upper Euphrates in Syria, and in the first settlements on Cyprus (Colledge, 2001, 2004).

Fig. 3. Pre-Pottery Neolithic B wheat grain measurements from the Jordan Valley (after Colledge, 2001). This indicates the predominance of the larger domesticated-type grains. The gap between the two groups is comparable with that in modern reference material. These sites are dominated by wild-type barley rachis (see below).
This evolutionary shift can also be illustrated from internal archaeobotanical evidence, i.e. by charting grain metrics through time, as recently demonstrated with samples from the site of Jerf el Ahmar on the Upper Euphrates (Willcox, 2004). Fig. 4A shows the contrast between the barley grains from the early phase at Jerf el Ahmar (9500–8800 BC) and the much later Chalcolithic site of Kosak Shamali (approx. 5000 BC), in which all of the grains are larger and comparable with the domesticated size range inferred from modern material. If we look at the later phase at Jerf el Ahmar (approx. 8500 BC), however, it can be seen that many of the grains are already of the larger size (Fig. 4B). This implies evolution towards larger grain size during the occupation of this site, even though recovered rachis remains indicate that ears were still of the wild, shattering type. A similar pattern is found for the einkorn wheat grains (Triticum monococcum), which also include some admixture of rye (Secale cereale) (Fig. 4C, D). Given that occupation of Jerf el Ahmar lasted less than a millennium, and perhaps only 500 years, we can estimate that large grains evolved within a few centuries, and certainly in less than a millennium. This explains why large domestic-type grains are already widespread and predominant on most Near Eastern sites by the start of the PPNB (approx. 8700 BC). From the even earlier grain assemblage from Abu Hureyra, where cultivation has been inferred from the arable weed flora, a few large and plump rye grains were found, comparable with domesticated grains (Hillman, 2000; Hillman et al., 2001). These might represent the earliest indications of selection for grain size increase under cultivation, but possibly a local dead-end trajectory.

Fig. 4. Scatter-plots of archaeological grain measurements showing the increase in grain size under early pre-domestication cultivation (after Willcox, 2004). (A) Barley grain measurements, comparing early Pre-Pottery Neolithic A Jerf el Ahmar with the much later domesticated material from Kosak Shamali. (B) Comparison of early and late Jerf el Ahmar, indicating that the shift towards larger grain size had already occurred. (C) Similar comparison of einkorn grains (probably including some rye grains) at early Jerf el Ahmar and Kosak Shamali. (D) Trend towards larger grain sizes over the course of the Jerf el Ahmar occupation.
This evidence raises the question of how large-grained varieties evolved. One possibility is that methods of processing, such as using sieves after threshing and winnowing, served to bias stored cereals towards larger grains (on traditional crop-processing and the use of sieves, see Hillman, 1984). This, however, would tend to remove ‘tail grain’, i.e. the smaller grains on an ear, and would be expected to exert only very weak selection for genetically larger-grained plants, as the selected grains would be from the same genetic population as the discarded small grains. The role of tillage seems more likely. The practice of preparing land with tillage is likely to have been necessary to remove competing vegetation from favourable micro-environments for early cultivation, such as the breaks of slopes where ground-water tables are higher. Such areas would naturally be cloaked in herbaceous flora and under-shrubs such as Tamarix, so tillage would have been necessary (Hillman, 2000: 396; Hillman et al., 2001: 387). Sowing grains into tilled soil with an underlying water table would give advantages to those best able to germinate successfully from deeper burial, namely larger-seeded individuals.
Evolution of the tough rachis
The evolution of non-shattering ears was also a gradual process. Although theoretically it could have happened very quickly, as demonstrated under ideal experimental conditions (Hillman and Davies, 1990), the archaeobotanical evidence indicates a much more gradual evolution of non-shattering ears. A recent quantitative assessment of the wheat and barley rachis remains from a few sites suggested a gradual increase in the proportion of domesticated-type ears over the course of the PPNB (Tanno and Willcox, 2006a). A larger data set has been compiled here, in which evidence for einkorn wheat spikelet bases and barley rachis segments is considered (Fig. 5A, B). In general, there is a contrast between early sites, which are largely or entirely of the wild-type chaff remains, and later sites dominated by domesticated-type remains, with intermediate proportions at sites chronologically in the middle. Although there are individual site exceptions, such as the barley rachis remains from Jordan Valley sites (e.g. Wadi Jilat), these may relate to particular local circumstances of higher reliance on wild plant foods (indicated by other data from this site; Colledge, 2001). In addition, wild cereals, especially barley, may persist as weeds of cultivation. Nevertheless, at a regional level the overall trend is clear.

Fig. 5. Proportions of wild and domesticated barley and einkorn rachis/spikelet remains on early Near Eastern sites, arranged chronologically from left to right, and grouped into broader phases. (A) Barley rachis types, including domesticated (tough), wild (shattering) and uncertain (but more likely wild). (B) Proportions of einkorn spikelet forks/glume bases, including domesticated (tough), wild (shattering) and uncertain (but more likely domesticated). Sites, approximate ages and data sources: Ohalo 2, 21 000–18 500 BC (Kislev et al., 1992); Wadi Hammeh, approx. 12 000 BC (Colledge, 2001); Mureybit, 10 500–9500 BC (Van Zeist and Bakker Heeres, 1986); Iraq-ed-Dubb, approx. 9300 BC (Colledge, 2001); Jerf el Ahmar (early, with two-grained einkorn), 9700–9300 BC (Willcox, 1999, 2002); Wadi Jilat 7, 8800–8300 BC, Wadi Jilat 13, 7000–6500 BC (Colledge, 2001); Aswad, 8700–8000 BC (Van Zeist and Bakker Heeres, 1985; Tanno and Willcox, 2006a); Azraq 31, 7500–7000 BC (Colledge, 2001); Wadi Fidan A, 7500–7000 BC, Wadi Fidan C, 7000–6500 BC (Colledge, 2001); El Kowm, 7500–6800 BC (De Moulins, 1997); Catal Hoyuk, 7400–6800 BC (Fairbairn et al., 2002); Ramad, 7500–6500 BC (Van Zeist and Bakker Heeres, 1985; Tanno and Willcox, 2006a); Magzaliyeh, 7100–6400 BC (Willcox, 2006); Tell el Kherkh, 8600–8300 BC (Tanno and Willcox, 2006a, b); Nevali Cori, 8500–8000 BC (Tanno and Willcox, 2006a); Qaramel, approx. 10 000 BC (Tanno and Willcox, 2006a); Netiv Hagdud, 9500–9000 BC (Kislev, 1997); Cafer Hoyuk, (IX–XIII) 8300–7700 BC, (III–VIII) 7500–7000 BC (De Moulins, 1997); Kosak Shamali, approx. 5000 BC (Tanno and Willcox, 2006a).
These data can then be transformed to produce a regression of domestication rate over time (Fig. 6A, B). In these diagrams individual site proportions are plotted, both as a minimal figure, based only on the clearly non-dehiscent type, and as a larger figure that includes uncertain but possible domesticates. The true proportion of domesticates should fall within this range. Sites are plotted against a median age estimate for each site. In addition, for barley the averages from each phase (e.g. PPNA, Early PPNB, Late PPNB) are plotted. The trend from wild dominance to domesticated dominance is clear but appears fairly gradual and slow. The rates of evolution do not come anywhere near the 20–100 years estimated by Hillman and Davies (1990) on the assumption of sickle harvesting of morphologically wild near-mature plants, or uprooting of whole plants. This vast difference in rate raises questions about how we explain the selection for domesticated non-shattering genotypes, an issue to which I return below.
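The trend lines in Fig. 6 can be reproduced, in spirit, with a few lines of code. The sketch below (mine, with invented placeholder values standing in for the site proportions compiled in Fig. 5) regresses the lower-bound proportion of domesticated-type remains against median site age and reads a fixation span off the fitted line.

```python
import numpy as np

# Median site age (years BC) and lower-bound proportion of
# domesticated-type rachis remains: hypothetical placeholders.
ages_bc = np.array([9500.0, 8700.0, 8000.0, 7200.0, 6500.0])
prop_dom = np.array([0.02, 0.10, 0.35, 0.60, 0.90])

# Least-squares trend line for proportion as a function of calendar age
# (slope is negative because years BC decrease as time moves forward).
slope, intercept = np.polyfit(ages_bc, prop_dom, 1)

# Time for the fitted line to climb from 10 % to 90 % domesticated:
# one simple way to express a domestication rate.
span_years = abs(0.8 / slope)
print(f"10 % -> 90 % fixation span: ~{span_years:.0f} years")
```

Applied to the real barley data, this kind of calculation yields spans on the order of two millennia, as discussed below.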

Fig. 6. Domestication rates in barley and einkorn modelled from archaeobotanical data (based on Fig. 5). The proportion of domesticated types at each site is plotted as a box against a median estimate of site age. A margin of error is indicated by the line connecting to the sum of domesticated and uncertain types (indicated by a cross or x). Trend lines are shown based on the lower estimate. (A) Barley domestication rate model, on which period averages are also plotted for the PPNA, Early PPNB and Late PPNB; the diamond indicates the proportion of domesticated types and the circle the sum of domesticated and uncertain types. (B) Einkorn domestication rate model; the much later Kosak Shamali has been excluded.
From these data we can also suggest that domestication occurred somewhat more quickly in wheat, over perhaps around 1500 years, as opposed to barley, which shifts over a period of 2000 years or slightly more. An earlier domestication date for wheat has been noted previously (e.g. McCorriston, 2000). A number of factors might be suggested to account for this. Slightly higher average rates of cross-pollination are reported in barley, estimated at about 1–2 % (Allard, 1988; Morrell et al., 2005), as opposed to the wheats, for which rates are less than 1 % (Hillman and Davies, 1990: 62), although this seems unlikely to be sufficient in itself. Of importance may be the fact that wild barley (Hordeum spontaneum) is more prone to be a weed of cultivation, thereby maintaining introgression, whereas wheat cultivation, which is more water-demanding and may have been more carefully located, would have been cut off from the wild progenitor. In addition, the persistence of wild barley as a weed will affect the composition of archaeobotanical assemblages, and rachis counts may therefore underestimate the proportion of domesticated types among crop populations.
Most remarkable is the contrast with the evolutionary rate for grain size (semi-domestication), of 500–1000 years. The shift in rachises (full domestication) appears to start mainly after large grain size had already begun to evolve, and we might therefore suggest a minimum estimate of 2000 years for the evolution of both aspects of the domestication syndrome.
CHINESE RICE DOMESTICATION: THE IMPACT OF UNEVEN RIPENING
Chinese agricultural origins: background
Systematic archaeobotany in China has only recently begun and evidence for the beginnings of agriculture is still limited (Fig. 7). In northern China the millets Setaria italica and Panicum miliaceum were the initial crops. The earliest well-documented millets are from approx. 6000 BC at Xinglonggou, in eastern Inner Mongolia, by which time plump-grained Setaria was already established but grains of P. miliaceum were near the wild type in size and shape (Zhao, 2005). Subsequently, millet cultivation was established in much of the Yellow River basin by 5500 BC (the Peiligang, Cishan, Beixin and Dadiwan cultures) (Lu, 1999; Crawford, 2005; Crawford et al., 2005). Rice from the south was added to this agricultural system only in the third millennium BC, with a few rice finds from Late Yangshao contexts (3000–2500 BC) and many more from the Longshan period (2500–2000 BC). In recent years, the orthodoxy has been that rice agriculture began early, perhaps at the start of the Holocene or in the late Pleistocene, in the Middle Yangtze, perhaps amongst seasonally inhabited cave-sites (e.g. Lu, 1999; Crawford, 2005). Critical re-assessment, however, suggests that rice may have been independently domesticated in both the middle and the lower Yangtze river areas, and that domestication was later than has been presumed (Fuller et al., 2007).

Fig. 7. Map of East Asia indicating early millet and rice sites, with an inset of the Yangtze region, showing archaeological sites mentioned in this article: 1, Hemudu; 2, Tianluoshan; 3, Kuahuqiao; 4, Shangshan; 5, Liangzhu; 6, Majiabang area, including Nanzhuangqiao, Luojiajiao and Pu'anqiao; 7, Nanhebang; 8, Maqiao; 9, Songze; 10, Xujiawan; 11, Chuodun; 12, Weidun; 13, Longnan and Caoxieshan; 14, Qiucheng; 15, Longqiuzhuang; 16, Sanxingcun; 17, Lingjiatan; 18, Jiahu; 19, Bashidang; 20, Pengtoushan.
Domestication as morphometric and maturity shift
It is now well established that Oryza sativa was domesticated more than once, with distinct origins for tropical monsoonal indica and marshland, sub-tropical japonica. Genetic studies are now numerous in support of these separate origins (e.g. Chen et al., 1993; Cheng et al., 2003; Londo et al., 2006). Archaeological evidence also favours distinct centres of early rice cultivation in the Middle Ganges valley, India (Fuller, 2002, 2006), and the Middle or Lower Yangtze, China (Lu, 1999). This situation falsifies the older suggestions regarding Chinese Neolithic rice as predominantly indica, or as an ancient ‘intermediate type’ that evolved into both indica and japonica. In light of this, my collaborators and I have reconsidered published sources on early Chinese rice, and begun new archaeobotanical investigations (Fuller et al., 2007). Similar research has recently been pursued on Indian rice (Harvey, 2006). Previous studies essentially began with the assumption that archaeological rice was a crop, and asked only whether it was indica, japonica or something ‘intermediate’ (e.g. Oka, 1988; Zhang, 2002). In fact, the morphometric data of early rice often fit well within the range of wild species, including Oryza rufipogon but also sometimes Oryza spp. not in the sativa complex. On this basis we have suggested that another wild species might sometimes have been utilized as a hunter-gatherer resource (Fig. 8A, B). Based on modern geographical distributions O. officinalis might be suggested (cf. Vaughan, 1994), and this species produces substantial grain numbers per plant, although further consideration of feasibility must take into account potential stand sizes (generally small), as well as the processing techniques that would be required for dehusking this thick-hulled species. Other Oryza species, or a particularly small-grained race of O. rufipogon, should also be considered.

Fig. 8. (A) Scatter plot of length and width of grains measured in modern populations (15 grains measured from 72 populations; Fuller et al., 2007). (B) Grain measurements from selected Neolithic sites, showing maximum and minimum measured ranges with solid lines and statistical standard deviations with dashed lines (as reported). Note that grains from Kuahuqiao, Bashidang and the lower (Majiabang period) levels (8–6) at Longqiuzhuang fall largely or entirely within the expected immature grain proportions, while the latest grains from Longqiuzhuang, Songze period (level 4), indicate a clear shift towards longer and fatter grains that can be regarded as fully mature, and thus domesticated. Chuodun (Late Majiabang) also indicates a shift towards mature japonica-type grains, but suggests local population differences from the domesticated rice at Longqiuzhuang. The small grains from Jiahu are suggestive of wild rice not in the sativa complex, such as O. officinalis. Sources: Kuahuqiao (Zheng et al., 2004b), Longqiuzhuang (Huang and Zhang, 2000), Jiahu (Henan Provincial Institute of Archaeology, 1999), Chuodun (Tang, 2003) and Middle Yangtze Bashidang (Pei, 1998).
Much of the grain, however, is notably long and skinny, which could also be a trait of immature rice (Fig. 8B). Consideration of rice development, with its extended period of uneven ripening (more than 2 weeks; Hoshikawa, 1993), together with expectations from optimal foraging theory and ethnographic observations of harvesting, leads us to expect early users of wild rices to have targeted immature stands to maximize their recovery of near-mature grains. Indeed, to maximize grain yields, wild rice should be targeted in the first 3 or 4 d after the very first grains mature, but at this time slightly more than 40 % of the grains are still immature by more than 6 d. Within a week, potential yields are vastly reduced, with losses on the order of 70–80 % of mature grains (Fig. 9). As with ethnographically documented hunter-gatherers, we would therefore expect wild rice to have been harvested when immature (e.g. the Bagundji wild Panicum gatherers of Australia: Allen, 1974; cf. Harris, 1984).

Fig. 9. (A) Graph indicating the expected relative frequency of grains reaching maturity during eight stages, of 2 d each, on an individual rice plant (based on anthesis data of a modern japonica cultivar, from Hoshikawa, 1993). (B) A graph converting these data into potential grain yields to a hunter-gatherer if this were a morphologically wild plant. This indicates the proportion of grains more than 6 d from maturity, which would differ in grain proportions from mature grains. Although this represents an individual plant, it must be assumed that a population of wild rice has individuals that begin this process at different times over a period of weeks. With cultivation there should be a tendency for plants to become synchronous.
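The harvesting trade-off summarized in Fig. 9 can be sketched numerically. In the toy model below (mine; the stage frequencies are illustrative placeholders, not Hoshikawa's (1993) anthesis data), grain matures over eight 2-day stages, ripe wild-type grain is assumed to shed about 6 d after maturing, and grain more than 6 d short of maturity is counted as visibly immature.

```python
# Fraction of a plant's grain maturing in each of eight 2-day stages
# (hypothetical values summing to 1.0).
stage_frac = [0.05, 0.15, 0.25, 0.25, 0.15, 0.08, 0.05, 0.02]
SHED_AFTER = 6   # days a ripe wild-type grain stays on the panicle
IMMATURE = 6     # grains more than this far from ripening look immature

for day in (1, 5, 9, 13, 17):
    # take the midpoint of stage s as maturity day 2*s + 1
    shattered = sum(f for s, f in enumerate(stage_frac)
                    if 2 * s + 1 + SHED_AFTER < day)
    immature = sum(f for s, f in enumerate(stage_frac)
                   if 2 * s + 1 > day + IMMATURE)
    print(f"day {day:2d}: lost to shattering {shattered:.2f}, "
          f"still immature {immature:.2f}")
```

Even with these made-up numbers, the pattern of Fig. 9 emerges: an early visit loses almost nothing to shattering but returns a large share of immature (long, skinny) grain, while waiting for full maturity forfeits most of the crop, which is consistent with the immature-grain-dominated assemblages described next.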
Archaeologically, the morphometric data available from Kuahuqiao and Longqiuzhuang (mainly of the earlier Majiabang period, which equates with the Hemudu period) suggest that grain assemblages are dominated by immature grains (Fig. 8B). It must be noted that this assumes the mature grains would have been in the range of modern domesticates, but they do appear somewhat plumper than expected for mature O. rufipogon. This in fact is plausible if we consider the likelihood that early pre-domestication cultivation had already begun to select for larger grains. It must be noted, however, that grain size evolution in rice has been complex and not always towards larger grain sizes. Many short-grained japonica races may have shorter, but often still plumper, grains than those of the wild progenitor. It is, nevertheless, clear that grain shape has changed under domestication and may also be affected by the proportions of mature and immature grains represented in a harvest.
In addition, the reduction in hairs on the awns of rice recovered from Hemudu (Sato, 2002) implies relaxation of natural selection for seed dispersal aids, which would be expected under cultivation. The Hemudu culture is rightly famous for its numerous wood and bone spades, tools which imply soil manipulation and probably planting. This is in contrast to Kuahuqiao, which produced just two poorly made prototype spades. The slight difference in grain thickness between Kuahuqiao (6000–5400 BC) and the Majiabang period Longqiuzhuang (5000–4000 BC) could therefore suggest selection for larger grains under early cultivation, and we might therefore place the beginnings of cultivation between the end of the Kuahuqiao phase (approx. 5400 BC) and the early Hemudu/Majiabang phase (from 5000 BC). This would imply a rate of evolution on a par with the grain-size increase in wheat and barley. Thus, the evidence from Hemudu and the earlier Majiabang period (from the early to mid fifth millennium BC) suggests pre-domestication cultivation, which may have started already at the end of or after the Kuahuqiao period. The rice at this stage can be regarded as semi-domesticated, like that of PPNA to early PPNB wheat and barley.
A clear contrast is seen with assemblages from around 4000 BC. The latest assemblage from Longqiuzhuang (early Songze period, just after approx. 4000 BC) has grains which are significantly longer, plumper (2·5–3 mm) and most likely fully mature. In addition, significantly plumper grains have been recovered from Chuodun (late Majiabang, just before approx. 4000 BC). This suggests that an important morphological shift in archaeological rice occurred in the Lower Yangtze region during the later fifth millennium BC. This shift seems most likely to be due to a change towards harvesting of mature panicles as opposed to immature panicles, rather than an evolutionary development in grain shape. Such a shift would imply that it became feasible to allow grains to mature on the plant without loss of the grains, or in other words that tough, domesticated-type rachises had evolved to dominate the rice populations being harvested. This hypothesis is currently being tested by my collaborators and me on large assemblages of preserved rice spikelet bases from the fifth millennium BC site of Tian Luo Shan. Additional evidence comes from measurements of bulliform phytoliths, silica bodies from the leaves of rice (Fig. 10). These bulliforms shift towards larger sizes between the Majiabang and Songze periods, i.e. before and after 4000 BC. Modern studies suggest that the size of these phytoliths relates to plant maturity and tillering stage (Zheng et al., 2003).

Fig. 10. Size increase in Lower Yangtze rice phytoliths. (A) Measured horizontal length and vertical length of rice bulliform phytoliths from Majiabang period samples (M); (B) measurements from samples of the subsequent Songze (S) and Liangzhu (L) phases. The dashed oval represents the distribution of the Majiabang measurements. Data re-plotted from Zheng et al. (1994, 2004a, b) and Wang and Ding (2000).
These data suggest a rate of evolution from the beginning of cultivation to full domestication on the order of 1000–1500 years, similar in magnitude to, but slightly faster than, that in the Near East. An important factor in this is that Late Majiabang period sites are known that preserve early paddy field systems. These consist of networks of shallow pools and ditches, as at Caoxieshan and Chuodun (Zou et al., 2000; Gu, 2003). This development in cultivation techniques would imply greater separation of sown populations from wild stands, and therefore a reduction in cross-pollination with free-growing wild rice, which has been estimated at 10–50 % (Oka and Morishima, 1967). This separation into distinct field systems may be highly significant, as, theoretically, out-breeding species should evolve adaptations to cultivation more slowly (Allard, 1988).
Also important is evidence for climatic change, which is likely to have been reducing populations of wild rice. Globally, the mid Holocene after 6000 BC, and especially between 5000 and 3000 BC, was a period of cooling, as indicated for example by declining methane levels in Greenland ice cores (Blunier et al., 1995). This is linked to declines in monsoon rainfall in East Africa (Gasse, 2000), South Asia (Madella and Fuller, 2006) and East Asia (Wang et al., 1999). This process would be expected to have impacted South China, pushing more tropical taxa such as wild rice further south. Also significant is pollen core evidence from the Lower Yangtze region that indicates a marked decline in broad-leaved trees, including oaks, after 5200 BC (Yu et al., 2000; Yi et al., 2003; Tao et al., 2006).
This is significant because evidence from Kuahuqiao, Hemudu and Tian Luo Shan suggests that it was acorns (Quercus, Lithocarpus, Cyclobalanopsis) that were the dietary staples of these cultures. These were stored in vast quantities in storage pits, along with water chestnuts (Trapa bispinosa) (Zhejiang Province Institute of Archaeology, 2003, 2004; my unpubl. data). As comparative cases indicate, e.g. ethnographic California or the archaeology of the Jomon culture of Japan, tree nuts, and particularly acorns, make ideal staple food sources (e.g. Heizer and Elsasser, 1980: 82–144; Takahashi and Hosoya, 2002; Barlow and Heck, 2002). Therefore, the decline in nut-bearing trees may have promoted increased reliance on, and cultivation of, rice (Fuller et al., 2007).
These climatic factors may also explain the apparent difference in the domestication rate between Near Eastern cereals and East Asian rice. In the Early Holocene, the climate was warmer and wetter and therefore conducive to the expansion and persistence of wild wheat and barley stands in the Near East. By contrast, in Mid-Holocene China wild rice would have been in decline and on its way to local extinction. This may have promoted a somewhat faster rate of domestication.
THE PULSE DOMESTICATION MODEL: SEED SIZE AS EPIPHENOMENON
The domestication syndrome for pulses is essentially the same as that for cereals, with increases in seed size, reduced pod-shattering and, importantly, loss of germination inhibition (Plitman and Kislev, 1989; Smartt, 1990; Zohary and Hopf, 2000). This need not imply, however, that the rates of domestication, and the order in which these traits evolved, were necessarily the same. In fact, the evidence suggests a difference from cereals. This can be illustrated with evidence for the early evolution of the Indian mungbean (Vigna radiata) and urdbean (Vigna mungo) under cultivation.
South Asia has received less attention from archaeologists studying early agriculture, although a large number of minor cereals (millets), pulses, Cucurbitaceae crops and indica-type rices were domesticated there. Archaeology, together with the modern distributions of wild progenitors, suggests at least three, and perhaps five, distinct centres of plant domestication in India, including South India, Gujarat and the Ganges Plain (for a comprehensive review, see Fuller, 2006; also Fuller, 2002). These domestication events are associated with the mid-Holocene, between 4000 and 2000 BC, and occurred amongst local populations of hunter-gatherers in monsoonal India, although some influences from the west due to the spread of livestock cannot be ruled out. In the north-west, early agriculture was based on the dispersal of wheat, barley and winter pulse cultivation from the Near East, together with domesticated animals. Despite the long delay before agriculture developed in other parts of India, the earliest phases suggest local packages of crops into which wheat, barley or Near Eastern pulses were adopted.
The current distribution of wild mung and urd, taken together with their archaeological record, suggests three areas of domestication (Fig. 11). Vigna radiata was probably brought under cultivation in the north-west, perhaps in the Himalayan foothills in the Punjab region, and in the far south, as early finds in both regions are widely separated. Early finds of urdbean come from Gujarat and the northern Peninsula where wild populations of this species persist (Vigna mungo var. sylvestris) without associated wild mungbean (Vigna radiata var. sublobata sensu stricto) (Fuller and Harvey, 2006; see also Tomooka et al., 2003). Early finds in South India of V. radiata occur in the driest savannah environments of the peninsula together with large quantities of horsegram (Macrotyloma uniflorum) and small millets (Brachiaria ramosa, Setaria verticillata), and in some cases associated with introduced domesticates (Fuller et al., 2004). The association with other crops, their large quantity and the difference from the habitat of the wild progenitor all imply cultivation of these early mungbeans (on the past environmental conditions at these sites, see Fuller and Korisettar, 2004).

Fig. 11. A map of the wild progenitors of Vigna radiata and V. mungo in India in relation to the moist deciduous forests and the region with extensive wild rice populations (based on Tomooka et al., 2003; Fuller and Harvey, 2006). Archaeobotanical finds of the Vigna pulses with secure species-level identifications are indicated. Sites are numbered: 1, Semthan; 2, Hund; 3, Balu; 4, Kunal; 5, Burthana Tigrana; 6, Mitithal; 7, Hulas; 8, Hulaskhera; 9, Charda; 10, Imlidh-Kurd; 11, Narhan; 12, Khairadih; 13, Malhar; 14, Senuwar; 15, Tokwa; 16, Mahagara; 17, Koldihwa; 18, Balathal; 19, Babar Kot; 20, Rojdi (two phases); 21, Oriyo Timbo; 22, Kaothe; 23, Tuljapur Garhi; 24, Paithan; 25, Apegaon; 26, Bhokardan; 27, Nevasa; 28, Inamgaon; 29, Terr; 30, Golabai Sassan; 31, Piklihal; 32, Hallur; 33, Tekkalakota, Kurugodu, Sanganakallu and Hiregudda; 34, Hattibelagallu; 35, Sanyasula Cave; 36, Veerapuram; 37, Rupanagudi; 38, Ramapuram, Hanumantaraopeta and Peddamudiyam; 39, Kodumanal; 40, Perur. For details of primary sources, see Fuller and Harvey (2006).
These early Neolithic pulses, however, do not appear to be different in size from their wild progenitors (Fig. 12A, B). Even assuming the upper end of 20 % shrinkage, measured specimens from southern Neolithic sites fall almost entirely within the size range represented by modern wild populations (Fuller and Harvey, 2006). By contrast, those from Early Historic to Early Medieval periods fall in the range of modern domesticated populations (Fig. 13A). These data suggest that on peninsular India selection for increased pulse seed size was slight through the second millennium BC but had occurred by the first millennium AD. The evidence from late Chalcolithic Tuljapur Garhi (late second millennium BC to early first millennium BC) shows a wide size range from wild-type to domesticated-type, although the average falls in the latter range. This site might represent evidence for the actual process of size increase, suggesting that this occurred most markedly during the late second millennium BC through the Iron Age.

Fig. 12. Metrical data for Indian Vigna pulses indicating no size increase with domestication. (A) Modern length–width measurements in Vigna radiata, V. mungo and their wild progenitors. The separation between them has been adjusted for a high estimate of 20 % shrinkage with carbonization and is shown on the other plots as a dashed line. (B) Neolithic Vigna finds from South India (before 1400 BC) (data from Fuller and Harvey, 2006).

Fig. 13. (A) Late Chalcolithic (1400–900 BC), Iron Age (900–200 BC) and Early Historic (200 BC–400 AD) Vigna finds from South India, indicating size increase. (B) Archaeological Vigna measurements from northern India, including Ganges valley Neolithic, Iron Age and Harappan Bronze Age civilization finds, suggesting a correlation between larger size and plough agriculture (data from Fuller and Harvey, 2006).
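These size comparisons all hinge on first correcting modern reference measurements for carbonization shrinkage and only then comparing them with charred archaeological specimens. As a minimal sketch of that arithmetic in Python, the fragment below uses an invented 4.0 mm wild maximum and invented specimen lengths purely for illustration; it is not the procedure or data behind the figures above.

SHRINKAGE = 0.20  # high-end estimate of shrinkage on carbonization

def charred_equivalent(fresh_mm, shrinkage=SHRINKAGE):
    """Scale a fresh (modern) seed dimension down to its expected charred size."""
    return fresh_mm * (1.0 - shrinkage)

# Hypothetical boundary: suppose wild-type seeds rarely exceed 4.0 mm fresh length.
threshold_mm = charred_equivalent(4.0)  # approx. 3.2 mm once charred

def classify(charred_length_mm):
    """Place a charred archaeological seed in the wild or domesticated size range."""
    return "domesticated-size" if charred_length_mm > threshold_mm else "wild-size"

for length in (2.9, 3.1, 3.6):  # example charred lengths in mm
    print(f"{length} mm -> {classify(length)}")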
These data raise the question of what change in environmental conditions, most likely in terms of agricultural practices, selected for seed enlargement in these pulses. Presumably domestication in terms of dispersal and germination was selected for early and rapidly, as it is necessary to make any pulse a worthwhile crop (Ladizinsky, 1987, 1993; Zohary, 1989; Blumler, 1991). If the crop was being routinely harvested, either by pod-plucking or plant uprooting, and sown from stores, we would expect domesticated forms to have evolved with less dehiscent pods and loss of dormancy. But size increase was not part of the initial domestication syndrome. What explains the delay? I would suggest that changes in agricultural techniques, such as deeper tillage (with ards), created a selective advantage for larger-seeded genotypes (Fuller and Harvey, 2006: 257). In this regard the shift during the Iron Age (first millennium BC) is significant, as this is the period when ard tillage began on the Indian peninsula, perhaps already in the latest second millennium BC (cf. Shinde, 1987; Fuller et al., 2001).
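The burial-depth argument can be made concrete with a toy model. In the sketch below the logistic emergence function and all of its coefficients are my own assumptions, chosen only to illustrate the qualitative point: under shallow tillage large and small seeds emerge almost equally well, while deeper sowing sharply amplifies the relative advantage of larger-seeded genotypes.

import math

def p_emerge(seed_mass_mg, depth_cm):
    """Probability of seedling emergence; logistic form and coefficients invented."""
    score = 2.0 + 0.08 * seed_mass_mg - 0.9 * depth_cm
    return 1.0 / (1.0 + math.exp(-score))

for depth_cm in (2.0, 6.0):  # shallow hoe tillage vs deeper ard tillage
    small = p_emerge(25.0, depth_cm)  # small, wild-type seed (assumed mass, mg)
    large = p_emerge(55.0, depth_cm)  # large-seeded genotype (assumed mass, mg)
    print(f"depth {depth_cm} cm: small {small:.2f}, large {large:.2f}, "
          f"advantage {large / small:.1f}x")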
Measurements from northern India suggest a similar pattern in the middle Ganges and Orissa, but a contrast with early pulses from the north-west (Fig. 13D). What can be seen in the plot is that Neolithic seeds fall largely in the expected primitive size range, while later sites such as late Chalcolithic Narhan and later Iron Age and Early Historic sites have large Vigna grains. Interestingly, two early sites, Balu and Kunal from the later third millennium BC, also have large Vigna seeds. Both of these sites, however, are within the Eastern Indus Civilization cultural zone, where we expect deep tillage with ards to have been the norm (Allchin and Allchin, 1997: 170; Miller, 2003). It might be suggested that in the Indus zone enlarged seeds had already been selected for by tillage in the third millennium BC, whereas further east this process did not occur until the late second millennium BC. This of course raises the questions of whether the Vigna pulses introduced into the middle Ganges zone had diffused before the emergence of large-seeded forms, or had diffused from the south rather than the Harappan zone, or whether the absence of positive selection through tillage could have led to a reversion to smaller seed size.
This model provides an explanation for the archaeological difficulty in identifying the beginnings of pulse domestication. Pods are not normally preserved, and germination characters are cryptic in rarely preserved seed coats (Butler, 1989). Only in peas does a clear distinction exist between smooth-testa domesticates (Pisum sativum ssp. sativum) and tuberculate-testa wild peas (Pisum sativum ssp. elatius) (Butler, 1989; Zohary and Hopf, 2000). Early finds of pulses co-occur with cereal crops and weed assemblages, suggesting that they too were crops, but no measurable change in seed size is noted (Garrard, 2000; Zohary and Hopf, 2000; Hillman et al., 2001; Tanno and Willcox, 2006b; Weiss et al., 2006). Even millennia after the beginnings of cereal cultivation and the establishment of domesticated cereals, when agriculture had dispersed to Europe, finds of peas and lentils (e.g. from Neolithic and Early Bronze Age Greece, Bulgaria, Germany and the Netherlands) are little different from their pre-agricultural Near Eastern ancestors in terms of average and minimum sizes, although there is a slight tendency for larger size maxima (e.g. Van Zeist and Bottema, 1971; Bakels, 1978; Hopf, 1973; Housley, 1981; Hansen, 1991). Perhaps the earliest sites with notably large lentils and peas are some Near Eastern pottery Neolithic sites (approx. 6000–5000 BC), such as Erbaba in Turkey and Tepe Sabz in Iran (Helbaek, 1969; Van Zeist and Buitenhuis, 1983), but such sites are the minority, with consistently enlarged pulse seeds encountered only in the Late Bronze Age, Iron Age and Roman periods. Taken together this suggests that there was an even longer delay in serious size increase in Near Eastern pulses than in Indian Vigna, something on the order of 2000–4000 years, or even longer in some regions. It also raises the possibility that some size increases in particular regions and periods were contingent on local circumstances of selection. It may also be noted that limited data on early East Asian soybeans (Glycine max) and adzuki beans (Vigna angularis) indicate that as late as the third millennium BC they were still largely within the wild size range, with enlarged seed sizes from the later Bronze Age or Iron Age (Crawford and Lee, 2003; Crawford et al., 2005), periods in which ploughs were established.
A PULSE-LIKE CEREAL? THE CASE OF AFRICAN PEARL MILLET
A final interesting case is provided by pearl millet (Pennisetum glaucum), an important domesticate of West Africa (D'Andrea and Casey, 2002; Fuller, 2003; Zach and Klee, 2003). Archaeobotanical research in Africa remains very limited, especially given the size of the continent. Most studies have focused on Egypt, but there is a growing database from across the Sahara and in sub-Saharan West Africa. As outlined by Harlan (1971), there is a wide range of crops with varying wild progenitor distributions in Africa (Fig. 14). In terms of major cereal domestications the northern savannahs and the Sahel zone south of the Sahara are important. Sorghum bicolor ssp. bicolor is likely to have originated from the eastern part of the savannahs, such as between Lake Chad and the Western Sudan, or north-west Ethiopia (Harlan and Stemler, 1976). By contrast, pearl millet has been linked to one, or more likely two, domestications in the Sahel zone west of Lake Chad (Tostain, 1992; Fuller, 2003; Neumann, 2003). An important factor in where and when agriculture developed in Africa is climate change. Wetter conditions in the early (10 000–6200 BC) and mid-Holocene (5900–2200 BC) caused a northward shift in vegetation zones, and much of the Sahara was savannah-like. This environment supported populations of pottery-making hunter-gatherers, who utilized a range of wild millet-grasses, including wild Sorghum bicolor in the east (Egypt's Western Desert), and these groups adopted livestock early, certainly by 6000–5000 BC (Marshall and Hildebrand, 2002; Fuller, 2005). As conditions dried, after 3500 BC, and populations with their herds were forced south, some appear to have taken up cultivation. Evidence for the actual transition to agriculture remains elusive, but by the early second millennium BC widely dispersed populations in West Africa were cultivating morphologically domesticated pearl millet (D'Andrea and Casey, 2002). That pearl millet had dispersed to India by approx. 1700 BC (Fuller, 2003) suggests that this process began by the third millennium BC and saw rapid dispersal of cropping across the northern savannahs.

Fig. 14. A synoptic geography of early agricultural developments and precursors in Africa. Shown are the modern distributions of wild Sorghum bicolor and Pennisetum glaucum with genetic connections to the domesticates (after Harlan, 1971; Tostain, 1992). The previously wetter conditions imply a northward shift in the Sahara–Sahel transition (see Gasse, 2000; Marshall and Hildebrand, 2002). Early Holocene ceramic-using forager sites are based on Jesse (2003), and mid-Holocene pastoral sites on Jousse (2004). Early evidence for wild sorghum gathering is indicated (based on Stemler, 1990; Barakat and Fahmy, 1999; Wasylikowa and Dahlberg, 1999). The spread of Near Eastern crops is indicated in the Nile Valley vis-à-vis the pre-ceramic Neolithic distribution in the Eastern Mediterranean. Sites with early pearl millet are numbered: 1, Dhar Tichitt sites (cited in D'Andrea and Casey, 2002); 2, Dhar Oualata sites (Amblard and Pernes, 1989); 3, Djiganyai (MacDonald et al., 2003); 4, Winde Koroji (MacDonald, 1996); 5, Karkarichinkat (cited in D'Andrea and Casey, 2002); 6, Ti-n-Akof (cited in D'Andrea and Casey, 2002); 7, Oursi (cited in D'Andrea and Casey, 2002); 8, Birimi (D'Andrea et al., 2001); 9, Ganjigana (Klee et al., 2004); 10, Kursakata (Zach and Klee, 2003). Historical sites with pearl millet metrical data: 11, Arondo (cited in Zach and Klee, 2003); 12, Jarma (Pelling, 2005); 13, Qasr Ibrim (Steele and Bunting, 1982).
Pearl millet domestication is inferred from two sets of evidence. First, there was loss of natural seed shedding, which is linked to the shift from sessile involucres to development of a non-dehiscent peduncle (Poncet et al., 2000). This shift is already evident from ceramic impressions of pearl millet chaff by 1700–1500 BC in Mauritania (Amblard and Pernes, 1989; MacDonald et al., 2003), and slightly later in Nigeria (Klee et al., 2000, 2004). Second, pearl millet shows a subtle but clear change in grain shape, becoming apically thicker and more club-shaped than its wild counterpart, i.e. an increased thickness/breadth ratio (Brunken et al., 1977; D'Andrea et al., 2001; Zach and Klee, 2003). However, a major increase in seed size appears delayed (D'Andrea et al., 2001: 346). Figure 15A shows the expected range of domesticated and wild forms based on reported modern populations, adjusted for 10 % shrinkage. Figure 15B plots some archaeological populations.

Fig. 15. Scatter plots of pearl millet (Pennisetum glaucum) grain width vs. thickness. (A) Modern population averages and minima of domesticated populations, reduced by 10 % to account for shrinkage, compared with modern wild population averages and maxima, also reduced by 10 %. Dashed line indicates the expected separation between wild and domesticated forms. Sources: Brunken et al. (1977) and Zach and Klee (2003). (B) Plots of archaeological site averages and ranges. Early West African averages (Birimi, 1700–1500 BCE; Kursakata, 1500–800 BCE) fall in the wild zone, although ranges extend into the larger domesticated zone. The earliest finds in India (Surkotada, approx. 1700 BCE) are close to these, as are those from Early Historic (200 BCE–300 CE) Nevasa in southern India. North Indian Narhan (1400–800 BCE) shows a marked shift towards larger sizes comparable with modern domesticates, as does early medieval Qasr Ibrim (Egypt, approx. 450 CE; this find is preserved by desiccation and has been reduced to be comparable with carbonized material). Jarma in south-west Libya may show an apparent shift towards somewhat larger grains during the early first millennium CE, comparable with the size found at medieval Arondo in Senegal. Later Medieval Jarma has shifted back towards the near-wild size range. Sources: Kajale (1977), Steele and Bunting (1982), Chanchala (1995), Saraswat et al. (1994), D'Andrea et al. (2001) and Zach and Klee (2003) (Jarma data from Ruth Pelling, personal communication).
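One convenient property of the thickness/breadth index is worth noting: a uniform shrinkage correction cancels out of the ratio, so grain shape can be compared across charred and fresh material even where absolute size cannot. The short sketch below illustrates this with invented grain dimensions; the 10 % shrinkage figure follows the correction used above.

SHRINK = 0.10  # assumed shrinkage on charring, as used for Fig. 15

def tb_index(thickness_mm, breadth_mm):
    """Thickness/breadth shape index for a grain."""
    return thickness_mm / breadth_mm

def charred_size(fresh_mm):
    """Reduce a fresh dimension by 10 % for comparison with charred grains."""
    return fresh_mm * (1.0 - SHRINK)

# Invented example grain, 1.9 mm thick and 1.6 mm broad:
print(round(tb_index(1.9, 1.6), 2))                              # 1.19 fresh
print(round(tb_index(charred_size(1.9), charred_size(1.6)), 2))  # 1.19 charred too
print(round(charred_size(2.4), 2))  # absolute size does change: 2.4 -> 2.16 mm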
Of note is that early West African populations, from the second and first millennia BC, have their averages firmly in the wild size range, although there are long tails of variation that extend into the larger size range (e.g. at Birimi). One of the earliest finds of pearl millet from India comes from Surkotada, Gujarat, approx. 1700 BC, and can be seen to fall with these early domesticated African populations. By contrast, a rather later Gangetic population from Narhan (where ard tillage was established, and where larger Vigna pulses were present, see Fig. 13D) is markedly larger, suggesting selection for larger-grained pearl millet. In Africa larger-grained populations appear in the first millennium AD, represented by finds from Nubia and Libya, as well as Medieval Senegal. However, the continued small-grained populations in Early Historic South India (Nevasa) and the apparent reversion in later Medieval Libya suggest that there may be factors that work against gigantism in pearl millet, and that in the absence of reinforcing selection populations may tend towards the smaller size ranges.
This raises questions about the selection pressures involved in large-grained Pennisetum. As both Libya and South India lack wild populations, this cannot be attributed to cross-pollination with wild types. There may be some constraints particular to this crop, as one experiment indicates that optimal germination occurred under higher temperatures that resulted in lower average grain weights (Mohamed et al., 1985). In addition, pearl millet involucres are polymorphic in grain count, with the vast majority producing two grains, a large minority producing one larger grain, and a further minority producing 3–9 grains, which are necessarily smaller (Godbole, 1925). Thus, selection for higher grain counts, and more reliable germination, might conflict with selection for larger seed sizes. Nevertheless, as a working hypothesis, I would propose that, as with the pulses, there was a deeper-burial threshold that selected for gigantism in pearl millet at some times and in some locations. In that regard it might be noted that the larger-grained populations in Libya and Nubia, like that in Gangetic India, are associated with more intensive plough cultures, whereas ards were not present in West Africa and may have declined in post-Garamantean Libya. Thus, we can hypothesize that large-grained varieties evolved under plough systems and then dispersed back to West Africa at a later date. If so, this would imply separate events of grain enlargement in India and north-eastern Africa. While initial cultivation must have selected for non-shattering and slight changes in grain weight and shape (the club shape), serious gigantism may have required a stronger selection pressure and therefore evolved later: a millennium or more later in India, and two millennia later in Africa.
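The grain-count/grain-size trade-off can be put in rough numbers. The morph frequencies and mean grain masses in the sketch below are assumptions for illustration only (with the two-grained morph dominant, as Godbole reported); it simply shows that fixing the one-grained morph would maximize grain size while lowering yield per involucre.

# Assumed morph frequencies and mean grain masses (mg), for illustration only.
morphs = {
    # grains per involucre: (frequency, mean grain mass in mg)
    2: (0.80, 8.0),   # the common two-grained morph
    1: (0.15, 11.0),  # minority morph bearing one larger grain
    4: (0.05, 5.0),   # multi-grained morph with necessarily smaller grains
}

yield_mg = sum(freq * n * mass for n, (freq, mass) in morphs.items())
grains = sum(freq * n for n, (freq, _) in morphs.items())
print(f"{yield_mg:.1f} mg per involucre, mean grain {yield_mg / grains:.1f} mg")
# A pure one-grained crop would carry 11 mg grains but only 11 mg per involucre,
# against ~15.5 mg for the mixture: larger grains at the cost of yield.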
CONCLUSIONS
Archaeological evidence indicates that the entire domestication syndrome did not suddenly appear when people began to cultivate plants. Rather, different aspects of the syndrome evolved in response to the new ecological conditions of early cultivation. What these data suggest is that in domesticated grasses, changes in grain size and shape (‘semi-domestication’) evolved prior to non-shattering ears or panicles (‘domestication’ sensu stricto). While initial grain size increases may have evolved during the first centuries of cultivation, within perhaps 500–1000 years, non-shattering was much slower, becoming fixed about 1000–2000 years later. Pulses, by contrast, do not show evidence for seed size increase in relation to the earliest cultivation, and seed size increase may be delayed by 2000–4000 years. This implies that conditions that were sufficient to select for larger seed size in Poaceae were not sufficient in Fabaceae, which in turn implies different thresholds of selective pressure in relation to differing seed ontogenetics and underlying genetic architecture in these families. Pearl millet (Pennisetum glaucum) may show some similarities to the pulses in terms of a lag-time before truly larger-grained forms evolved. These results may aid in predicting when and where certain crop domestications are likely to have occurred, by counting backwards from the earliest known domesticated finds. Thus, for example, we would predict that pearl millet cultivation began by 3200–2700 BC. These results also raise questions about taxonomically linked differences in evolution under the selection forces of cultivation.
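The 'counting backwards' prediction for pearl millet can be reproduced with one line of arithmetic. In the sketch below the earliest-find date follows the evidence discussed above, while the lag values are my assumption, taken from the modest end of the 1000–2000 year range estimated for the cereals.

# Earliest clearly domesticated (non-shattering) pearl millet: approx. 1700 BC.
earliest_domesticated_bc = 1700
lag_years = (1000, 1500)  # assumed lag from first cultivation to fixation

onset_bc = [earliest_domesticated_bc + lag for lag in lag_years]
print(f"predicted onset of cultivation: {max(onset_bc)}-{min(onset_bc)} BC")
# -> predicted onset of cultivation: 3200-2700 BC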
Reconsidering sickles and cereal domestication
There has been a tendency to assume that harvesting with a sickle was the selective force that led to domestication, i.e. non-shattering (as discussed above). The archaeological evidence, however, does not support this in any documented case. In China, as discussed already, rice grains began to plump and increase in size, but domestication is indicated by the shift towards predominantly mature-grained harvests (and inferred non-shattering) during the fifth millennium BC, and by approx. 4000 BC. In this region there are no clear archaeological sickles until after 3500 BC, in the later Songze period, after which they become widespread in the Liangzhu culture (3300–2200 BC). These sickles may be a cultural borrowing from millet cultivators in central China, where such tools had been in use since at least 5000 BC (cf. Chang, 1986). Even in central and northern China, the earliest sickles occur at sites that already have millet cultivation, and the earliest documented domesticated millets, from Xinglonggou (near Chifeng, China) before 6000 BC (Zhao, 2005), come from a culture without sickles. In China, sickles consistently represent a technological development that came after domesticated plants were fully established.
In the Near East sickles were in use prior to agriculture, and must now be argued to have been transferred to agriculture relatively late, after domestication. Preserved sickles, and more commonly lithic sickle blades, are known from Natufian contexts (13 000–10 500 BC), in a period for which there is no evidence for domesticates, and non-shattering domesticates continued to be absent through the PPNA (through 8800 BC) (see Fig. 3). Microscopic studies of ‘sickle gloss’ have been used to suggest that they were used for cereal harvesting (Unger-Hamilton, 1989; Anderson, 1992), but we cannot rule out harvesting of sedges (Cyperaceae) and reeds (Phragmites) as materials for basketry or thatching. As suggested by Sauer (1958), the early Natufian sickles may have been prototype saws, designed for raw material gathering rather than seed collecting. As indicated by the archaeobotanical evidence reviewed above, the rate of evolution of tough-rachis einkorn and barley is far too slow to be accounted for by a model of strong selective pressure of the kind expected if sickling were carried out regularly, as modelled by Hillman and Davies (1990). Thus, it would appear that early cultivators continued to employ the time-efficient harvesting methods associated with hunter-gatherers. Once crops were cultivated, and populations had noticeably large proportions (majorities) of non-shattering types, the transfer of sickle technology to agriculture may have been seen as an obvious enhancement. In evolutionary terms the sickle is thus an ‘exaptation’ (sensu Gould and Vrba, 1982), in that it developed for some other purpose and was later transferred to the harvesting of already domesticated crops.
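The contrast in rates can be illustrated with textbook population genetics, though this is a simplification and not Hillman and Davies' own model. Iterating the standard haploid selection recursion p' = p(1 + s)/(1 + ps) shows why regular sickle harvesting (a strong per-harvest advantage s) would fix non-shattering within decades, whereas weak, intermittent selection stretches fixation over millennia.

def generations_to_fix(s, p0=0.01, p_end=0.99):
    """Generations for a favoured (non-shattering) type to go from p0 to p_end
    under the haploid selection recursion p' = p(1 + s) / (1 + p*s)."""
    p, gens = p0, 0
    while p < p_end:
        p = p * (1 + s) / (1 + p * s)
        gens += 1
    return gens

for s in (0.6, 0.05, 0.005):  # strong, moderate and weak selection per harvest
    print(f"s = {s}: ~{generations_to_fix(s)} generations")
# s = 0.6 fixes non-shattering in ~20 generations (years, for an annual crop);
# s = 0.005 takes nearly two millennia.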
I would propose alternative explanations for the selection of domesticated-type crops that can account for the slow creep towards domestication. As others have noted, the harvesting of cereals when green, i.e. immature, regardless of technique, will not select for domesticated types (Hillman and Davies, 1990; Willcox, 1999). Harvesting green, however, may not provide full returns from a given stand of crops, as additional seeds (including late tillers) may form and approach maturation after the harvest. For early farmers, who had invested significant labour in a restricted unit of land, it would become important to maximize returns from that land (as noted by Hillman and Davies, 1990: 69; Bar-Yosef, 1998). This may encourage multiple episodes of harvest. Later harvestings, whether by plucking or beating, will encounter domesticated genotypes at a higher frequency than earlier harvests. If, as an aspect of random variation, some farming households chose to store the late harvest as seed for sowing the following year, the fields so sown would begin an increase in the domesticated type. Other households, however, might store their earlier harvests for sowing. Therefore, taken at the level of a human community, or on a regional scale, there might be only a very small proportion of sown crop under any selection for the domesticated type. Such a model might therefore account for the significantly longer periods involved in the fixation of non-shattering types in cultivated populations. By contrast, every farmer and every sown population would be under selective pressure to germinate rapidly, leading to seed size increase and loss of germination inhibitors. Similarly, natural selection for dispersal aids such as awns would be uniformly relaxed. Thus, we should expect these ‘semi-domestication’ traits to evolve more rapidly.
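A minimal simulation of this community-level mosaic, under assumed parameter values, is sketched below: each year only a fraction f of households happens to sow seed saved from a late (and therefore selected) harvest, the rest sow effectively unselected seed, and seed corn is pooled across the community.

def simulate(f, s, p0=0.05, years=3000, step=500):
    """Track the non-shattering frequency p when only a fraction f of households
    sow from a late harvest that enriches non-shattering types by factor s."""
    p, trajectory = p0, []
    for year in range(years + 1):
        if year % step == 0:
            trajectory.append(round(p, 3))
        selected = p * (1 + s) / (1 + p * s)  # frequency after one selected harvest
        p = f * selected + (1 - f) * p        # community-wide pooled seed stock
    return trajectory

print(simulate(f=0.1, s=0.05))  # 1 in 10 households select: fixation takes ~1500 years
print(simulate(f=1.0, s=0.05))  # uniform selection would fix it within ~2 centuries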
Domestication as an interdisciplinary study of evolution
Domestication in plants is not one thing, nor has it been one uniform process. While there are recurrent parallels, due to the same selective pressures of cultivation, different domestication traits have evolved at different rates and these have varied markedly across families, such as between cereals and legumes. Further archaeobotanical research will help to pin down the actual rates at which different domesticates evolved, and needs to be expanded to address a larger range of species. The archaeological record also provides insights into what people are doing during this evolutionary process in terms of their technologies and ecological adaptations. Understanding past domestications is an exciting area of interdisciplinary investigation, between archaeologists and plant scientists, which may offer insights relevant to future directions in the evolution of crops under human manipulation.
ACKNOWLEDGEMENTS
These ideas have benefited from discussions with Gordon Hillman, Sue Colledge and David Harris. I must thank Emma Harvey, Ling Qin, Mervyn Jupe, Ruth Pelling and Sue Colledge for assistance in compiling some of the data sets. I thank David Harris, Sue Colledge, Mary Ann Murray and Chris Stevens for their comments on a draft of this paper, and the input of the peer-reviewers. I thank the OECD for financial support to participate in the workshop on ‘Domestication, Super-Domestication and Gigantism’. Funding to pay the Open Access publication charges for this article was provided by the OECD.