Tag: geology

  • Underwater bridge gives clues to ancient human arrival

    Mallorca is the largest of the Balearic Islands and the sixth-largest island in the Mediterranean Sea, but despite its size and location, research suggests that it was among the last Mediterranean islands to be settled by humans. Exactly when people arrived on the island is a subject of much debate, with current estimates placing it at around 4,400 years ago.

    However, an ancient stone bridge in a flooded cave may call that timeline into question. By dating mineral deposits in the cave, scientists have proposed a new window for when humans actually reached the island — at least 1,000 years earlier than previously thought.

    Submerged bridge constructed at least 5600 years ago indicates early human arrival in Mallorca, Spain


  • why it matters even without a formal geological definition

    On 5 March 2024, the International Commission on Stratigraphy (ICS) — the body responsible for defining units of geological time — announced it was rejecting a proposal to formalize the Anthropocene as a geological epoch that represents an interval of overwhelming human impact on the planet. The Subcommission on Quaternary Stratigraphy (SQS) of the ICS had initiated this process in 2009 by setting up an Anthropocene working group (AWG), which we represent. The aim of the AWG was to clarify whether there was sufficient evidence to formalize the Anthropocene, a process that involves identifying a precise starting point in a specific geological layer, or stratum.

    The rejection has prompted much debate, with strong views expressed on both sides. In the past decade or so, however, the term Anthropocene has been adopted widely to describe, analyse and interpret the transformed conditions in which humans now live.

    It’s currently used in four main ways by different groups. First, the Earth-system science community, in which the concept arose, and allied scientific disciplines use it to model, assess and warn of the effects of human activities, including the transgression of environmental ‘planetary boundaries’1. Second, scholars in the humanities and social sciences use it to seek to understand how human impacts eventually came to overwhelm many powerful forces of nature, and what that means to the analysis of history, philosophy, politics, economics, society and culture2. Third, the Anthropocene is inspiring many works in museums and in the arts. And fourth, the public and policymakers, urban planners and others use the concept to understand the human transformation of the climate and biosphere, which is essential to formulating and implementing policies of stewardship, mitigation and adaptation1.

    With a formal geological definition of the Anthropocene now off the table, at least for the moment, we here explore how the concept can be best understood and used with these wider communities in mind. What should the term fundamentally mean, for both specialized and general use?

    Geological origins

    The Anthropocene was initially proposed by atmospheric chemist Paul Crutzen in 2000, at a meeting of the Scientific Committee of the International Geosphere–Biosphere Programme (IGBP), a forum dedicated to discussing processes of global change. Crutzen’s intention was for it to represent a new geological epoch3 consistent with the goals of this community. The purpose was not simply to denote an anthropogenically modified Earth. Geologically important anthropogenic impacts stretch back through the Holocene epoch, the post-ice-age chunk of geological time in which we still formally live, and into the Pleistocene epoch that preceded it. Conditions typical of the Holocene include relatively stable atmospheric and ocean chemistries and climate (especially temperature) and, after around 7,000 years ago, a relatively constant sea level. As proposed by Crutzen, the Anthropocene represents an Earth system that has changed irreversibly from those conditions to a state that is still evolving, for which the name Holocene could no longer be regarded as appropriate.

    Strikingly similar patterns of various environmental markers, such as levels of different greenhouse gases, bear witness to an abrupt transition, approximating to the change from a horizontal to a vertical line on a graph of the extent of the Holocene (see Supplementary information, Fig. S1). Crutzen initially suggested that departure from Holocene conditions began with the start of the Industrial Revolution and increased coal burning in late eighteenth-century Europe4, although he proposed this before the IGBP data extended that far back. Once further data had come in, a mid-twentieth-century onset was more evident4, linked to the concept of the ‘Great Acceleration’ of many socio-economic drivers and Earth-system responses after the Second World War5.

    The transformation this represents has been extensively detailed68. Among its main characteristics are: altered atmospheric chemistry; a warming climate; now-irreversible ice-sheet melting and sea-level rise; accelerated erosion and sedimentation; a proliferation of industrial goods, many made of artificial materials such as plastics; a biosphere transformed through species invasions, domestications and extinctions; and the rapid growth of a ‘technosphere’ of globally interlinked human-devised technological systems9.

    Background to the proposal

    This initial research propelled efforts to pin down the beginning of the Anthropocene, by identifying its start in a geological reference layer known as a global boundary stratotype section and point (GSSP; often called a golden spike). Between 2020 and 2023, 12 research teams formulated proposals for candidate GSSPs and other reference sections in eight distinct geological environments across five continents6.

    After much discussion and formal voting, the AWG chose a level that separates the summer and autumn sediment layers laid down in 1952 at Crawford Lake in Canada. The autumn layer is characterized by a marked upturn in plutonium isotopes, coinciding with the first atmospheric hydrogen-bomb test10. This signal is clearly seen in many of the proposed sites (see ‘Consistent boundary’). Crawford Lake was selected because of its undisturbed, seasonally deposited sediment layers that preserve a precise and continuous chronology, its ease of access for future investigations and its protected status in a conservation area. The annually resolved plutonium data are supported by fly ash, nitrogen-isotope and biological markers. To give a specific date and time, a nominal start that coincides with the first atmospheric hydrogen-bomb detonation (codenamed Ivy Mike) was chosen: 1 November 1952 at 7:15 local time at the site on Enewetak Atoll, part of the Marshall Islands, in the Pacific Ocean (19:15 Greenwich Mean Time on 31 October).

    Consistent boundary. A line chart showing radiation levels between 1840 and 2020 for six lakes across the world. Distinctly elevated levels of radiation are recorded after the first full-scale test of a hydrogen bomb in 1952.

    Source: Ref. 7

    These strata can be precisely correlated around the world — in some places to the nearest year — by a plethora of stratigraphic signals6, enabling a systematic quantitative comparison of processes before and after the time boundary represented by their deposition. The proposal was formally submitted7 by the working group to the SQS on 31 October 2023.

    The Anthropocene’s extent

    The idea behind defining the Anthropocene within the geological timescale was to provide a precise reference point for the integrated study of a wide variety of phenomena as outlined above, placing contemporary changes in a deep-time context. But it is the lived, experienced and observationally recorded phenomena that go beyond geology and produce the intense broader interest in the Anthropocene: a fully legitimate interest, because the original guiding concept of the Anthropocene addresses the conditions of Earth’s habitability.

    During the Anthropocene, Earth’s surface conditions have changed substantially compared with those prevailing throughout most of the Holocene: the planet is now hotter, more contaminated and biologically more degraded. These negative trends are set to intensify and extend further outside the Holocene envelope1. Some of the changes involved are long lasting (such as climate change) and some are irreversible (such as extinctions). They are already exerting pressure on political institutions, legal frameworks and economic relations, all of which are meant to protect human communities and give them meaning.

    The detonation of the first hydrogen bomb, codenamed ‘Ivy Mike’, in 1952 marked the proposed beginning of the Anthropocene epoch. Credit: Bettmann/Getty

    A precise geological definition of the year, day and hour is often not so relevant when the Anthropocene is discussed in these wider contexts. We note also that modest changes in formal boundaries of older geological time units do not generally result in a difference in how they are fundamentally understood. For instance, in 2008, the definition of the Holocene was changed by a different SQS working group from beginning at 10,000 radiocarbon years before present to a formally, stratigraphically defined 11,700 years before present (taken as 2000)11, without changing its fundamental meaning as the most-recent post-glacial interglacial phase.

    The definition of the Quaternary period is also informative. This unit encompasses the Pleistocene and Holocene epochs, and was set in 2009 to begin at around 2.6 million years ago, using for practical purposes a pre-existing GSSP and a major reversal of Earth’s magnetic field. Intensification of Northern Hemisphere glaciation had in fact begun slightly earlier, at around 2.7 million years ago, but this does not change the period’s general meaning as representing the commonly considered ‘ice age’12. Other such examples can be found for older time periods. It isn’t the precise boundary that controls the concept of geological time units, but the fundamental characteristics of the periods that they bound. Nevertheless, increasing the precision of their boundaries makes geological time units more consistently useful.

    We argue here that an understanding of the Anthropocene as the result of a mid-twentieth-century planetary transformation remains broadly useful across disciplines. This period is closely associated with the beginning of the Great Acceleration — a term coined by US historian John McNeill — and its near-synonyms, such as the ‘post-Second World War economic boom’, the ‘Japanese economic miracle’ from 1946 to the 1990s, and Les Trente Glorieuses, a term describing France’s 30 years of uninterrupted economic growth from 1945 to 1975. Many indicators of human impacts — including greenhouse-gas emissions, metal and mineral production, meat consumption and plastic use — show strong upward trends from the middle of the last century (see ‘Turning point’).

    Turning point. Six area charts showing the rise of the global population, carbon emissions, and iron and steel, plastic, waste and meat production between 1800 and 2010. Noticeable rises are recorded after the Second World War.

    Sources: CO2: Our World in Data (go.nature.com/3tab6kt); others: J. Zalasiewicz et al.

    For historians, this post-war period is characterized by a far-reaching transformation of societal values in many parts of the world, including a spread of socialism, communism, liberal democracy, social-welfare programmes and women’s education. These changes were powered by growth in the globalization of industry, trade and commerce in almost all sectors. National and international institutions in both communist and liberal-democratic countries guided these transitions even as these two blocs contended for power. Institutions such as the International Monetary Fund, the World Bank and the precursor of the World Trade Organization were created through international agreements near the end of or shortly after the Second World War. Technological advances also saw an explosion in agricultural food production and contributed to high rates of human population growth globally2,13.

    For researchers in anthropology, political theory, international law and ethics, questions arise about the implications of the human forces that start to dominate the web of life and non-organic processes during this interval. Around the world, people are contending with a transformed Earth system, which different cultures experience, understand and respond to according to their distinct world views. The expanding technosphere necessary to power, feed, house and clothe the growing human population has been accompanied by rising global inequality, with the poorest people having seen only a minuscule rise in real incomes. Neoclassical economics and its assumptions of an unlimited capacity for growth are also challenged by an understanding of an increasingly destabilized Earth system and finite planet2.

    Older boundary levels have been suggested for the Anthropocene’s beginning, but we argue that they do not capture the fundamental step change, measurable across a wide range of metrics, that a mid-twentieth-century transition does. Alternative suggestions include an ‘Orbis spike’ level at around 1610, which corresponds to a dip of around 10 parts per million (p.p.m.) in atmospheric carbon dioxide concentrations14. This dip has been proposed to result from a decline in population and farming, and the consequent regrowth of forests, in the Americas after mass deaths of Indigenous peoples following the arrival of European colonists. But this dip is small and short-lived compared with the increase in CO2 of around 140 p.p.m. over the past two centuries, which is set to endure. And stratigraphic signals related to a ‘Columbian exchange’ in species between the Americas and Europe — such as the presence of maize (corn) pollen — occur at distinct times in different places over several centuries. They do not capture an abrupt, fundamental transition globally on a par with that seen in the mid-twentieth century.

    The production of plastics and other waste has increased hugely since the 1950s. Credit: Jason Swain/Getty

    Similar objections can be raised against other boundary suggestions based on stratigraphic signals — for example, lead-smelting signals dating to around 3,000 years ago found in European peat bogs and Greenland ice15. Some proposed ‘Anthropocenes’ extend yet further back in time, including an ‘Anthropocene event’ that includes all major preserved human impacts at least as far back as 50,000 years ago — a definition that would encompass the Parthenon in Ancient Greece, the Great Wall of China, the pyramids of Egypt, early deforestation, Mesolithic arrowheads and even the Late Pleistocene megafaunal extinctions16.

    The recognition of a profound planetary transition in the mid-twentieth century would be strengthened by geological formalization. But even recognizing it as a quasi-formal boundary reflects reality17,18 and encourages clear communication in all disciplines in which the term Anthropocene has come to be used as shorthand for overwhelming environmental change. Interpretations that encompass all significant anthropogenic impacts over time differ markedly and, if all are labelled as Anthropocene, risk avoidable confusion of meaning.

    What the Anthropocene is and isn’t

    Beyond discussions about when it can most usefully be considered to have begun, the Anthropocene has been interpreted in many ways by the various disciplines in which it has circulated. Questions commonly reflect increasingly divergent perspectives, and diminishing mutual understandings, in our still strongly siloed academic landscape. These differences need to be explored and, when necessary, challenged.

    Does the Anthropocene disregard sociopolitical inequalities? In coining and using the word Anthropocene, Earth-system scientists and geologists are said by some to be assigning blame equally to all humans, rather than just to those whose disproportionate consumption of resources is mainly behind the altered (and still changing) planetary state.

    This misconception has arisen because the aims and procedures of Anthropocene physical science differ from those of the humanities and social sciences. The physical sciences are here concerned mostly with measuring and describing Earth’s responses to impacts that are currently overwhelmingly anthropogenic. Researchers are not typically interested in ascribing responsibility to particular people or to specific social, economic and political systems — although a strongly unequal responsibility for anthropogenic change has been noted ever since the concept’s introduction2 and some studies5 include such correlations. The physical sciences also rarely explore the resulting social, economic and political responses or the values that underlie people’s desires and hopes.

    In approaches to the Anthropocene, there is thus a division, or spectrum, of disciplinary labour. Physical scientists study Earth’s responses to human impacts during the Anthropocene, whereas social scientists and humanities scholars explore the people and societies behind those impacts. For most scholars in the humanities and social sciences, inequality is central to sociopolitical analyses of the Anthropocene. There is no reason for these approaches to be in opposition; the Anthropocene as understood here provides a framework that implies complementarity and multidisciplinarity.

    Does the Anthropocene equate to climate change? Rapid, recent climate change caused by rising atmospheric greenhouse-gas levels poses a clear threat to human societies. Despite efforts to control emissions, more than 100 million tonnes of CO2 are added to Earth’s atmosphere daily. Although climate change is now the most important force destabilizing the Earth system, the Anthropocene includes many other physical, chemical and biological transformations, interlinked with global economic, political, social and technological phenomena.
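
As a rough plausibility check on the daily figure: widely cited estimates put global fossil-fuel CO2 emissions at roughly 37 gigatonnes per year (an assumed figure for illustration, not taken from this article), which works out to just over 100 million tonnes per day.

```python
# Rough consistency check for "more than 100 million tonnes of CO2 daily".
# The ~37 Gt/yr figure for global fossil-fuel CO2 emissions (circa 2022)
# is an assumption based on commonly cited estimates, not from this article.
annual_emissions_gt = 37.0  # gigatonnes of CO2 per year (assumed)
daily_emissions_mt = annual_emissions_gt * 1_000 / 365  # megatonnes per day

print(f"{daily_emissions_mt:.0f} Mt CO2 per day")  # roughly 101 Mt/day
```

At that rate, the "more than 100 million tonnes daily" statement holds comfortably.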

    Expanding activities such as mining make their mark on the planet. Credit: Anton Petrus/Getty

    When Crutzen introduced the term in 2000, atmospheric CO2 levels were ‘only’ around 370 p.p.m., or about 85 p.p.m. above pre-industrial maximum concentrations. Average global temperatures were some 0.5 °C above pre-industrial levels (taken as the average from 1850 to 1900), and so still within the envelope of conditions reached at other times during the Holocene. In 2000, warming might have been said to be incipient: but even then, the total changes to the Earth system justified Crutzen’s proposal of a new epoch. By 2022, atmospheric CO2 levels were nearly 420 p.p.m., with an average temperature of 1.5 °C above pre-industrial values. Factoring in the effects of other greenhouse gases, notably methane, nitrous oxide and chlorofluorocarbons, brings the CO2 equivalent to around 523 p.p.m. in 2022, a level perhaps not seen since the mid-Miocene epoch, some 17 million years ago. Not surprisingly, then, Earth overall is now hotter than at any time in the Holocene. Meanwhile, biodiversity loss and the increasing homogenization of the planet’s once-distinct biogeographical assemblages make up another key aspect of the Anthropocene19. Climate change is an important component of the Anthropocene, but it does not define it.
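
A CO2-equivalent concentration of this kind can be sketched using the commonly used simplified expression for CO2 radiative forcing, RF = 5.35 ln(C/C0). The pre-industrial baseline of 278 p.p.m. and the total forcing of about 3.4 W m^-2 used below are illustrative assumptions, not values given in the article.

```python
import math

# Simplified CO2 radiative-forcing expression: RF = 5.35 * ln(C / C0).
# Inverting it converts a total greenhouse-gas forcing into a single
# CO2-equivalent concentration. Input numbers are illustrative assumptions.
C0 = 278.0           # assumed pre-industrial CO2 concentration (p.p.m.)
total_forcing = 3.4  # assumed total forcing from all gases (W m^-2)

co2_equivalent = C0 * math.exp(total_forcing / 5.35)
print(f"{co2_equivalent:.0f} p.p.m. CO2-equivalent")  # about 525 p.p.m.
```

Under these assumed inputs the result lands close to the roughly 523 p.p.m. figure quoted above.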

    Did the Anthropocene begin when its causes did? The boundaries of geological epochs are not generally taken at the beginnings of planetary transitions, but at points at which they can be readily recognized and practically used. Many evolving developments, activities and ideas ultimately led to the transformation of the Earth system in the mid-twentieth century. They reach back to the emergence of Homo sapiens and the mastery of fire and complex communication skills, across the development of animal domestication, agriculture, urban societies, writing systems, globalized trade, the steam engine, capitalism, the Haber–Bosch process for fertilizer production and so on. The causes of the Anthropocene necessarily precede the start of the epoch. By analogy, the formal definition of the Holocene at 11,700 years ago comes towards the end of a long, complex, stepped pattern of warming and sea-level rise caused by deglaciation that had started around 8,000 years earlier10. Subsequently, the Holocene, operating as an interglacial interval not greatly different from previous ones, provided the physical circumstances for civilization to develop, conditions that the Anthropocene is now overriding.

    A common sense

    The Anthropocene was originally understood by Crutzen as not only representing humanity’s influence on Earth’s geological record (he was well aware of earlier anthropogenic impacts), but also reflecting a system with physical characteristics that had, since widespread industrialization, departed from the prolonged, relatively stable conditions of the Holocene.

    An Anthropocene concept anchored to begin in the mid-twentieth century is aligned with both the Great Acceleration and a fundamental shift in Earth’s state. Understanding the Anthropocene in this way would prevent the current confusion of the term meaning different things in different contexts. It complies with the term’s originally intended meaning, and also reflects a clear evidence-based geological signature20. The concept is congruent with the term’s use in Earth-system science21 and more widely, such as by new and emerging institutions, including the Center for Anthropocene Studies at the Korea Advanced Institute of Science & Technology, Daejeon, South Korea, the Centre of Excellence for Anthropocene History at Stockholm’s KTH Royal Institute of Technology and the Max Planck Institute of Geoanthropology in Jena, Germany. It highlights geology’s role in addressing problems of societal concern and is also applicable in the social sciences and humanities with respect to the enormous societal upheavals, changes in energy production and globalization of trade that have taken place. Policy and international law will also benefit from an unambiguous definition, putting beyond doubt that we are now in a time of transformed planetary functioning wrought by overwhelming human impacts.


  • When did life on Earth begin? Surprisingly early in our planet’s history

    The oldest known piece of Earth? A 4.4 billion-year-old zircon

    John Valley, University of Wisconsin-Madison

    Until recently, many discounted the idea that life could have existed on Earth before 3.8 billion years ago because it was thought that heavy pummelling from asteroids would have made this impossible. But several lines of evidence are pointing to an earlier origin of life, and as we begin to question whether the late heavy bombardment really happened at all, it’s beginning to look like life started surprisingly early in our planet’s history.

    The earliest fossil evidence – around 3.5 billion years ago

    Although…


  • India’s pioneering mission bolsters idea that Moon’s surface was molten

    The Moon probably originated from material scattered into space when a large impactor struck the newly formed Earth. Credit: David Gannon/AFP via Getty

    India’s Chandrayaan-3 mission has obtained the first measurements of the composition of the soil near the Moon’s south pole1. The minerals found offer further evidence that the lunar surface was entirely molten shortly after the Moon formed.

    Chandrayaan-3’s Vikram lander touched down on 23 August 2023. It released a rover called Pragyan, which collected data ranging from temperature to seismological measurements over 10 days.

    Pragyan also studied the chemical composition of the regolith: the fine material that covers much of the lunar surface. The rover stopped and deployed an instrument called an alpha-particle X-ray spectrometer (APXS) 23 times.

    Santosh Vadawale, an X-ray astronomer at the Physical Research Laboratory in Ahmedabad, India, and his colleagues analysed radiation data collected by the APXS, and used this information to identify the elements in the regolith and their relative abundances, which, in turn, revealed the soil’s mineral composition. The team found that all 23 samples comprised mainly ferroan anorthosite, a rock type that is common on the Moon. The results were reported in Nature today.

    “It’s sort of what we expected to be there based on orbital data, but the ground truth is always really good to get,” says Lindy Elkins-Tanton, a planetary scientist at Arizona State University in Tempe.

    Previous landers obtained similar results. However, the Chandrayaan-3 samples are the first from the subpolar region: previous landers visited equatorial and mid-latitude zones. Together, this suggests that the composition of the regolith is uniform across the Moon’s surface.

    Vadawale says that this is direct confirmation that the lunar surface was a molten magma ocean immediately after it formed. The lunar magma ocean theory was first proposed by two independent groups in 1970, after rock collected during the 1969 Apollo 11 landing was analysed.

    Moon’s origin

    The best model for the origin of the Moon is that the newly formed Earth was struck by a large impactor, called Theia, which vaporized the planet’s surface and blasted a large amount of material into orbit. The scattered material swiftly accreted to form the Moon. This impact theory explains why lunar rocks have an isotope composition similar to those on Earth.

    The material that formed the Moon had a lot of energy, which had to be dissipated. It escaped in the form of heat and, as a result, the young Moon’s surface melted into a magma ocean. Dense mafic rocks, rich in metals such as magnesium, sank into the Moon’s interior. Lighter rocks, including anorthosite, floated to the top, forming highlands similar to those visited by Chandrayaan-3.

    “It gives more support to the lunar magma ocean hypothesis,” says Mahesh Anand, a planetary scientist at the Open University in Milton Keynes, UK.

    Vadawale and his colleagues found that their samples contained elevated levels of magnesium compared with those of calcium. This suggests that deeper mafic material has been mixed into the regolith.

    The researchers attribute this to the events that formed a huge crater called the South Pole–Aitken basin, the rim of which is 350 kilometres from Chandrayaan-3’s landing site. “When such a large impact basin forms, it is supposed to excavate some deeper material,” says Vadawale, because the impactor drives deep into the crust. This deeper, magnesium-rich material would have been scattered over a huge area, slightly altering the make-up of the regolith Pragyan sampled.

    But one problem with that idea is that the South Pole–Aitken basin seems to be dominated by a mineral called pyroxene, which doesn’t quite fit Pragyan’s data, says Anand. Resolving this will probably require samples to be brought back to Earth, he says.

    The next Chandrayaan mission, which is in an early phase of development, intends to do just that.

    “To me, it’s a story about the success of the Indian space programme,” says Elkins-Tanton.


  • These labs have prepared for a big earthquake — will it be enough?

    Earlier this month, Japan’s Meteorological Agency issued its first-ever ‘megaquake’ alert, advising that the risk of a large earthquake along the Pacific coast was higher than usual. The warning came after an earthquake with a magnitude of 7.1 on 8 August.

    The agency lifted the warning a week later, after no major change in seismic activity was detected. But the alert was another reminder for scientists who live in Japan and other seismic zones of the constant threat that an earthquake could disrupt — or even destroy — their research. So how do they safeguard their laboratories? Nature spoke to seven researchers about their preparations and whether those are enough.

    Securing equipment

    When the Tōhoku earthquake and tsunami hit in March 2011, Masahiro Terada, an organic chemist at Tohoku University in Sendai, found broken glass scattered across his lab, 400-kilogram fume hoods shifted metres from their usual positions and water from broken pipes flooding the space. The smell of organic solvents filled the lab and a fire had broken out in the reagent storage room. Terada lost ten years’ worth of synthesized compounds.

    These days, Terada anchors large furniture and equipment directly to the concrete wall and stores reagents in cushioned mesh containers.

    Each year, biochemist Hideki Tatsukawa is securing more and more of his lab’s equipment at Nagoya University, under the institute’s guidance. The university is located in a region that has a more than 70% likelihood of a severe earthquake in the next 30 years, according to the Japanese government. Tatsukawa anchors any equipment taller than one metre, such as refrigerators, with vertical bands to the floor to prevent them from toppling or jumping during a quake.
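
Forecasts of this kind are often easier to compare when expressed as an annual probability. A minimal sketch, assuming (purely for illustration) that each year carries an independent, constant risk:

```python
# Convert a multi-year earthquake probability to an annual one, assuming
# (for illustration only) an independent, constant yearly risk:
#   P_T = 1 - (1 - p_annual)**T   =>   p_annual = 1 - (1 - P_T)**(1 / T)
P_30 = 0.70  # 30-year probability, as cited for the Nagoya region
T = 30       # forecast horizon in years

p_annual = 1 - (1 - P_30) ** (1 / T)
print(f"annual probability ~ {p_annual:.1%}")  # about 3.9% per year
```

Under this simple model, a 70% chance over 30 years corresponds to roughly a 4% chance in any given year; real seismic-hazard models are more elaborate, but the order of magnitude is instructive.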

    Tying down equipment is crucial for saving lives and preventing secondary disasters, such as broken gas pipes or exposed electrical wiring that could spark a fire, says Koji Fukuoka, a risk-management researcher formerly at Kyushu University in Fukuoka, Japan. Fires only take two minutes to reach the ceiling in most Japanese buildings, he says, so “removing potential causes of fire needs to be one of the top priorities in a lab setting”. Fukuoka recommends that labs have two evacuation routes in case one of them becomes compromised.

    Damage to equipment during earthquakes can also result in considerable financial losses. During the 2011 quake, damage to research instruments cost Tohoku University 26.9 billion yen (US$180 million). In the wake of that earthquake, the university established a Disaster Management Promotion Office, which issues technical guidelines on how to secure equipment depending on which floor of the building it is on. For instance, nuclear magnetic resonance (NMR) spectroscopy instruments should be installed on the ground floor and on top of a base isolation stand, which decouples the equipment from the floor so that it moves independently of the shaking ground. NMR instruments can explode if damaged, because the liquid helium they contain rapidly turns to gas, which can also deplete rooms of oxygen.

    “But, to our knowledge, these learnings haven’t been shared across universities systematically,” says Takeshi Sato, a disaster-prevention scientist at Tohoku University. Fukuoka also notes that, without expert advice and dissemination of knowledge, each lab’s precautions might not be enough in the event of very strong shaking.

    Backing up samples

    One of the main concerns for Kentaro Noma, a neurobiologist at Nagoya University, is losing the more than 600 unique strains of nematode worm (Caenorhabditis elegans) that he has produced over the course of his career to study the relationship between genetics and the ageing of neurons. “Losing the strains not only compromises my own work, but research reproducibility for the wider scientific community,” he says.

    In addition to the stocks that Noma currently uses for his research, he maintains two backup collections: one in a freezer cooled to −80 °C kept in his lab and another stored in liquid nitrogen, also in the lab. The freezer has a backup power generator that runs on gasoline; the collection stored in liquid nitrogen serves as an extra safeguard in case of an extreme disaster, when there is no access to fuel. “It’s not perfect, but the liquid-nitrogen freezer buys us an extra 1–2 weeks to devise longer-term measures,” he says.

    Tatsukawa, who studies the functions of proteins in model organisms, preserves genetically engineered lines of mice and medaka fish (Oryzias latipes) by extracting sperm, mixing the samples with a preservation solution and freezing them in liquid nitrogen. The cryogenically preserved samples can be thawed, and female animals can be artificially inseminated to restart the line.

Similar precautions are being taken by scientists at the University of California in the San Francisco Bay Area, which sits directly on top of the Hayward Fault. There is a more than 30% chance of an earthquake with a magnitude of 6.7 or higher occurring on the fault by 2043.

    Dirk Hockemeyer, a cell biologist at the University of California, Berkeley, also cryogenically preserves his stem-cell lines in liquid nitrogen, a standard procedure in his field. He has more than 25,000 vials of cell lines produced by the 50 researchers that have worked in his lab over the past 10 years. As a preventative measure, Hockemeyer keeps duplicates of valuable cell lines in liquid nitrogen in different buildings in case one collapses.

    Research animals

    For scientists who work with animals, there are many factors to consider in earthquake preparation. In Japan, facilities with primates typically have two-tiered walls so that if one layer is destroyed, the other keeps the animals contained, says Ikuma Adachi, a primatologist at Kyoto University in Inuyama. Kyoto University’s Center for Human Evolution Modeling Research houses 11 chimpanzees (Pan troglodytes) and 800 macaques (Macaca sp.). “Primates are very sensitive to changes in the environment and will become anxious during disasters,” he says. Securing water for them to drink and maintaining hygienic conditions for the animals to live in is also crucial, says Adachi.

    “The best we can do is to prepare measures and protocols in advance so that it guides decision-making during emotionally challenging times,” he says.


  • Dinosaur-killing Chicxulub asteroid formed in Solar System’s outer reaches




    The impact from the Chicxulub asteroid (illustration) caused a mass extinction 66 million years ago.Credit: Illustration by Mark Garlick

    The object that smashed into Earth and kick-started the extinction that wiped out almost all dinosaurs 66 million years ago was an asteroid that originally formed beyond the orbit of Jupiter, according to geochemical evidence from the impact site in Chicxulub, Mexico.

    The findings, published on 15 August in Science1, suggest that the mass extinction was the result of a train of events that began during the birth of the Solar System. Scientists had long suspected that the Chicxulub impactor, as it is known, was an asteroid from the outer Solar System, and these observations bolster the case.

    The Cretaceous/Palaeogene (K/Pg) extinction was the fifth in a series of mass extinctions that have occurred during the past 540 million years or so: the period in which animals have spread around Earth. The event wiped out more than 60% of species, including all non-avian dinosaurs.

    Since 1980, evidence has accumulated that the extinction was caused by a city-sized object hitting Earth. Such an impact would have thrown huge volumes of sulfur, dust and soot into the air, partially blocking out the Sun and causing temperatures to plummet. A layer of iridium metal, which is rare on Earth but more common in asteroids, was deposited all over the planet around the time the extinction began. And in the 1990s, scientists described2 the impact site, a huge buried crater near Chicxulub on Mexico’s Yucatán Peninsula.

    “We wanted to identify the origin of this impactor,” says Mario Fischer-Gödde, an isotope geochemist at the University of Cologne in Germany. To find out what the object was and where it came from, he and his colleagues obtained samples of K/Pg rocks from three sites, and compared them with rocks from eight other impact sites from the past 3.5 billion years.

    Ruthenium signature

    The team focused on isotopes of ruthenium metal. Ruthenium is extremely rare in Earth rocks, says Fischer-Gödde, so samples of it from an impact site offer “the pure signature” of the impactor. There are seven stable isotopes of ruthenium, and celestial bodies have characteristic blends of them.

    In particular, looking at ruthenium isotopes can help researchers to distinguish between asteroids that formed in the outer Solar System — beyond the orbit of Jupiter — and those with an origin in the inner Solar System. When the Solar System was forming from a molecular cloud around 4.5 billion years ago, temperatures in the inner region were too high for volatile chemicals such as water to condense. As a result, asteroids produced there had low levels of volatiles, and became rich in silicate minerals. Asteroids that formed further out became ‘carbonaceous’, containing lots of carbon and volatile chemicals. Ruthenium isotopes were unevenly distributed in the cloud, and this heterogeneity is preserved in asteroids.

    Fischer-Gödde’s team found that the ruthenium isotopes in the Chicxulub impactor were a good match for a carbonaceous asteroid from the outer Solar System, and did not match siliceous asteroids from the inner Solar System.

    Previous studies have also suggested that the impactor was a carbonaceous asteroid, says Sean Gulick, a geophysicist at the University of Texas at Austin. But the latest work “is a really elegant way to get at some of these same answers and get several of the same answers using one methodology”, he adds.

    Not a comet

    The ruthenium isotopes also provide evidence against another hypothesis: that the Chicxulub impactor was a comet rather than an asteroid. “The idea it was a comet goes back far into the literature,” says William Bottke, a planetary scientist at the Southwest Research Institute in Boulder, Colorado. The hypothesis was revived in a controversial 2021 study3, which argued that the impactor was part of a long-period comet that had broken up under the Sun’s gravitational pull.

    But Fischer-Gödde says the ruthenium-isotope data do not match a comet. Gulick agrees. He adds that geochemical evidence from the Chicxulub impact site has never been consistent with a comet, and the latest study “does a really good job of kind of nailing that home”.

    Bottke adds that the comet hypothesis also “runs into difficulty” when you consider the dynamics of the Solar System. “Sizeable carbonaceous asteroids are much more probable to hit the Earth than comets,” he says. In a 2021 study, he and his colleagues argued that the impactor probably came from the main asteroid belt, between Mars and Jupiter.

    Most of the other impactors that Fischer-Gödde’s team studied seem to have formed in the inner Solar System, according to their ruthenium isotopes. The only exceptions were the oldest ones, from between 3.2 billion and 3.5 billion years ago, which look more like the Chicxulub impactor. It could be that “something interesting was happening in the asteroid belt at that time, such as a large asteroid break-up in a good place to deliver objects to Earth”, says Bottke.


  • Record-breaking drill core reaches 1.2 kilometres into Earth’s mantle



A sample of rock from Earth’s mantle viewed under a microscope. Credit: Johan Lissenberg

    In the middle of the North Atlantic Ocean, geologists have burrowed 1268 metres below the seafloor – the deepest hole drilled into Earth’s mantle yet. Analysis of the resulting rock core offers fresh clues about the evolution of our planet’s outermost layers, and perhaps even the origins of life.

    Earth is broadly made up of a few different layers, including a solid outer crust, an upper and lower mantle and a core. The upper mantle, which sits just below the crust, is composed primarily of a magnesium-rich rock called peridotite. This layer drives key planetary processes such as earthquakes, the water cycle and the formation of volcanoes and mountains.

    “To date, we’ve only had access to fragments of the mantle,” says Johan Lissenberg at Cardiff University, UK. “But there are a number of places where the mantle is exposed on the seafloor.”

One of these areas is an underwater mountain called Atlantis Massif, located near a volcanically active region of the mid-Atlantic ridge. Continuously surfacing and melting parts of the mantle give rise to many of the volcanoes in the area. Meanwhile, as seawater seeps deeper into the mantle, the higher temperatures at depth heat it up and produce chemical compounds such as methane, which bubble back up through hydrothermal vents and provide fuel for microbial life.

    “There’s a kind of chemical kitchen in the subsurface of Atlantis Massif,” says Lissenberg.

To learn more about this dynamic region, he and his colleagues initially planned to bore 200 metres into the mantle with the drilling ship JOIDES Resolution, deeper than researchers had ever managed before.

    “Then we started drilling and things went amazingly well,” says team member Andrew McCaig at the University of Leeds, UK. “We recovered really long sections of continuous rocks and decided to stick with it and go as deep as we could.”

    Eventually, the team managed to dig 1268 metres down into the mantle.

    Upon analysing the drill core sample, the researchers found that it had much lower levels of a mineral called pyroxene compared with other mantle samples collected from around the world. That suggests this particular section of the mantle has undergone significant melting in the past, which has depleted the pyroxene, says Lissenberg.

    In the future, he hopes to reconstruct this melting process, which could help us understand how the mantle melts and how that molten rock migrates to the surface to feed oceanic volcanoes.

    Some scientists think life on Earth began in the depths of the ocean near hydrothermal vents. So, by examining the chemicals that appear along the cylindrical rock core, microbiologists are hoping to determine the conditions that may have led to life and how deep beneath the ocean floor they occurred.

    “It’s a very important drill hole because it’s going to be a reference section for scientists from many branches of science,” says McCaig.

    “A one-dimensional sample of the Earth cannot provide full information on the three-dimensional migration pathways of melt and water, but is nevertheless a major achievement,” says John Wheeler at the University of Liverpool, UK.



  • Deepest-ever samples of rock from Earth’s mantle unveiled



A sample of mantle rock viewed under a microscope. Credit: Johan Lissenberg

    A record-breaking expedition to drill into rocks at the bottom of the Atlantic Ocean has given scientists their best glimpse yet of what the Earth might look like underneath its crust.

    Researchers extracted an almost uninterrupted 1,268-metre long sample of green-marble-like rock from a region where Earth’s mantle — the thick, interior layer that makes up more than 80% of the planet’s bulk — has pushed up through the sea floor (see ‘Deep-sea drilling’). The samples, described on 8 August in Science1, offer unprecedented insights into processes that lead to the crust’s formation.

    “We had that story in our head” about what this kind of rock should look like, but it’s completely different when “you see it there on a table”, says Natsue Abe, a petrologist at the Japan Agency for Marine-Earth Science and Technology in Yokohama.

    The expedition’s achievements are a “fantastic landmark”, says Rosalind Coggon, a marine geologist at the University of Southampton, UK. “Ocean drilling provides the only access to samples of Earth’s deep interior that are key to understanding our planet’s formation and evolution.”

    Geoscientists worry that it will be a long time until they can follow up with more studies, because the decade-long International Ocean Discovery Program (IODP) is coming to an end, and the United States is retiring its workhorse research ship, JOIDES Resolution.


Researchers examine rock cores retrieved during an ocean-drilling expedition. Credit: Erick Bravo, IODP JRSO (CC BY 4.0)

    Oceanic crust — the type of crust found mainly underneath Earth’s seas, rather than its continents — is mostly made up of dense, volcanic rock called basalt. It is much thinner and younger than continental crust, because the rocks are recycled continually by the movements of tectonic plates.

    Basalt forms when magma pushes up through undersea cracks along formations called mid-oceanic ridges. The magma itself originates from a process called partial melting in the mantle — which is largely made up of translucent-green, magnesium-rich minerals. As material in the mantle rises, the pressure over it drops, which causes some of these minerals to melt and form microscopic films of magma between rock crystals.

    Usually, only magma erupts onto the sea floor. But at some sites, mantle rock also makes it to the surface, where it interacts with sea water in a reaction called serpentinization. This alters the rock’s structure — giving it a marble-like appearance — and releases various substances, including hydrogen.

    Easy to drill

    In May 2023, JOIDES Resolution was visiting a site where this has happened: an undersea mountain called the Atlantis Massif, located just west of the Atlantic’s mid-ocean ridge. The 143-metre-long ship is equipped with a 62-metre-tall crane for undersea drilling.

    The researchers on board chose to drill at Lost City, a site on the southern side of the massif. The region is punctuated with hydrothermal vents where extremophile microorganisms feed on the hydrogen that seeps out.

    “We had only planned to drill for 200 metres, because that was the deepest people had ever managed to drill in mantle rock,” says Johan Lissenberg, a petrologist at Cardiff University, UK. But the drilling was surprisingly easy and three times faster than usual, returning long, unbroken cylinders of rock called cores. “So, we just decided to keep going,” says Lissenberg. The team stopped only when the expedition was coming to its scheduled end.

The researchers have now published their initial findings. “What we report is literally what you can do on the ship,” says Lissenberg. “A team of 30 scientists poring over the cores 24 hours a day for two months, and logging centimetre by centimetre as it’s coming up.”

    Deep-sea drilling: Diagram showing how researchers on a ship drilled into rock that originated in the Earth's mantle.

    When the scientists examined the structure of the rock in detail, they observed ‘oblique features’, a telltale signature of the prevailing theory of how magma separates from the mantle to become part of the crust, says Lissenberg. The mantle rock was also interspersed with other types of rock in the cores, suggesting that the mantle–crust boundary is not as sharp as seismographic data normally suggest, says Jessica Warren, a geochemist at the University of Delaware in Newark. Together, these results “are key to how we understand the formation of tectonic plates in the oceans”, she says.

    Uncertain future

The trip capped a worthy four-decade career for the JOIDES Resolution, which the US National Science Foundation (NSF) had been renting from a private company. But the NSF has announced that it can no longer afford the US$72 million per year that it costs to run the ship once its IODP obligations are fulfilled, and that the programme will be discontinued. This leaves some scientists, especially those at early career stages, uncertain about the future of the field, says Aled Evans, a marine geologist at the University of Southampton.

    One remaining ‘grand challenge’ for geoscientists is to drill through the basaltic layer and across the boundary between crust and mantle — called the Mohorovičić discontinuity or ‘Moho’. This would allow them to access pristine mantle rock that hasn’t reacted with seawater. “We haven’t drilled into the real mantle yet,” says Abe. The unexpectedly smooth drilling at Lost City bodes well for those future attempts, which could be carried out by Japan’s research ship Chikyū, she adds. “Mantle rocks are the most common part of our entire planet,” says Evans. “Sampling them would tell us something fundamental about what our planet is made of.”


  • Coevolution of craton margins and interiors during continental break-up


    Mapping the great escarpments

We model escarpment features using digital elevation models (DEMs) from the SRTM void-filled and 15-arcsec GMTED2010 datasets of the United States Geological Survey (USGS). In the ESRI ArcMap 10.7.1 geodatabase, DEMs are mosaicked to produce composite raster DEM datasets, which we then use to map escarpments (in the World Geodetic System 1984 (WGS84) geographic coordinate system), using the Spatial Analyst toolset (slope, aspect and curvature). We then generate 100-m contours for the DEMs using the Spatial Analyst contour tool.

    Triangulated Irregular Network (TIN) surfaces are generated using the Create TIN (3D Analyst) tool from the 100-m contour polyline datasets. TIN generation uses the Delaunay method of triangulation and WGS84 Transverse Mercator projected coordinate system. TIN surfaces are composed of mass points (TIN nodes), hulls and breaklines (hard and soft). The hard breaklines generated across TIN surfaces isolate breaks in slope that characterize escarpments. Isolation of hard breaklines (export as a separate polyline layer) is carried out using the TIN Line (3D Analyst) tool. We then use manual editing to isolate those breaklines that represent escarpments by alternating between slope, aspect, DEM and curvature base layers.
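The Spatial Analyst slope, aspect and curvature rasters used above all derive from finite-difference surface derivatives of the DEM. As an illustration only (the toy DEM, its ramp geometry and the 30-m cell spacing are invented here, and the published workflow uses ArcMap, not Python), the slope calculation reduces to:

```python
import numpy as np

# toy 5 x 5 DEM (metres): a uniform ramp rising 100 m per cell;
# the real workflow mosaics USGS DEM tiles in ArcMap instead
dem = np.outer(np.arange(5), np.ones(5)) * 100.0
cell = 30.0  # assumed cell spacing in metres

# slope angle from finite-difference surface derivatives
# (aspect and curvature follow from the same derivatives)
dy, dx = np.gradient(dem, cell)
slope = np.degrees(np.arctan(np.hypot(dx, dy)))
# slope[2, 2] ≈ 73.3 degrees: a steep, escarpment-like face
```

Contiguous cells of high slope in such a raster are what the TIN hard breaklines described below pick out as candidate escarpments.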

    Characterizing escarpment orientations

We next compare the orientation of discrete segments of escarpments and their associated COB (Extended Data Fig. 2). The mapped distributions of COBs are well established and described in key syntheses of plate-tectonic data59,60. However, we expect COBs to have some associated uncertainties, largely because these are zones rather than precise linear boundaries. For context, the global mean half-width of the COB ‘transition zone’ is approximately 90 km (ref. 61). As such, our distance analyses (Fig. 1g) described below are expected to carry uncertainties of ±90 km. In our analysis, we do not explicitly account for uncertainties in estimated distance to a given rift section, because these uncertainties are expected to be spatially correlated (that is, the same uncertainty value would apply to all points along a specific escarpment). COBs are exported from the open-source plate-tectonic software GPlates59,60 (https://www.gplates.org/). To enable this comparison, it is necessary to generate shapefiles for the escarpments with roughly equivalent complexity (degree of cartographic generalization) to the associated COBs. To achieve this, we use the open-source geographic information system applications QGIS (v3.16; https://www.qgis.org/) and GRASS (https://grass.osgeo.org/). Calculations are performed in the statistical computing package R (ref. 62; https://www.r-project.org/), using libraries sf (Simple Features)63, geosphere64, lwgeom and nngeo65.

    We use a buffer to find the midline, or skeleton (simplified escarpment), in GRASS. This is done by: (1) converting each escarpment from WGS84 (EPSG:4326) to an appropriate projected coordinate system with units of metres, not degrees; (2) generating a (merged) 50/100/500-km buffer around each escarpment; (3) reducing by buffering again by −45/−99/−495 km; (4) applying the v.voronoi.skeleton function to compute the midline. These simplified shapefiles are then read into the R package to estimate the difference in tangent between the escarpment and continent boundary using the following procedure.

We read in these simplified escarpment and COB shapefiles and define points (p(i)) every 10 or 50 km along each escarpment line. At each point, p(i), we then find the tangent, Tp(i), using the points on either side (±10 or ±50 km) of the point of interest (Extended Data Fig. 2). We approximate the tangent by taking the line between p(i − 1) and p(i + 1). We then calculate the perpendicular to the tangent, L(i), and define the closest point of intersection, x(i), of line L(i) with the COB. Next, we calculate the tangent of the COB at x(i), Tx(i). We approximate Tx(i) by generating a 10-km or 50-km buffer around the point x(i), finding the points at which the COB enters and leaves this buffer, and taking the line between these intersection points to estimate the angle. Finally, we calculate the difference (in degrees) between the tangents Tp(i) and Tx(i) and then the distance between points p(i) and x(i) (Fig. 1g–h, Extended Data Fig. 3).
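The geometry of this procedure can be sketched compactly. The following NumPy toy example (all coordinates and spacings invented; the actual analysis runs in R with sf/geosphere, and uses a true perpendicular intersection rather than the nearest-node shortcut taken here) approximates Tp(i) and Tx(i) by neighbouring-point chords and reports the angular difference and separation:

```python
import numpy as np

def tangent_deg(line, i):
    """Tangent direction at vertex i, approximated by the chord
    between its neighbours p(i-1) and p(i+1), in degrees."""
    d = line[i + 1] - line[i - 1]
    return np.degrees(np.arctan2(d[1], d[0]))

# toy geometries sampled every 10 km (coordinates in metres):
# a straight E-W escarpment and a COB dipping gently north-eastwards
esc = np.array([[x, 0.0] for x in np.arange(0, 60_000, 10_000)])
cob = np.array([[x, 90_000 + 0.2 * x] for x in np.arange(0, 60_000, 10_000)])

i = 2
Tp = tangent_deg(esc, i)                                  # escarpment tangent at p(i)
j = int(np.argmin(np.linalg.norm(cob - esc[i], axis=1)))  # closest COB node, stands in for x(i)
Tx = tangent_deg(cob, np.clip(j, 1, len(cob) - 2))        # COB tangent at x(i)
angle_diff = abs(Tp - Tx) % 180                           # difference between tangents, degrees
distance = np.linalg.norm(esc[i] - cob[j])                # separation p(i)-x(i), metres
```

For this toy configuration the escarpment tangent is 0° and the COB tangent about 11.3°, so the two features are close to parallel, which is exactly the kind of signal the real analysis quantifies.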

    Analysing distances between escarpments and COBs

    The escarpment polylines are exported from ArcGIS as shapefiles, matching the format of the COB files. We analyse the first-order spatial and topological attributes of both features using the R package, specifically, the sf63, lwgeom, nngeo65 and geosphere64 packages.

    Nodes are defined at a spacing of every 10 km along both escarpment and COB polylines. The original coordinate reference system, GCS WGS84, is converted to appropriate EPSG codes for each region, allowing us to calculate distances in metres. Escarpment shapefiles are subsequently cleaned and converted to spatial features objects. Bounding boxes (buffer/crop) are defined for COB files, increasing the efficiency of searching for the closest point between escarpment and COB nodes. After cropping is complete, COB shapefiles are converted to spatial features objects. We use the dist2Line function to calculate the shortest distance in metres between escarpments and COBs (Fig. 1g and Extended Data Fig. 3a–c), with mean distances calculated using the ddply package66. Standard deviations are calculated for each plot using the sd function in the stats package in R.
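The dist2Line step computes geodesic distances; once coordinates are projected to metres, as described above, the same logic reduces to point-to-segment tests. A planar sketch in Python (toy coordinates, not real COB data, and no substitute for the geosphere implementation):

```python
import numpy as np

def point_segment_dist(p, a, b):
    """Shortest distance from point p to the segment ab (planar, metres)."""
    ab, ap = b - a, p - a
    t = np.clip(np.dot(ap, ab) / np.dot(ab, ab), 0.0, 1.0)
    return np.linalg.norm(p - (a + t * ab))

def dist_to_polyline(p, line):
    """Minimum distance from p to a polyline given as an (n, 2) array."""
    return min(point_segment_dist(p, line[k], line[k + 1])
               for k in range(len(line) - 1))

# toy COB polyline and a single escarpment node (metres)
cob = np.array([[0.0, 0.0], [100_000.0, 0.0], [200_000.0, 50_000.0]])
node = np.array([50_000.0, 30_000.0])
d = dist_to_polyline(node, cob)  # → 30000.0 (node sits 30 km off the first segment)
```

Evaluating this for every 10-km node and averaging gives the per-escarpment mean distances of the kind reported in Fig. 1g.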

    Lithospheric-thickness analysis

We analyse lithospheric-thickness distributions for the escarpments using two different global reference models, LITHO1.0 (ref. 41) and LithoRef18 (ref. 42). We perform surface interpolation from the vector points map by splines (0.1° cell size) using the GRASS function v.surf.rst. Next, we generate regular points along the length of escarpment shapefiles at 1.0° (approximately 110-km) intervals, using a QGIS vector geometry tool (points along a geometry). This chosen resolution is broadly commensurate with the resolution of the global models41,42. Using these specified sampling points, we then use the Point Sampling Tool in QGIS to obtain lithospheric-thickness values at each point from the interpolated raster map, and visualize the results for each escarpment using the boxplot function in Matlab R2021b (https://www.mathworks.com/products/matlab.html) (Fig. 1g). The ranges of lithospheric-thickness estimates for each escarpment from both of the above models are given in Extended Data Fig. 3g. It must be noted that LITHO1.0 returns what we suggest to be high estimates, being on average 10–15% higher than the corresponding LithoRef18 values (Extended Data Fig. 3g). This discrepancy probably arises because the density structure and geometry of boundaries in LITHO1.0 are not optimized to satisfy field data and lithospheric-thickness proxies, which may result in overestimation of the LAB depth beneath thick continental lithosphere41. Therefore, although we provide both measures for completeness, we consider the LithoRef18 values to give a more accurate picture of lithospheric thickness beneath the escarpments.
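The raster-sampling step amounts to looking up grid values at regularly spaced points. A minimal Python sketch, with an entirely hypothetical 0.1° thickness raster and nearest-cell lookup standing in for the QGIS Point Sampling Tool (the real workflow samples the spline-interpolated GRASS surface):

```python
import numpy as np

# hypothetical 0.1-degree lithospheric-thickness raster (km); values invented
lon0, lat0, cell = 10.0, -35.0, 0.1
grid = np.full((50, 50), 180.0)
grid[:, 25:] = 150.0              # thinner lithosphere east of lon 12.5

def sample(lon, lat):
    """Nearest-cell lookup at a given longitude/latitude."""
    row = int(round((lat - lat0) / cell))
    col = int(round((lon - lon0) / cell))
    return grid[row, col]

# sampling points every 1.0 degree along a toy escarpment trace
values = [sample(lon0 + k, lat0 + 1.0) for k in range(4)]
# → [180.0, 180.0, 180.0, 150.0]
```

Boxplots of such per-escarpment value lists are what Fig. 1g summarizes.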

    Thermomechanical models

    We use the finite element code ASPECT67,68,69 to compute the dynamic evolution of lithosphere and asthenosphere over a 100-Myr period. This geodynamic software solves the conservation equations of momentum, mass and energy for materials undergoing viscoplastic deformation70. We thereby use experimentally derived flow laws that account for temperature, pressure and strain-rate-dependent rheologies (Extended Data Table 1). The models are driven kinematically by prescribing velocity boundary conditions at lateral sides. The simulations generate a narrow rift that migrates laterally71, leading to a delay of lithospheric break-up. In agreement with previous work, pressure gradients beneath the rift induce pronounced rotational flow patterns72 within the asthenosphere. This flow destabilizes the base of the thermal lithosphere adjacent to the plate boundary, forming Rayleigh–Taylor instabilities that evolve self-consistently by sequential destabilization. Next, we describe the geometric and thermomechanical setup, along with the model limitations.

    The domain of our reference model (Fig. 2) is 2,000 km wide and 300 km deep and consists of 800 and 120 elements in the horizontal and vertical directions, respectively. We chose a vertical model extent of 300 km depth to encompass the low-viscosity asthenospheric layer that is particularly prone to accommodating rapid mantle flow (an extent of 410 km is also tested). Although the lower boundary of this weak layer is not well defined, a model depth of 300 km includes: (1) the region beneath the lithosphere in which seismic anisotropy indicates a high degree of deformation (for example, ref. 73); (2) the depth range at which dislocation creep dominates deformation, leading to particularly low viscosity (for example, Fig. 10c in ref. 74); and (3) the highest depth at which carbonated melts can be expected to further reduce rock viscosity (for example, ref. 75). In our model, the initial distribution of material involves four homogeneous layers: 20-km-thick upper crust, 15-km-thick lower crust, 125-km-thick mantle lithosphere and 140-km-thick asthenosphere. To initiate rifting in a predefined area, we define a weak zone that features a 25-km-thick upper crust and 100-km-thick mantle lithosphere representing typical mobile belt conditions41. These layer thicknesses gradually transition to ambient lithosphere over about 200 km. For visualization purposes, we distinguish a 30-km-thick asthenospheric layer beneath some parts of the lithosphere as a simplified representation of metasomatized mantle.

    The flow laws of each layer represent wet quartzite76, wet anorthite77, dry olivine74 and wet olivine74 for upper crust, lower crust, mantle lithosphere and asthenosphere, respectively (see Extended Data Table 1 for rheological and thermomechanical parameters). Our model involves frictional strain softening defined through a simplified piecewise linear function: (1) between brittle strain of 0 and 1, the friction coefficient is linearly reduced by a factor of 0.25; (2) for strains larger than 1, the friction coefficient remains constant at its weakened value. Viscous strain softening is included by linearly decreasing the viscosity derived from the ductile flow law by a factor of 0.25 between viscous strains 0 and 1.
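The frictional softening rule can be written out explicitly. A minimal sketch, assuming “reduced by a factor of 0.25” means the coefficient falls linearly to 0.25 times its initial value (the starting friction of 0.6 is illustrative, not a value from the model setup):

```python
import numpy as np

def softened_friction(mu0, strain, factor=0.25):
    """Piecewise-linear frictional strain softening: friction falls
    linearly from mu0 at strain 0 to factor * mu0 at strain 1, then
    stays at the weakened value for larger strains."""
    s = np.clip(strain, 0.0, 1.0)
    return mu0 * (1.0 - (1.0 - factor) * s)

mu0 = 0.6  # illustrative initial friction coefficient
vals = [softened_friction(mu0, s) for s in (0.0, 0.5, 1.0, 3.0)]
# → [0.6, 0.375, 0.15, 0.15]
```

The viscous softening described above follows the same piecewise-linear form, applied multiplicatively to the flow-law viscosity.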

    In our reference model, we use velocity boundary conditions with a total extension rate of 10 mm y−1, equivalent to 10 km Myr−1. To test the sensitivity of our overall findings to this extension rate, we varied the extension velocity to slow (5 mm y−1) and fast (20 mm y−1) (Supplementary Videos 2 and 3, respectively). In these cases, we found that the process of sequential delamination still occurs as described in the reference model. Furthermore, we conducted two model runs to assess the influence of time-dependent extension velocities and verified that, in these cases, the key results remain unchanged. Material flux through the left boundary is balanced by a constant inflow through the bottom boundary. The top boundary features a free surface69. For simplicity, we fix the right-hand side of the model. However, we verified that our conclusions do not change substantially if extension velocities are distributed symmetrically at both side boundaries (Supplementary Video 1). For example, in this symmetric model, three instabilities are generated with migration rates of 20 and 16 km Myr−1 (when measured relative to the absolute reference frame) that translate to 15 and 11 km Myr−1, respectively, when accounting for rightward advection with 5 km Myr−1 (that is, in the reference frame of the continent). These values fit reasonably well to the reference model (15–20 km Myr−1; Fig. 2).
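The reference-frame conversion quoted above is simple subtraction of the advection velocity; spelled out:

```python
# migration rates from the symmetric-extension run, absolute reference frame
absolute_rates = [20.0, 16.0]   # km/Myr
advection = 5.0                 # rightward advection of the continent, km/Myr

# rates in the reference frame of the continent
continent_frame = [v - advection for v in absolute_rates]  # → [15.0, 11.0]
```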

    The surface and bottom temperatures are kept constant at 0 °C and 1,420 °C, respectively, whereas lateral boundaries are thermally isolated. The initial temperature distribution is analytically equilibrated along 1D columns before the start of the model by accounting for crustal radiogenic heat contribution, thermal diffusivity, heat capacity and thermal boundary conditions. We associate the bottom of the conductive lithosphere with the initial depth of the compositional LAB at a temperature of 1,350 °C. Below the lithosphere, the initial temperature increases adiabatically with depth. To smooth the initial thermal gradient between the lithosphere and the asthenosphere, we equilibrate the temperature distribution of the model for 30 Myr before the onset of extension.

    The development of sequential instabilities and their migration velocity is a function of sublithospheric viscosity11. The occurrence of seismic anisotropy in the shallow asthenosphere suggests that deformation is dominated by dislocation creep78. We therefore use a nonlinear flow law using experimentally derived values for dislocation creep in olivine74, such as an activation energy of 480 kJ mol−1. Previous numerical models have shown that the occurrence of delamination is particularly controlled by the activation energy79, with a permissible range of 360–540 kJ mol−1. To explore the effect of activation energy on migrating instabilities, we conducted modelling experiments in which we varied the asthenospheric activation energy while keeping all other parameters constant. By decreasing the activation energy to 440 kJ mol−1, the viscosity of the shallow asthenosphere and metasomatized layer became roughly two times smaller. As a result, the lateral migration rates of the instabilities generated by this model were about twice as fast as in the reference model. The proportionality further agrees with estimates of migration speed from analytical considerations11. Rheological experiments as well as numerical and analytical modelling therefore indicate that the process of sequential delamination is plausible and that migration takes place at rates of tens of kilometres per million years—a speed that is comparable with the inferred wave of surface erosion within the plateau (Fig. 3).
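The roughly twofold viscosity drop is consistent with the Arrhenius scaling of dislocation creep, for which effective viscosity varies as exp(E/(nRT)). A back-of-envelope check, assuming a stress exponent n = 3.5 and an asthenospheric temperature of about 1,600 K (both illustrative values, not taken from the model setup):

```python
import math

R = 8.314           # gas constant, J mol^-1 K^-1
n, T = 3.5, 1600.0  # assumed stress exponent and temperature (K)

# effective viscosity for dislocation creep scales as exp(E / (n * R * T)),
# so lowering E from 480 to 440 kJ/mol reduces viscosity by this ratio:
ratio = math.exp((480e3 - 440e3) / (n * R * T))  # ≈ 2.4
```

A ratio of about 2.4 agrees with the roughly halved viscosity, and hence doubled migration rate, seen in the lower-activation-energy run.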

    Finally, we assessed the potential impact of the chosen model domain on our findings by increasing the depth to 410 km (Extended Data Fig. 5 and Supplementary Video 4). We chose this depth extent to avoid complexities associated with phase changes in the mantle transition zone. Most notably, this model shows that the process of sequential delamination occurs independently of the depth of the model domain. The migration velocity of the instabilities in this run is slightly more variable than in the shallower reference scenario but averages 15–20 km Myr−1, matching the reference model. Notably, the spacing between two instabilities does not increase proportionally to the height of the convection cell. The 410-km model features convection cells approximately twice as high as those in the reference model, whereas the distance between instabilities, when measured at the depth of the TBL, remains very similar (that is, the mean spacings for the reference model and 410-km model are 269 and 255 km, respectively). These observations lead us to conclude that, for the models to yield meaningful results, the TBL must be thin relative to the height of the convection cell—a criterion met in all cases described in our paper.

    When interpreting our results, the following model limitations must be kept in mind. (1) We focus on first-order thermomechanical processes and do not explicitly account for chemical alterations, melt generation and magma ascent. (2) For simplicity, we assume that the initial depth of the LAB does not vary on the thousand-kilometre scale. We verified that gradual changes in the morphology of the initial LAB did not affect our overall conclusions. (3) For simplicity, our generic modelling strategy neglects further processes that may derive from the impingement of mantle plumes, along-strike lithospheric heterogeneities and large-scale mantle-flow patterns.

    Analytical models and Monte Carlo simulations

    We performed analytical modelling to estimate the magnitude of uplift and denudation resulting from the removal of the cratonic lithospheric keel, as described in ref. 11, using the parameters provided in Extended Data Table 2. It must be noted that this experiment considers only the density contrast between colder lithosphere and hotter asthenosphere11 and does not include compositional changes, for example, related to melt metasomatism. We assess the likely magnitude of erosion and denudation (equations (1)–(3)) by performing a Monte Carlo simulation, sampling parameters from probability distributions. We applied both uniform and beta distributions to represent natural variability in the parameters (Extended Data Table 2). For simplicity, we assume beta distributions with a standard deviation of 30% of the mean.

    For the unstable TBL or keel, we considered a thickness (b) range of 17–18 km, as inferred from xenolith geotherm analysis11 (see Extended Data Fig. 6 for a schematic). This value represents half of the total thickness of the TBL, with the LAB situated near its middle. Similarly, the temperature increase across this layer, ΔT, is expected to lie in the range 140–165 °C, based on xenolith geotherm analysis11. Because our primary focus is on the southern Africa region, in which the most thermochronological constraints are available (Fig. 3), we used a range of densities of the eroded rock (ρc) of 2,800–3,000 kg m−3 (ref. 80) to reflect a dominantly basaltic catchment in this region during the Cretaceous5,25,48. The uniform and beta distributions yield mean and maximum values for denudation of approximately 0.8 and 1.6 km, respectively (Extended Data Fig. 7). Over an extended time frame, that is, 106–107 years, dynamic mantle support will invariably increase this value.
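    The Monte Carlo sampling described above can be sketched as follows. Equations (1)–(3) are not reproduced in this section, so the conversion from keel removal to denudation below uses a generic isostatic scaling (with an assumed thermal expansivity and asthenosphere density), not the paper's exact formulation; uniform sampling of the quoted ranges is shown for brevity.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 20_000

# Parameter ranges quoted in the text (xenolith-based constraints).
b = rng.uniform(17e3, 18e3, N)          # unstable-TBL thickness (m)
dT = rng.uniform(140.0, 165.0, N)       # temperature step across the TBL (degC)
rho_c = rng.uniform(2800.0, 3000.0, N)  # eroded-rock density (kg m^-3)

alpha = 3.0e-5                          # thermal expansivity (K^-1), assumed
rho_a = 3200.0                          # asthenosphere density (kg m^-3), assumed

# Generic isostatic scaling (NOT the paper's equations (1)-(3)):
# removing a cold keel with mean temperature deficit dT/2 gives thermal
# uplift, amplified under erosional unloading by rho_a / (rho_a - rho_c).
uplift = alpha * 0.5 * dT * b
denudation = uplift * rho_a / (rho_a - rho_c)

print(f"mean denudation ~ {denudation.mean() / 1e3:.2f} km")
```

With these illustrative constants the toy scaling gives denudation of a few hundred metres; the paper's full formulation yields somewhat larger values.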

    Thermochronological analysis

    In our study, we test a geodynamic model (Fig. 2) by determining the spatial trends of, and total amount of, cooling (related to exhumation) over a specific interval, that is, 180–0 Ma. Because we are concerned with the exhumation history of cratons, we mainly restrict our study to those regions inboard of escarpments (that is, hinterland plateaus). We compile the thermal histories for a total of 47 sites across arguably the most classic example, the Central Plateau of Southern Africa (Extended Data Fig. 4). Details on the thermal models used in the original thermochronology work are provided in Extended Data Table 3. To estimate the most probable timing of cooling and evaluate uncertainties across the 47 plateau sites (Extended Data Table 3), we use published best-fit tT paths, upper and lower envelopes encompassing time uncertainty, and individual model thermochronology curves (Extended Data Table 3). This information allows us to estimate the maximum temperature drop (max(dT/dt)) and its corresponding timing, t(max(dT/dt)), together with associated model uncertainties. Estimation of t(max(dT/dt)) is not exact, particularly for sites that exhibit prolonged, gradual temperature change or highly uncertain thermal histories. Different sources also estimate and present thermochronological uncertainty in different ways. Thus, for each site, we calculate a best estimate for t(max(dT/dt)) (denoted tmid in Extended Data Table 3), together with a range (that is, tmin and tmax) accounting for available (published) model data and uncertainty estimates. These are shown as error bars in Extended Data Fig. 8 and are summarized in Extended Data Table 3. Using tmin, tmid and tmax, we then fit a simple beta distribution to enable Monte Carlo sampling of the time uncertainty at each site (Fig. 3a,b). We apply the same approach to 24 sites from Eastern Brazil (Extended Data Fig. 10a).
Here several sites (n = 4) show a distinct two-stage cooling history, which we accommodate in our analysis (Extended Data Fig. 10d).

    For each thermochronology site, we interpolate to a regular (0.1-Myr resolution) time series over the period 180–0 Ma. We assume no (notable) temperature changes beyond the limits of the thermochronology data provided. We calculate the average temperature drop (dT/dt, in °C Myr−1) using a moving, symmetric window of ±0.9 Myr at each 0.1-Myr time step. The total temperature drop is calculated from the best-fit curves (dT total (bf) in Extended Data Table 3) and used to estimate the exhumation rate. Note that there is no available temperature-drop estimate for site Br90-39 (ref. 81).
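    A minimal numpy version of the interpolation and moving-window cooling-rate calculation above (the tT path shown is synthetic, not from any compiled site):

```python
import numpy as np

def cooling_rate(t_nodes, T_nodes, window=0.9, dt=0.1):
    """Interpolate a tT path to a regular dt-Myr grid over 180-0 Ma and
    compute the average cooling rate in a symmetric +/-window (Myr)."""
    t = np.arange(180.0, -dt / 2, -dt)              # 180 -> 0 Ma
    # np.interp needs ascending x, so reverse the (descending-age) nodes.
    T = np.interp(t, t_nodes[::-1], T_nodes[::-1])
    n = int(round(window / dt))
    rate = np.full_like(T, np.nan)
    # Central difference over the +/-window (degC per Myr of elapsed time):
    # older sample (t + window) minus younger sample (t - window).
    rate[n:-n] = (T[:-2 * n] - T[2 * n:]) / (2 * window)
    return t, rate

# Toy tT path: slow background cooling with a fast pulse between 100 and 90 Ma.
t_nodes = np.array([180.0, 100.0, 90.0, 0.0])   # Ma
T_nodes = np.array([200.0, 160.0, 60.0, 20.0])  # degC
t, rate = cooling_rate(t_nodes, T_nodes)
```

For this path the peak rate recovered is 10 °C Myr−1, timed within the 100–90-Ma pulse, as expected.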

    For all sites, tmid is the best estimate of the timing of maximum temperature drop obtained from the best-fit curve for that locality. For sites at which we have upper, lower and best-fit curves, tmin and tmax are defined as the minimum and maximum time at which dT/dt ≥ 60% of max(dT/dt) over all three curves. Note that for some sites (for example, refs. 29,50,53), the upper and lower curves are described as defining the upper and lower 95% credible interval. For other sites (for example, ref. 82), the upper and lower curves define the ‘good-fit’ envelope.

    Where we have a best-fit curve and further minimum/maximum time estimates, we define tmin as the earlier time of either the minimum estimate or the first point at which dT/dt ≥ 60% of max(dT/dt) on the best-fit curve. Similarly, tmax is defined as the later time of either the maximum estimate or the latest point at which dT/dt ≥ 60% of max(dT/dt).

    For sites with several thermal history model runs, we calculate the earliest and latest times at which dT/dt ≥ 60% of the maximum over all model realizations, including ‘best’, ‘good’ and ‘acceptable’ model runs. For a given site, we define tmin as the 10th percentile minimum time and tmax as the 90th percentile maximum time estimate calculated from all model runs. The best estimate tmid is defined as the time of the peak max(dT/dt) for the best-fit model run. For sites at which we do not have individual model runs, the best estimate tmid is defined as the midpoint of the time interval at which dT/dt ≥ 90% of max(dT/dt) for the best-fit curve, to accommodate the lower resolution of these data. This approach gives uncertainty estimates that should be broadly comparable across localities.
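    The beta-distribution fit to (tmin, tmid, tmax) could be sketched as follows. The paper does not state its fitting rule, so this version matches the distribution mode to tmid on the interval [tmin, tmax] with an assumed concentration parameter:

```python
import numpy as np

def beta_from_bounds(t_min, t_mid, t_max, conc=6.0, n=20_000, seed=0):
    """Sample a beta distribution on [t_min, t_max] whose mode sits at t_mid.
    `conc` (= alpha + beta) controls how peaked the distribution is and is an
    assumed tuning constant -- the paper does not specify its fitting scheme."""
    m = (t_mid - t_min) / (t_max - t_min)   # mode rescaled to [0, 1]
    # For a beta distribution, mode = (a - 1) / (a + b - 2); solving with
    # a + b = conc fixed gives:
    a = m * (conc - 2.0) + 1.0
    b = (1.0 - m) * (conc - 2.0) + 1.0
    u = np.random.default_rng(seed).beta(a, b, n)
    return t_min + (t_max - t_min) * u

# Hypothetical site: best-estimate cooling peak at 92 Ma, bounds 110-80 Ma.
samples = beta_from_bounds(80.0, 92.0, 110.0)
```

The samples stay within [tmin, tmax] and cluster near tmid, which is all the downstream Monte Carlo analysis requires.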

    We used the same published thermochronological constraints for 46 sites to estimate the total amount of exhumation at each site (Fig. 3b and Extended Data Table 3). Using best-fit curves for each site, we compute the maximum modelled temperature drop Tmax − Tmin over the interval 180–0 Ma. We then divide the temperature difference (Tmax − Tmin) by the geothermal gradient, sampling from a beta distribution to capture the known uncertainty in the geothermal gradient. Geothermal gradients in Southern Africa today are estimated to range between 15 and 33 °C km−1 on average (ref. 83). Naturally, no single value of geothermal gradient can apply to the entire plateau. Hence, we consider a compilation spanning the present-day Southern African region84 to capture the plausible range. We represent uncertainty in the geothermal gradient by a beta distribution on the interval [10, 60] °C km−1 (informed by ref. 84) with a mean of 28 °C km−1, standard deviation 7.5 °C km−1 and parameters α = 3.3264 and β = 5.9136. The upper end of this range accounts for the very high values (38–46 °C km−1) favoured by Stanley et al.18 for Cretaceous Southern Africa. Distance is the shortest distance measured from the point location of the thermochronology site (longitude/latitude; Extended Data Table 3) to the COB line, calculated in R using the dist2Line function from the geosphere package64. Uncertainty in distance is again assumed to be ±90 km (see the section ‘Characterizing escarpment orientations’). We sample a distance offset using a beta distribution on the interval [−90, 90] km, with a mean of 0 km and standard deviation of 36 km, with parameters α = β = 2.625. In total, 20,000 samples are generated for both the geothermal gradient and distance offset for each thermochronology location (Fig. 3b).
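    The stated beta parameters for the geothermal gradient and the distance offset follow from moment matching of the mean and standard deviation on each interval; a short check-and-sample sketch:

```python
import numpy as np

def beta_params(mean, sd, lo, hi):
    """Moment-match (alpha, beta) for a beta distribution rescaled to [lo, hi]."""
    m = (mean - lo) / (hi - lo)          # mean on [0, 1]
    s = sd / (hi - lo)                   # sd on [0, 1]
    k = m * (1.0 - m) / s**2 - 1.0       # concentration alpha + beta
    return m * k, (1.0 - m) * k

# Geothermal gradient: interval [10, 60] degC/km, mean 28, sd 7.5
a_g, b_g = beta_params(28.0, 7.5, 10.0, 60.0)    # -> (3.3264, 5.9136)

# Distance offset: interval [-90, 90] km, mean 0, sd 36
a_d, b_d = beta_params(0.0, 36.0, -90.0, 90.0)   # -> (2.625, 2.625)

rng = np.random.default_rng(1)
grad = 10.0 + 50.0 * rng.beta(a_g, b_g, 20_000)    # degC/km
offs = -90.0 + 180.0 * rng.beta(a_d, b_d, 20_000)  # km
```

Moment matching reproduces the α and β values quoted in the text exactly, confirming the stated mean/standard-deviation pairs.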

    Finally, we plot profiles of AFT and AHe ages across the Southern African (Extended Data Fig. 9) and Eastern Brazilian plateaus (Extended Data Fig. 10), using the data compilations of Stanley et al.18 and Novo et al.85, respectively. Here we constructed the profiles across the plateaus perpendicular to the continental margins and escarpments. In the case of Southern Africa, we avoided the southern part of the plateau in which the escarpment strikes roughly east–west, as it would cause interference (sampling parallel to an escarpment at which young ages are expected). In Eastern Brazil, we aligned our profile to capture the region at which most AHe ages exist and extend further inland. We include a buffer of 100 km to capture as many points as possible along a given section. In the case of Southern Africa, we plot the closest distance between the measurement point (AFT or AHe) and the escarpment at the western end of the profile (Extended Data Fig. 9). The distances were measured using the dist2Line function in the R package geosphere64, following the procedure outlined above for the escarpment analysis.

    Accounting for potential kimberlite-related cooling

    The cooling detected by thermochronology studies can most parsimoniously be explained by denudation (for example, refs. 5,24,25,48). However, a component of the inferred cooling across Southern Africa (Fig. 3 and Extended Data Fig. 8) is feasibly related to magmatic cooling, for example (in the case of cratons), associated with kimberlite volcanism. We identify these potential cases to evaluate where the cooling is more likely to be denudational. Given the typical durations of cooling modelled in previous thermochronology investigations, these trends are unlikely to be driven by kimberlite magmatic cooling. Kimberlites are monogenetic volcanoes with probable eruption durations of hours or months86. Further, it has been shown that the largest kimberlite diatremes should cool down to ambient temperatures within 2–3 kyr of eruption onset87. Nevertheless, to explore whether such cooling could be important, we investigate all cases in which a kimberlite eruption age overlaps in time and space with the known locations of cooling (that is, thermochronology sites). In cases in which there was overlap, we completely (and conservatively) remove modelled cooling for a fixed time interval either side of the kimberlite radiometric age, using the kimberlite age compilation of Tappe et al.88. Specifically, we remove all cooling for sites at which there is a record of a kimberlite eruption dated within ±2 Myr (relative to the thermochronology time) and conservatively within a radius of 50 km, using thermochronology coordinates provided in Extended Data Table 3. Those purely denudational trends and those that account for potential kimberlite cooling are distinguished as red and black lines, respectively (Extended Data Fig. 8a). 
    The above approach is considered conservative given the small length scales of kimberlite pipes (typically hundreds of metres) and the short-lived nature of kimberlite eruptions and subsequent diatreme cooling (several thousand years at most; refs. 86,87).

    Distances were estimated from longitude/latitude coordinates for the thermochronology sites and kimberlite records using the R geosphere64 distm function, using distGeo to obtain an accurate estimate of the geodesic, based on the WGS84 ellipsoid. Our analysis shows that even using conservative spatial and temporal bounds, cooling directly linked to kimberlite volcanism makes, if anything, a comparatively minor contribution to the overall cooling trends (Extended Data Fig. 8a). Notably, the period experiencing the most frequent (possible) kimberlite-related cooling between 100 and 80 Ma is associated with a marked increase in sediment accumulation rates offshore Southern Africa; previously, this event has been linked to a concomitant massive increase in onshore denudation89. Hence we argue that the observed cooling is largely denudational rather than magmatic (corroborating earlier suggestions48,89). We instead argue that the temporal coincidence between some kimberlites and modelled cooling is probably related to a common fundamental underlying mechanism (for example, lithospheric delamination), as opposed to a causal link between kimberlites and cooling.
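    The spatio-temporal screen for kimberlite overlap might look like the following sketch. It substitutes a spherical great-circle (haversine) distance for the WGS84 geodesic computed with distGeo, which is an adequate approximation at the 50-km scale, and the kimberlite records shown are hypothetical:

```python
import numpy as np

def haversine_km(lat1, lon1, lat2, lon2, R=6371.0):
    """Great-circle distance (km); a spherical stand-in for the WGS84
    geodesic used in the paper's R analysis."""
    p1, p2 = np.radians(lat1), np.radians(lat2)
    dp = p2 - p1
    dl = np.radians(lon2) - np.radians(lon1)
    h = np.sin(dp / 2)**2 + np.cos(p1) * np.cos(p2) * np.sin(dl / 2)**2
    return 2 * R * np.arcsin(np.sqrt(h))

def kimberlite_affected(site_lat, site_lon, t_cool, kimberlites,
                        radius_km=50.0, dt_myr=2.0):
    """True if any kimberlite erupted within radius_km of the site and
    within +/-dt_myr of the cooling time t_cool (Ma)."""
    return any(
        haversine_km(site_lat, site_lon, k_lat, k_lon) <= radius_km
        and abs(k_age - t_cool) <= dt_myr
        for k_lat, k_lon, k_age in kimberlites
    )

# Hypothetical records: (lat, lon, eruption age in Ma)
kimbs = [(-28.7, 24.8, 92.0), (-26.1, 28.0, 120.0)]
print(kimberlite_affected(-28.5, 24.9, 91.0, kimbs))  # nearby and within 2 Myr
```

Cooling at a site flagged True would be removed from the denudational trend, mirroring the conservative exclusion described above.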

    Landscape-evolution model

    To investigate the evolution of topography, total erosion and erosion rate over time, we use the Fastscape landscape-evolution model, described in detail in ref. 58 (see ‘Code availability’). The model solves the stream power law (SPL), which states that the rate of change of surface elevation, h, owing to river incision is proportional to local slope, S, and discharge, ϕ, each raised to a power (n and m, respectively):

    $$\frac{\partial h}{\partial t}=-K{\phi }^{m}{S}^{n}$$

    (4)

    This relationship can be traced back to the pioneering works of Gilbert90 and, from a computational point of view, Howard et al.91. Assuming that rainfall is relatively uniform (compared with slope and drainage area), the relationship is often simplified to yield:

    $$\frac{\partial h}{\partial t}=-{K}_{{\rm{f}}}{A}^{m}{S}^{n}$$

    (5)

    the canonical form of the SPL, in which A is drainage area92. It is also known as the stream power incision model (SPIM), and its appropriateness for representing the main driver of landscape evolution in high-relief areas has been amply discussed93. In particular, much work has focused on the values of the exponents (m and n). It is unclear what the optimum values should be or on what they depend, but the ratio of the two, m/n, is close to 0.5 and can be derived from the concavity of river profiles. Here we used n = 1 and m = 0.4, as is commonly done. The method used to solve it is fully described in ref. 58. The other important equation that is solved is the biharmonic equation representing the elastic isostatic flexure of the lithosphere:

    $$D({w}_{xxxx}+2{w}_{xxyy}+{w}_{yyyy})=({\rho }_{{\rm{a}}}-{\rho }_{{\rm{s}}})gw+{\rho }_{{\rm{s}}}g(h-{h}_{0})$$

    (6)

    in which D is the flexural rigidity, given by:

    $$D=\frac{E{T}_{{\rm{e}}}^{3}}{12(1-{\nu }^{2})}$$

    (7)

    in which E is Young’s modulus, Te the effective elastic plate thickness, ν Poisson’s ratio, w the deflection of the lithosphere caused by the difference in height, h, and a reference value h0, g the gravitational acceleration and ρs and ρa are the surface and asthenospheric densities, respectively. It is solved using the spectral method developed in ref. 94. The version of Fastscape used in our study has been used in many publications, including ref. 95.
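    As a numerical illustration, the flexural rigidity of equation (7) for the Te = 20 km and densities used in the landscape model, with assumed typical values for E and ν (neither is stated in this section):

```python
# Flexural rigidity from equation (7).
E = 70e9           # Young's modulus (Pa), assumed typical value
nu = 0.25          # Poisson's ratio, assumed typical value
Te = 20e3          # effective elastic thickness (m), as in the model
g = 9.81           # gravitational acceleration (m s^-2)
rho_s, rho_a = 2800.0, 3200.0  # surface and asthenosphere densities (kg m^-3)

D = E * Te**3 / (12.0 * (1.0 - nu**2))
# Characteristic flexural parameter: sets the wavelength over which the
# lithosphere responds isostatically to (un)loading.
alpha_f = (4.0 * D / ((rho_a - rho_s) * g))**0.25
print(f"D = {D:.2e} N m, flexural parameter ~ {alpha_f / 1e3:.0f} km")
```

For these values D is about 5 × 10^22 N m and the flexural parameter is of order 85 km, that is, erosional unloading is compensated over scales of roughly a hundred kilometres rather than point-wise.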

    For our purposes, the model captures the surface evolution of a continent over 50 Myr. The initial plateau topography (h0) was set to 500 m (considered broadly representative for Mesozoic South Africa96 and stable cratons globally15) and the erodibility coefficient, Kf, was set to 1 × 10−5 m(1−2m) per year, in which m is the area exponent in the SPL (m = 0.4). The topography and erosion characteristics were modelled using a Gaussian-shaped wave of uplift with a velocity of 20 km Myr−1 and half-Gaussian width of 200 km, with both properties informed by our thermomechanical simulations (Fig. 2). The amplitude was set to achieve 2,000 m of uplift after isostatic adjustment, taking into account the initial, pre-existing topography. This uplift wave mimics the dynamic topography generated by convective mantle flow. The maximum uplift rate varies from run to run, as it depends on all the other parameters, including wave width, velocity, relative densities and initial topography. For the ‘fixed’ parameters, that is, wave width, velocity and density ratio, the uplift rate varies linearly with the initial topography, decreasing from approximately 1 mm year−1 to 0 as the assumed initial topography varies from 0 to 2,000 m.
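    The migrating uplift wave can be sketched as below. The peak amplitude is illustrative (the calibrated amplitude depends on the isostatic adjustment described above), and the stated half-Gaussian width is interpreted here as the Gaussian length-scale parameter:

```python
import numpy as np

v = 20.0      # migration velocity (km/Myr), from the text
w = 200.0     # Gaussian width parameter (km), from the text
U0 = 1.0e-3   # peak uplift rate (m/yr), illustrative only
x0 = 500.0    # starting position of the wave (km), as in the misfit setup

def uplift_rate(x, t):
    """Uplift rate (m/yr) at position x (km) and model time t (Myr):
    a Gaussian bump whose centre translates inland at velocity v."""
    return U0 * np.exp(-((x - x0 - v * t)**2) / (2.0 * w**2))

x = np.linspace(0.0, 2000.0, 2001)
peak_start = x[np.argmax(uplift_rate(x, 0.0))]   # 500 km at t = 0
peak_later = x[np.argmax(uplift_rate(x, 25.0))]  # 1000 km after 25 Myr
```

After 25 Myr the peak has moved 500 km inland, consistent with the imposed 20 km Myr−1 migration.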

    The topographic response in the model includes a flexural isostatic response with a crustal density of 2,800 kg m−3 and asthenospheric density of 3,200 kg m−3, as well as an effective elastic thickness (Te) of 20 km. This thickness is considered an average value for continental lithosphere/crust97 (note that ref. 97 argues that values of Te are often overestimated), which is further supported by the fact that it is not physically possible to ‘pin’ an escarpment as a drainage divide if Te is too high21. Any surface erosion in the model leads to further isostatic uplift, meaning that a given net lowering of the topography requires about 6–7 times that amount of erosion. This value is broadly in line with our independently derived analytical model (equations (1)–(3)), which predicts up to 10 times amplification of uplift by erosion.

    In terms of boundary conditions, the left-hand and right-hand sides of the model have a fixed boundary at h = 0, representing the ‘base level’, which—in this case—corresponds to the ocean. The other boundary conditions are defined as periodic, meaning that a river flowing towards one of these boundaries (north or south) reappears on the other side of the model (south or north). This approach is commonly used when solving the SPL to avoid boundary effects. The routing algorithm98, which computes the direction of water flow, assumes that every drop of water falling on the model must eventually escape through one of the base-level boundaries. However, the specific path of escape is not predetermined; rather, it is internally computed from the local slopes and thus following the topographic evolution set by the uplift and fluvial erosion/carving. In the model results (Fig. 4), some water escapes through the left boundary and some escapes through the right boundary. Whether the escarpment becomes a divide and later evolves into a ‘pinned divide’ is not prescribed but instead results from a delicate balance between uplift and erosion. These points have been extensively discussed in the literature and are summarized in ref. 21.

    We conducted a sensitivity analysis by assessing the range of values for h0 that would provide a good fit to observations. To do this, and to determine the quality of our model, we calculated a misfit function assuming that the optimum plateau height is 1,650 ± 250 m (in line with expectations32,33 and the present-day topography, which is 1.0–1.5 km on average15; Fig. 1d), optimum denudation is 2,750 ± 500 m (refs. 5,25,29,31,48,50,53) and the optimum final position of the divide is 650 ± 100 km (assuming that the wave started moving at 500 km from the left-hand side of the model). As we are concerned with the final position of the drainage divide, we use the upper end of our empirical estimates of escarpment position (Fig. 1g). We perform 120 numerical experiments of landscape evolution to calculate the misfit function (Fig. 4d and Extended Data Figs. 11 and 12). A misfit value less than 1 indicates that realistic conditions are met (that is, the model predicts values that fall between the optimum value ± the assumed uncertainty) (white contours in Fig. 4d). Our analysis also identifies the maximum limit of acceptable values for Kf that would allow the plateau to survive until today (Fig. 1d), without prohibitively high levels of erosion (dashed line in Fig. 4d). We find that the range of values for h0 that provide a good fit to existing constraints is from approximately 0 to 1,000 m. On this basis, our preferred model (Fig. 4a–c) has a misfit value less than 1 and obeys observational constraints.
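    One way to implement the misfit is as the worst-case normalized residual, which is below 1 exactly when every predicted quantity falls within its optimum ± uncertainty; the paper does not give the functional form, so this max-norm version is an assumption:

```python
# Optimum values and uncertainties from the text.
TARGETS = {                         # (optimum, uncertainty)
    "plateau_height_m": (1650.0, 250.0),
    "denudation_m": (2750.0, 500.0),
    "divide_position_km": (650.0, 100.0),
}

def misfit(pred):
    """Worst-case normalized residual over the three observables;
    misfit < 1 iff every prediction lies within optimum +/- uncertainty."""
    return max(abs(pred[k] - opt) / unc for k, (opt, unc) in TARGETS.items())

good = {"plateau_height_m": 1500.0, "denudation_m": 2600.0,
        "divide_position_km": 700.0}
print(misfit(good) < 1.0)  # True: all three fall within their bounds
```

A run exceeding any single bound (for example, a plateau 2,200 m high) would return a misfit above 1 and be rejected.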

    Predicting thermochronology ages in the surface-process model

    To identify model limitations and guide future testing of our geodynamic model (for example, with improved spatial resolution of thermochronology studies and/or extra borehole data), we used Fastscape to predict AFT and AHe ages across the plateau and through time (Fig. 5) using the same model configuration as before (Fig. 4). To do this, we solved the 1D conduction/advection equation at each point to predict thermal histories, from which ages are then computed. For this, we use the erosion-rate history predicted by Fastscape. Our predictions exclude radiogenic heating effects, leading to an initially linear conductive steady-state geothermal gradient—a standard practice when predicting ages for relatively low-temperature systems such as AHe and AFT that are not very sensitive to the curvature of the geotherm. The ages are predicted from the thermal histories using the same algorithms as in the Pecube software99,100: for AHe, the solid-state diffusion equation is solved in 1D inside a grain of given size, assuming a cylindrical geometry, following the algorithm in ref. 101; for AFT, an annealing algorithm with the parameters in ref. 102 is used. Here the effects of radiation damage are omitted, potentially changing the absolute values of the predicted ages, but—notably—not the first-order patterns of interest here. Our models cover a 50-Myr period, with an extra 50 Myr added to computed ages to account for an assumed ‘quiet’ post-uplift phase since the late Cretaceous, consistent with present-day low erosion rates (as measured by cosmogenic methods, for example, ref. 103), even along the escarpment. Note that, east of the last position of the mantle convective instability in our model (Fig. 4), the ages return to their assumed ‘un-reset’ value, that is, ≥100 Ma (Fig. 5).
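    As a first-order stand-in for the full Pecube diffusion/annealing calculations, a toy estimator can read a predicted age as the last time a thermal history cooled below a nominal closure temperature (the Tc values and the tT path below are assumed, illustrative only):

```python
import numpy as np

def closure_age(t_ma, T_path, Tc):
    """Last time (Ma) the path crossed below Tc; t_ma descends to 0 Ma.
    A crude proxy for a thermochronometric age -- real AHe/AFT ages require
    solving diffusion/annealing kinetics, as done in Pecube."""
    below = T_path < Tc
    if below.all():
        return t_ma[0]                 # never hotter than Tc: unreset
    idx = np.where(~below[:-1] & below[1:])[0]  # transitions hot -> cold
    if idx.size == 0:
        return 0.0                     # still above Tc at present
    return t_ma[idx[-1] + 1]

t = np.linspace(180.0, 0.0, 1801)      # Ma, 0.1-Myr steps
# Synthetic history: isothermal at 120 degC, then cooling at 2 degC/Myr
# from 95 Ma, levelling off at a 20 degC surface temperature.
T = np.where(t > 95.0, 120.0, 120.0 - (95.0 - t) * 2.0)
T = np.maximum(T, 20.0)
```

With nominal closure temperatures of ~70 °C (AHe-like) and ~110 °C (AFT-like), this history yields proxy ages near 70 Ma and 90 Ma, respectively, illustrating why the lower-Tc system records the later part of the cooling.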

    Testing broader applicability

    To explore the broader applicability of our model, we consider the Namibian margin as well as the escarpments and associated plateaus in Eastern Brazil and the Western Ghats (Fig. 1 and Extended Data Fig. 1). First, rifting commenced along the Namibian margin between 145 and 139 Ma, followed by continental break-up occurring from about 116 to 113 Ma (refs. 54,104). Indeed, inverse modelling of AHe dates from the Namibian margin, combined with AFT data, reveals an early abrupt cooling phase (10 °C Myr−1) from 130 to 100 Ma (ref. 105)—probably related to rift shoulder uplift and escarpment retreat—which mirrors South African and Brazilian patterns during continental break-up (see main text). Importantly, offshore AFT analysis and sediment accumulation records overlapping this cooling phase (from 110–80 Ma) indicate the subsequent removal of Precambrian material from the cratonic interior106, supporting a spatiotemporal shift inland in the locus of uplift and denudation. Consistent with this, large-scale denudation (that is, 2–3 km in total from 140 to 70 Ma) extended several hundred kilometres inboard of the escarpment107, sampling Precambrian lithologies from the Damara Belt and Otavi Group. More than half of this denudation occurred after rifting, closely matching observations in South Africa (Fig. 3) and consistent with our landscape-evolution models (Fig. 4b). Thermal history models across this broader region indicate near-continuous exhumation since the Upper Cretaceous108, suggestive of a continuing process. This is again consistent with the concept of sequential migration, with uplift and denudation occurring successively further inland. We can test this concept by considering the distance from the site of exhumation to the nearest COB. Our model (Figs. 2 and 4) predicts substantial exhumation in the interior plateau regions of Southern Namibia around 65–70 Ma. 
    Supporting this, Gallagher and Brown107 infer 1–2-km denudation in this region during this period, whereas Wildman et al.106 report a Damara uplift phase within a similar time frame from 65 to 60 Ma. Although the specifics of the migration of uplift and denudation into the continental interior remain uncertain given available data, collectively, these observations support a common mechanism across the wider Southern African region associated with rifting and break-up.

    Like the Namibian margin, Eastern Brazil underwent rifting at about 139 Ma and continental break-up at around 118 Ma (ref. 54). To examine the applicability of our model to Eastern Brazil, and noting a paucity of thermal history analyses in the continental interior, we initially focus on the analysis in ref. 57, which extends nearly 1,000 km inland, enabling a comparison with trends on the Southern African plateau. We also consider AFT and AHe ages from across Eastern Brazil using a recent compilation of 1,248 ages85, predominantly featuring young ages in lowlands and near escarpments. As predicted by our landscape-evolution models (Figs. 4b and 5), we observe younger ages near the escarpments owing to high magnitudes of total erosion there (Extended Data Fig. 10a). We constructed a profile of AHe and AFT ages in the section in which AHe ages extend furthest inland and observed that, although scattered, the AHe ages are youngest near the escarpments and increase with distance into the interior of the plateau (Extended Data Fig. 10c), similar to the observed trend in Southern Africa (Extended Data Fig. 9b).

    Our landscape models predict that AFT ages may be much older on the plateau (Fig. 5). This is observed in the São Francisco Craton and marginal orogens, in which old ages (predominantly 350–280 Ma) relate to the Gondwanide orogeny (Extended Data Fig. 10a,b). It is perhaps not surprising that the AFT ages are barely, if at all, reset in the continental interior, in contrast to AHe (Extended Data Fig. 10a,c), because the closure temperature for the AFT system is higher than that for AHe (Fig. 5) and the total exhumation across the highlands is low, at about 1.0–1.5 km (Extended Data Fig. 10e). Further, in a study of Northeastern Brazil, Sacek et al.109 recognized that the high variability in erodibility, and consequently AFT ages, may result from the formation of duricrust layers or cangas, which could lead to highly variable erosion rates, even under a smoothly varying uplift rate. This factor is probably important in the studied region as well.

    Nonetheless, many interior regions have experienced some degree of Cenozoic unroofing. Harman et al.56 identified early cooling around 130 Ma at the São Francisco Craton margins during rifting, followed by a later event (circa 60–80 Ma) affecting the cratonic interior of Northeastern Brazil, analogous to the uplift and denudation history of Namibia. Although Harman et al.56 attribute this to intracratonic basin inversion (considering the time separation, these authors rule out a role for rifting), the observed uplift of interior regions at these times is consistent with our model predictions (Extended Data Fig. 10c–e), thus offering a new explanation. Extrapolating from Southern African trends (that is, migration rates of the erosional wave; Fig. 3d), our model would predict exhumation onset in these more distal interior regions around 80–25 Ma, as well as recent and continuing exhumation. The observed cooling from about 80 Ma to the present day supports this prediction26,30,56,57,85,110,111. Although we do not suggest that all cooling in this region—characterized by several geotectonic provinces, cratonic fragments and tectonic weaknesses (for example, ref. 30)—is exclusively linked to mantle instabilities tied to rifting, our model is well supported by existing thermochronological data. The predictions from our model and thermochronology should prompt and inform further measurements, especially AHe analyses, in relatively understudied inland/highland regions of Eastern Brazil.

    Testing our model in the Western Ghats is more challenging; however, available data cannot definitively rule it out. The escarpment is associated with the rifting between Madagascar and India, which started around 100 Ma, followed by continental break-up between 86 and 82 Ma (refs. 54,104) (Extended Data Fig. 1). Another major escarpment along the eastern passive margin of India112 is associated with the earlier separation of India and Antarctica around 125–120 Ma (ref. 54). Further rifting and break-up between India and the Seychelles Microcontinent occurred at roughly 66 Ma. Because peninsular India is narrow compared with internally drained continental hinterlands such as Southern Africa and Brazil113, in the context of our model (Figs. 2 and 4), interior hinterland regions are expected to exhibit interference patterns in exhumation related to the activity of diachronous rift systems. This issue greatly complicates the identification of drivers of post-rift uplift in the continental hinterlands. During the Cretaceous and Cenozoic, regions inboard of the Western Ghats escarpment experienced high denudation rates (ranging from 50 to 150 m Myr−1, depending on the AFT parameters used) associated with syn-rift and post-rift phases113, consistent with model expectations (Fig. 4c). Although AFT-derived and mass-balance-derived denudation rates113 favour a peak in denudation intensity coinciding with Seychelles rifting114,115, it is hard to exclude the possibility of this signal constituting a lagged response to the Madagascar break-up, with post-rift exhumation targeting interior regions. Future studies can apply thermochronology to test these models.

    Further references are cited in the extended data (refs. 81,116,117).


  • China’s Chang’e-6 collects first rock samples from Moon’s far side



    China’s Chang’e-6 robotic Moon-lander has wrapped up two days of drilling into the surface of the far side of the Moon and the ascender has blasted back into space. The spacecraft, with its precious rock samples, is now in lunar orbit, waiting to dock with the orbiter for the trip back home. It is the first time samples have been taken from the far side of the Moon.

    The Chang’e-6 lander made a successful touch-down on the Moon early on Sunday morning (Beijing time) at a pre-selected site within the South Pole-Aitken (SPA) basin, the oldest and largest lunar impact basin. Since then, Chang’e-6 has autonomously deployed its drill and scoop to collect soil and lunar regolith — the rocky material covering the surface of the Moon. Together the samples are expected to weigh up to two kilograms. “The sampling process has gone very smoothly,” says Chunlai Li, the mission’s deputy chief designer at the National Astronomical Observatories in Beijing.

    With the specimens loaded and sealed, the ascender fired its engine at 7:38 a.m. on Tuesday to lift off from the landing site, and reached the designated lunar orbit six minutes later, according to the China National Space Administration (CNSA).

    “China is successfully carrying out complex operations on the lunar far side,” says Jonathan McDowell, an astronomer at the Harvard-Smithsonian Center for Astrophysics in Cambridge, Massachusetts. “The launch of the ascent stage was the first time anyone has taken off from the far side.”

    Captivating basalt

    According to Li, Chang’e-6’s precise landing site is 41.63 degrees south, 153.99 degrees west, which means that the samples will consist mainly of basalts: dark-coloured, cooled lava. Similar material has previously been brought back to Earth for analysis from the Moon’s near side.

    The basalts are estimated to be around 2.4 billion years old, much younger than the SPA basin itself, says planetary geologist Alfred McEwen at the University of Arizona, Tucson. “There should also be fragments of older rocks in the regolith they collected,” McEwen says.

    Scientists hope to use samples returned from the SPA to precisely measure the basin’s age, and improve their understanding of the early history of the Earth and other planets, notes planetary geologist Jim Head at Brown University, in Providence, Rhode Island.

    Regardless of whether this information can be gleaned from them, the scientific value of the Chang’e-6 samples, if they are successfully returned, will be very high, he says. They will be the first rocks ever retrieved from the Moon’s far side, which is dramatically different from the near side. “Obtaining dates and compositional information from the many hundreds of fragments sampled by the Chang’e-6 drill and scoop is like having a treasure chest full of critical parts of lunar history, and will very likely revolutionize our view of the entire Moon,” he says.

    Rock then dock

    In the coming days, Chang’e-6 will face one of the trickiest parts of the whole mission — rendezvous and docking of the ascender with the orbiter and transferring the samples, says McDowell. “You have two robots orbiting the Moon separately at 5,900 kilometres per hour, which have to come together and touch each other gently without crashing into each other,” he says.
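    The quoted speed can be sanity-checked with the circular-orbit formula v = sqrt(GM/r). A quick back-of-the-envelope sketch, assuming an orbital altitude of roughly 100 km (the article does not specify the orbit):

    ```python
    import math

    # Back-of-the-envelope check of the ~5,900 km/h figure using the
    # circular-orbit speed v = sqrt(GM / r). The 100 km altitude is an
    # assumption for illustration; the article does not give the orbit.

    GM_MOON = 4.9048695e12   # lunar gravitational parameter, m^3 s^-2
    R_MOON = 1_737_400.0     # mean lunar radius, m
    ALTITUDE = 100_000.0     # assumed orbital altitude, m

    r = R_MOON + ALTITUDE
    v_ms = math.sqrt(GM_MOON / r)   # circular orbital speed, m/s
    v_kmh = v_ms * 3.6

    print(f"circular orbit speed at 100 km altitude: {v_kmh:.0f} km/h")
    ```

    At ~100 km altitude this works out to roughly 5,880 km/h, in line with the figure McDowell quotes.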

    The Chang’e-6 samples’ trip home is expected to last about three weeks, ending with a return capsule piercing through Earth’s atmosphere and landing in the grasslands of Siziwang Banner in northern China’s Inner Mongolia autonomous region around 25 June.

    Planetary scientist Michel Blanc at the Research Institute in Astrophysics and Planetology in Toulouse, France, watched the launch of Chang’e-6 from Hainan island a month ago and has followed the key steps of the mission. He says that the scientific impact of the mission cannot be over-emphasized: it will bring back the first samples not only from the lunar far side, but also from one of the lowest-altitude regions of the Moon, where the surface might be closest to the mantle.

    “We planetary scientists are crossing fingers for the success of the rest of the mission,” Blanc says.
