Tag: Energy

  • An Underwater Data Center in San Francisco Bay? Regulators Say Not So Fast

    NetworkOcean isn’t alone in its ambitions. Founded in 2021, US-based Subsea Cloud operates about 13,500 computer servers in unspecified underwater locations in Southeast Asia to serve clients in AI and gaming, says the startup’s founder and CEO, Maxie Reynolds. “It’s a nascent market,” she says. “But it’s currently the only one that can handle the current and projected loads in a sustainable way.”

    Subsea secured a permit for each site and uses remotely operated robots for maintenance, according to Reynolds. It plans to fire up its first underwater GPUs next year and also is considering private sites, which Reynolds says would ease permitting complexity. Subsea claims it isn’t significantly increasing water temperature, though it hasn’t published independent reviews.

    NetworkOcean also believes it will cause negligible heating. “Our modeling shows a 2-degree Fahrenheit change over an 8-square-foot area, or a 0.004-degree Fahrenheit change over the surface of the body” of water, Mendel says. He draws confidence from Microsoft’s finding that water a few meters downstream from its testing warmed only slightly.

    Protected Bay

    Bay Area projects can increase water temperatures by no more than 4 degrees Fahrenheit at any time or place, according to Mumley, the ex-water board official. But two biologists who spoke to WIRED say any increase is concerning to them because it can incubate harmful algae and attract invasive species.

    Shaolei Ren, an associate professor of electrical and computer engineering at the University of California, Riverside, who studies the environmental impact of AI, estimates that an underwater data center of NetworkOcean’s announced capacity, running fully utilized, would warm the surrounding water about as much as operating 300 bedroom space heaters. (Mendel disputes the concern, citing Project Natick’s apparently minimal impact.) A few years ago, a project that proposed using San Francisco Bay water to cool a data center on land failed to win approval after public concerns, including about temperatures, were voiced.
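    Ren’s comparison can be made concrete with a back-of-the-envelope calculation. A minimal sketch, assuming a typical 1.5 kW rating per bedroom space heater (an illustrative figure, not one given by Ren or NetworkOcean):

```python
# Rough heat-output estimate implied by the "300 bedroom space heaters" comparison.
# The per-heater wattage is an assumed typical rating, used only for illustration.
SPACE_HEATER_KW = 1.5   # assumed typical bedroom space heater output, in kilowatts
NUM_HEATERS = 300       # Ren's comparison

implied_heat_kw = SPACE_HEATER_KW * NUM_HEATERS
print(f"Implied heat output: ~{implied_heat_kw:.0f} kW")
```

    Nearly all of the electricity a data center draws ends up as heat in its surroundings, so under this assumption the bay would absorb on the order of 450 kW of heat at full utilization.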

    The San Francisco Bay is on average around a dozen feet deep; salty Pacific Ocean water flowing in under the Golden Gate Bridge mixes with fresh runoff from a huge swath of Northern California. Experts say it isn’t clear whether any location in the expanse, with its muddy, shallow, salty, and turbulent stretches, would be suitable for more than a tiny demonstration.

    Further, securing permits could require proving to at least nine regulatory bodies and several critical nonprofits that a data center would be worthwhile, according to spokespeople for the agencies and five experts in the bay’s politics. For instance, under the law administered by the Conservation and Development Commission, a project’s public benefit must “clearly exceed” the detriment, and developers must show there’s no suitable location on land.

    Other agencies consider waste emissions and harm to the region’s handful of endangered fish and birds (including the infamous delta smelt). Even a temporary project requires signoff from the US Army Corps of Engineers, which reviews obstruction to ship and boat traffic, and the water board. “For example, temporarily placing a large structure in an eelgrass bed could have lingering effects on the eelgrass, which is a critical habitat for certain fish,” the water board’s Lichten says.

    NetworkOcean’s Kim tells WIRED that the company is cognizant of the concerns and is avoiding sensitive habitats. His cofounder Mendel says that they did contact one of the region’s regulators. In March, NetworkOcean spoke to an unspecified US Coast Guard representative about testing at the bottom of the bay and pumping in seawater as a coolant. The company later shifted to the current near-surface plans that don’t involve pumping. (A Coast Guard spokesperson declined to comment without more clarity on whom NetworkOcean allegedly contacted.)

    For permanent installations, Kim and Mendel say they are eyeing other US and overseas locations, which they declined to name, and that they are engaging with the relevant regulators.

    Mendel insists the “SF Bay” test announced last month will move forward—and soon. “We’re still building the vessel,” he says. A community of marine scientists will be keeping their thermometers close.

  • Live music is a major carbon sinner — but it could be a catalyst for change

    Live music is a major carbon emitter — by changing its practices, it can galvanize change elsewhere.Credit: Simon Chapman/LNP/Shutterstock

    On 25 August, the band Massive Attack performed to around 34,000 fans as part of an all-day live music festival in Bristol, UK. Nothing unusual in that — in many parts of the world, summer calendars are packed with such events. But this festival, Act 1.5, aspired to be something different. Billed as a “climate action accelerator”, it was the culmination of a five-year collaboration between Massive Attack and scientists at the Tyndall Centre for Climate Change Research at the University of Manchester, UK, to decarbonize the live music industry.

    Such efforts are much needed. Live performances are an increasingly important source of revenue for artists, and audiences love them, too: the multinational company Live Nation Entertainment reports that more than 145 million fans attended its over 50,000 events worldwide in 2023, a record. For each event, temporary sets must be constructed, venues supplied with energy, and performers, equipment and audiences transported, often over large distances.

    US singer Taylor Swift’s ongoing Eras Tour alone consists of 152 shows across 5 continents in 21 months. In 2010, researchers used figures from 2007 to estimate that the UK music industry produced some 540,000 tonnes of greenhouse-gas emissions annually, around 0.1% of the country’s total energy-related carbon dioxide emissions. Live music accounted for 74% of that (C. Bottrill et al. Environ. Res. Lett. 5, 014019; 2010). Those figures are likely to have risen.
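    The live-music share follows directly from those figures. A minimal sketch of the arithmetic behind the Bottrill et al. estimate:

```python
# Arithmetic behind the 2010 UK estimate
# (C. Bottrill et al. Environ. Res. Lett. 5, 014019; 2010).
TOTAL_TONNES = 540_000   # annual UK music-industry greenhouse-gas emissions, from 2007 data
LIVE_SHARE = 0.74        # fraction attributed to live music

live_tonnes = TOTAL_TONNES * LIVE_SHARE
print(f"Live music: ~{live_tonnes:,.0f} tonnes of greenhouse-gas emissions per year")
# roughly 400,000 tonnes per year
```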

    Many in the music industry are riffing ever more loudly on sustainability — in no small measure because of pressure from their fan bases. Kpop4planet, a campaign group run by fans of South Korean K-pop music, successfully petitioned the South Korean car maker Hyundai — for which the members of the K-pop supergroup BTS act as brand ambassadors — to scrap a coal-plant-powered supply deal in Indonesia. Radiohead, Coldplay, Billie Eilish and The 1975 are just some of the high-profile Western stars who have declared initiatives to make live events more sustainable.

    These initiatives have not always met with the approval of climate campaigners. When Coldplay was criticized for a biofuel partnership with a Finnish oil company in 2022, for example, the band described its efforts on sustainability as a “work in progress”. One common criticism, says Kyle Devine at the University of Oslo, who researches the environmental impact of the music industry, is that bands’ messaging often focuses on the actions of individual fans — for example, encouraging audience members to travel more sustainably, reduce their plastic use by carrying refillable bottles, or eat more plant-based food. When it comes to high-energy aspects of touring, such as stage power requirements and artists’ travel, the preferred solution is often offsetting, rather than reducing, emissions.

    Massive Attack’s collaboration with Carly McLachlan and her colleagues at the Tyndall Centre started from the premise that low-carbon practices should be the backing track to all aspects of staging live music. The resulting roadmap, published in 2021 (see go.nature.com/3xdyq5j), set out emissions-reduction targets for the UK live music industry in line with the 2015 Paris climate agreement. Its recommendations focus on curbing emissions in energy use, audience and artists’ travel, and sundry areas such as food and drink supply — all principles applied to the Act 1.5 festival, as a Careers Feature in Nature details. At the end of this year, the researchers will report on how sustainable and replicable such events are in practice for both organizers and attendees.

    Other initiatives also signal a shift in how the industry thinks about sustainability, says Devine. This year, the organizers of 36 arts festivals from across 8 Caribbean and Latin American countries joined together through the Cultura Circular Programme to discuss reducing the events’ environmental impact. The Climate Machine research group, part of the Massachusetts Institute of Technology’s Environmental Solutions Initiative, has undertaken a project co-funded by the Warner Music Group, Live Nation and Coldplay to analyse the carbon footprint of the live music industry, initially in the United Kingdom and United States, and suggest practical mitigating measures. That should be published in late 2025, says the group’s co-leader Norhan Bayomi, an environmental scientist and DJ.

    Some real change has already been achieved. In 2023, for the first time, the long-running Glastonbury Festival in the United Kingdom was powered solely by fossil-fuel-free grid electricity combined with solar photovoltaic and battery hybrid systems. In June 2024, Coldplay announced that direct emissions from its current world tour were almost 60% lower than those of its 2016–17 stadium tour, with 18 shows powered entirely by portable battery systems and 72% of all waste diverted from landfill.

    Historically, music has played a key part in social movements. The industry now has the chance to be a role model for real change — and audiences are receptive. In a survey of 350,000 live music fans across the United States, 72% said climate change is an important issue and 70% did not oppose artists speaking out about it (see go.nature.com/474sh69). And a 2022 report by researchers at the University of Glasgow, UK, found that music fans are more likely to care about climate change than are non-music fans (see go.nature.com/3x9drao).

    Some might say that live music is, by its nature, unsustainable, and that the best solution would be for performers to stop touring altogether. But that is a joyless answer — and there is an alternative. Richard Betts, a meteorologist and climate scientist at the University of Exeter, UK, thinks that change will come only when it is driven by those highest up in the music industry and backed by good science. Now is the time to be vocal.

  • AI Cracks the Chemistry Code to Better, Longer-lasting Solar Panels

    By integrating AI with automated synthesis, researchers at the University of Illinois significantly enhanced the stability of solar energy molecules, shedding light on the chemical factors influencing photostability. Credit: SciTechDaily.com

    Researchers have leveraged artificial intelligence to enhance the photostability of molecules for solar energy applications, achieving molecules four times more stable than previous ones.

    Their novel approach involved AI-driven closed-loop experimentation and automated chemical synthesis to uncover the underlying chemical principles of stability, offering fresh insights into molecular design for organic solar cells.

    Artificial intelligence is a powerful tool for researchers, but it has a significant limitation: the inability to explain how it came to its decisions, a problem known as the “AI black box.” By combining AI with automated chemical synthesis and experimental validation, an interdisciplinary team of researchers at the University of Illinois Urbana-Champaign has opened up the black box to find the chemical principles that AI relied on to improve molecules for harvesting solar energy.

    Advancements in Light-Harvesting Molecule Stability

    The result produced light-harvesting molecules four times more stable than the starting point, as well as crucial new insights into what makes them stable — a chemical question that has stymied materials development.

    The interdisciplinary team of researchers was co-led by U. of I. chemistry professor Martin Burke, chemical and biomolecular engineering professor Ying Diao, chemistry professor Nicholas Jackson and materials science and engineering professor Charles Schroeder, in collaboration with University of Toronto chemistry professor Alán Aspuru-Guzik. They published their results on August 28 in the journal Nature.

    “New AI tools have incredible power. But if you try to open the hood and understand what they’re doing, you’re usually left with nothing of use,” Jackson said. “For chemistry, this can be very frustrating. AI can help us optimize a molecule, but it can’t tell us why that’s the optimum — what are the important properties, structures and functions? Through our process, we identified what gives these molecules greater photostability. We turned the AI black box into a transparent glass globe.”

    Illinois researchers have opened up the AI “black box” to gain valuable new insight about chemistry for solar energy applications. Pictured, from left: Professor Charles Schroeder, Changhyun Hwang, Seungjoo Yi, professor Ying Diao, professor Nick Jackson, Tiara Charis, and Torres Flores. Credit: Michelle Hassel

    Solving Photostability With Closed-Loop Experimentation

    The researchers were motivated by the question of how to improve organic solar cells, which are based on thin, flexible materials, as opposed to the rigid, heavy, silicon-based panels that now dot rooftops and fields.

    “What has been hindering commercialization of organic photovoltaics is problems with stability. High-performance materials degrade when exposed to light, which is not what you want in a solar cell,” said Diao. “They can be made and installed in ways not possible with silicon and can convert heat and infrared light to energy as well, but the stability has been a problem since the 1980s.”

    Accelerating Discovery with Modular Chemistry and AI

    The Illinois method, called “closed-loop transfer,” begins with an AI-guided optimization protocol called closed-loop experimentation. The researchers asked the AI to optimize the photostability of light-harvesting molecules, Schroeder said. The AI algorithm provided suggestions about what kinds of chemicals to synthesize and explore in multiple rounds of closed-loop synthesis and experimental characterization. After each round, the new data were incorporated back into the model, which then provided improved suggestions, with each round moving closer to the desired outcome.

    The researchers produced 30 new chemical candidates over five rounds of closed-loop experimentation, thanks to building block-like chemistry and automated synthesis pioneered by Burke’s group. The work was done at the Molecule Maker Lab housed in the Beckman Institute for Advanced Science and Technology at the U. of I.

    “The modular chemistry approach beautifully complements the closed-loop experiment. The AI algorithm requests new data with maximized learning potential, and the automated molecule synthesis platform can generate the new required compounds very quickly. Those compounds are then tested, the data goes back into the model, and the model gets smarter — again and again,” said Burke, who also is a professor in the Carle Illinois College of Medicine. “Until now, we’ve been largely focused on structure. Our automated modular synthesis now has graduated to the realm of exploring function.”

    Unveiling the Secrets of Molecular Stability

    Instead of simply ending the query with the final products singled out by the AI, as in a typical AI-led campaign, the closed-loop transfer process further sought to uncover the hidden rules that made the new molecules more stable.

    As the closed-loop experiment ran, another set of algorithms was continuously looking at the molecules made, developing models of chemical features predictive of stability in light, Jackson said. Once the experiment concluded, the models provided new lab-testable hypotheses.

    “We’re using AI to generate hypotheses that we can validate to then spark new human-driven campaigns of discovery,” Jackson said. “Now that we have some physical descriptors of what makes molecules photostable, that makes the screening process for new chemical candidates dramatically simpler than blindly searching around chemical space.”

    To test their hypothesis about photostability, the researchers investigated three structurally different light-harvesting molecules with the chemical property they identified — a particular high-energy region — and confirmed that choosing the proper solvents made the molecules up to four times more light-stable.

    “This is a proof of principle for what can be done. We’re confident we can address other material systems, and the possibilities are only limited by our imagination. Eventually, we envision an interface where researchers can input a chemical function they want and the AI will generate hypotheses to test,” Schroeder said. “This work could only happen with a multidisciplinary team, and the people, resources, and facilities we have at Illinois, and our collaborator in Toronto. Five groups came together to generate new scientific insight that would not have been possible with any one of the sub-teams working in isolation.”

    Reference: “Closed-loop transfer enables AI to yield chemical knowledge”, Nature (28 August 2024). DOI: 10.1038/s41586-024-07892-1

    This work was supported by the Molecule Maker Lab Institute, an AI Research Institutes program supported by the U.S. National Science Foundation under grant no. 2019897.

  • Is ultra cheap green hydrogen on the horizon?

    Hydrogen produced by splitting water with renewable energy is too expensive to take off, but a start-up hopes to bring down the cost with new electrolysers

  • Inside China’s race to lead the world in nuclear fusion

    Hefei, China

    On a cold February morning in Hefei, the snow-blanketed grounds of the Chinese Academy of Science’s Institute of Plasma Physics (ASIPP) are unusually quiet. China’s New Year is approaching, and most people in the city are preparing for days of dragon-themed celebrations. But inside the institute, researchers are still hard at work. In a vast control room under a ceiling studded with red neon-lit stars, plasma physicist Xianzu Gong is taming a different kind of fiery beast.

    Gong’s dragon is a fusion research reactor: the Experimental Advanced Superconducting Tokamak (EAST). Tokamaks are doughnut-shaped machines that generate the same nuclear reactions that power the stars. They use magnetic fields to confine heated loops of plasma — a fluid-like state of matter containing ions and electrons — at temperatures hotter than the Sun’s core. The aim is to force atomic nuclei to fuse, releasing energy. This could be harnessed as a source of almost limitless clean power, if the scorching, unstable plasma can be maintained and controlled for long enough — a feat yet to be accomplished.

    Corralling the unruly plasma is gruelling work. Every day, Gong and his colleagues fire up around 100 shots of plasma from early morning until around midnight. By comparison, the Joint European Torus (JET) in Culham, UK, which was the world’s largest fusion-research facility before it closed last year, achieved 20–30 shots each day. “Almost no weekends, no holidays for us,” says Gong, who heads EAST’s physics and experimental operations.

    Xianzu Gong (right), with Yuntao Song, ASIPP’s director-general.Credit: Huang Bohan/IMAGO via Alamy

    Although only a stepping stone to anticipated fusion power plants, EAST is one of the facilities that’s putting China on the map in the global race for nuclear fusion.

    The world’s most well-known fusion experiment is the US$22-billion International Thermonuclear Experimental Reactor (ITER), a giant tokamak being constructed in southern France, to which China is contributing. And in recent years, ambitious firms in the United States and elsewhere have raised billions of dollars to build their own reactors, which they say will demonstrate practical fusion power before state-led programmes do.

    At the same time, China is fast pouring resources into its fusion efforts. The Chinese government’s current five-year plan makes comprehensive research facilities for crucial fusion projects a major priority for the country’s national science and technology infrastructure. As a rough estimate, China could now be spending $1.5 billion each year on fusion — almost double what the US government allocated this year for this research, says Jean Paul Allain, associate director of the US Department of Energy’s Office of Fusion Energy Sciences in Washington DC. “Even more important than the total value is the speed at which they’re doing it,” says Allain.

    “China has built itself up from being a non-player 25 years ago to having world-class capabilities,” says Dennis Whyte, a nuclear scientist at the Massachusetts Institute of Technology (MIT) in Cambridge.

    Although no one yet knows whether fusion power plants are possible, Chinese scientists have ambitious timelines. In the 2030s, before ITER will have begun its main experiments, the country aims to build the China Fusion Engineering Test Reactor (CFETR), with the goal of producing up to 1 gigawatt of fusion power. If China’s plans work out, a prototype fusion power plant could follow in the next few decades, according to a 2022 road map (J. Zheng et al. The Innovation 3, 100269; 2022).

    “China is taking a strategic approach to invest in and develop its fusion energy programme, with a view of long-term leadership in the global field,” says Yasmin Andrew, a plasma physicist at Imperial College London.

    Building artificial suns

    Scientists have been trying to make fusion reactors work since the 1950s. The idea is to merge two hydrogen nuclei — which are positively charged and therefore repel each other — into a larger helium one. In the Sun, gravity generates enough pressure to do this; on Earth, high temperatures and strong magnetic fields are necessary. So far, however, researchers haven’t been able to keep fusion reactions running long enough to produce more energy than it takes to spark them.

    In late 2022, researchers at the US National Ignition Facility (NIF) in Livermore, California, announced a breakthrough when they briefly produced more fusion energy than they delivered to their fuel. Using an alternative design to a tokamak, NIF fired 192 laser beams at a tiny pellet of the hydrogen isotopes deuterium and tritium, causing them to fuse. However, much more energy went into operating the lasers than was delivered to the target. Many researchers say the most practical approach to fusion energy will entail using a tokamak to confine a long-lived ‘burning plasma’, one in which the fusion reactions provide the heat needed to sustain it. One of ITER’s targets, seen as a general prerequisite for viable fusion plants, is to create a burning plasma that produces ten times the power that went into it.

    The giant ITER fusion reactor, under construction in France.Credit: Nicolas Tucat/AFP via Getty

    If scientists can do this, fusion could offer a safer, cleaner alternative to conventional nuclear-fission power plants that split heavy uranium nuclei, producing radioactive waste that can remain dangerous for thousands of years. Fusion reactors would produce only short-lived waste. Another safety feature is that fusion reactions simply stop if the plasma falls below a certain temperature or density. And the process is expected to be more efficient than fission; the International Atomic Energy Agency says that fusion could generate four times more energy than does fission, per kilogram of fuel.

    It’s a particularly tantalizing prospect for China where, between 2020 and 2022, several regions experienced massive power outages owing to skyrocketing demand for electricity during frigid winters. Despite rapid progress in renewable energy, the country still generates more than half of its electricity from coal and remains the biggest contributor to global carbon emissions. And although China is aiming to achieve peak emissions by 2030 and carbon neutrality by 2060, its energy requirements are set to double over the next three decades. “We need innovations that reduce carbon — that’s our dream. Nuclear fusion energy can do this,” says plasma physicist Yuntao Song, ASIPP’s director-general.

    China’s vision

    In EAST’s control room, Gong prepares to fire another pulse of plasma with a click of his mouse. The plasma itself lies behind the control room’s wall of monitors, confined in a vacuum chamber that has the Chinese flag mounted on its roof. “Every shot could be in support for the future of fusion energy,” Gong says.

    China’s involvement in fusion began with building several small and medium-sized tokamaks using components from devices in Russia and Germany. In 2003, it joined the international ITER experiment, alongside the European Union, India, Japan, Korea, Russia and the United States.

    In 2006, China opened EAST, which has since racked up world records for sustaining plasma lasting minutes, instead of seconds. EAST’s knack for creating long-lived plasmas has made it an experimental workhorse for ITER, particularly for quickly cross-checking results, says Alberto Loarte, who heads ITER’s science division. “The research in China is extremely dynamic,” he says.

    Loarte cites how, this January, he and his colleagues spent a week running experiments at EAST to verify that lining a reactor’s plasma-facing walls with tungsten can achieve a tightly confined plasma, even if the walls aren’t also coated with a boron layer to keep out impurities. (These findings will help ITER, where researchers decided in October 2023 to switch the wall lining to tungsten instead of beryllium.) In many countries, such an effort would have taken months to organize, says Loarte. But in China, plans often come together in weeks because many research groups don’t require formal proposals or lengthy discussions to get to work.

    ITER originally aimed to start experiments in 2020 but has been plagued by delays. In July, researchers announced that it will push back its major experiments to 2039. Most ITER countries are developing their domestic fusion capabilities in parallel, but few are doing so as intensively as China, says Jeronimo Garcia Olaya, a fusion scientist at the French Alternative Energies and Atomic Energy Commission in Paris. “They are building a very ambitious programme,” says Olaya, who co-leads experiments at JT-60SA in Naka, Japan, currently the world’s largest tokamak in operation.

    Among China’s other research fusion reactors, besides EAST, is its HL-3 tokamak, opened in 2020 at the Southwestern Institute of Physics in Chengdu. Experiments at China’s facilities will feed into the next-generation CFETR, although construction still needs approval from the government. An official at ASIPP who didn’t want to be named couldn’t give a timeline for this, but says that the government is factoring ITER’s timeline into its decision. The CFETR, which will be slightly bigger than ITER, aims to bridge the gap between ITER — a purely experimental device — and demonstration plants that would generate electricity.

    A researcher in EAST’s vacuum chamber.Credit: Institute of Plasma Physics at Hefei Institutes of Physical Science, Chinese Academy of Sciences

    CFETR first aims to generate between 100 and 200 megawatts of net power: producing more power than went into heating the plasma, but not enough to cover the electricity used to operate the facility. By the 2040s, its goal is to deliver more than ten times as much heat as is directly put into the plasma, the milestone for viable fusion, and also to produce up to a gigawatt of net power. If this could be achieved, demonstration power plants would then produce grid electricity.

    CFETR’s engineering design report, released in 2022, places the facility ahead of several demonstration power plants, including the European Union’s and Japan’s proposed DEMO reactors — expected to begin their engineering designs in 2029 and 2025, respectively.

    China’s strength in fusion research lies not so much in stand-out engineering innovations, says Allain, as in its speed and focus on developing the materials, components and diagnostics systems needed to build reactors.

    To develop CFETR, ASIPP has started building a sprawling 40-hectare workshop (about the size of 60 football fields) a short drive from EAST. Scheduled for completion next year, the Comprehensive Research Facility for Fusion Technology (CRAFT) is a massive hub where researchers will develop and manufacture materials, components and prototypes for CFETR and subsequent fusion power plants.

    An aerial shot of China’s Comprehensive Research Facility for Fusion Technology (CRAFT) in Hefei.Credit: Zheng Xianlie/Xinhua via Alamy

    In the United States, a similar facility to develop key fusion technologies has been flagged as a priority for years, but plans have failed to materialize owing to limited funding and other issues, says Whyte. “It has been frustrating,” he says. “There are positive signs of change, but we lost our lead.”

    China’s focus on building a fusion workforce has also given the country an edge in personnel, says Hongjuan Sun, a plasma physicist at the UK Atomic Energy Authority in Abingdon. “They really put a lot of effort in training the next generation,” says Sun, who worked on JET. Allain estimates that China has thousands of PhD students in fusion, compared with mere hundreds in the United States.

    Commercial efforts

    Although China’s programme is ramping up fast, start-up firms around the world make much bolder claims about the pace with which they can commercialize fusion energy.

    For example, Commonwealth Fusion Systems (CFS), a spin-off from MIT, promises that its tokamak, called SPARC, will be the first to churn out more fusion energy than the heat that the plasma consumes. The firm, which is based in Devens, Massachusetts, and is working with MIT researchers, says SPARC will produce its first plasma by the end of 2026. The effort relies on advances in high-temperature superconducting materials, which should allow the tokamak to be much smaller and quicker to build than ITER and other giant facilities. CFS says it will have plants supplying electricity grids by the early 2030s. Other firms are making similarly bullish statements about various designs for pilot fusion plants.

    A design rendering of the compact SPARC tokamak, being built in Devens, Massachusetts.Credit: CFS/MIT-PSFC — CAD Rendering by T. Henderson

    Globally, more than 40 companies are working to commercialize fusion, and together have received investments of $7.1 billion, says the US-based Fusion Industry Association (FIA).

    But China’s industrial efforts are burgeoning, too. The country’s fusion start-ups have attracted more than $500 million in investment in just a few years, says Andrew Holland, chief executive of the FIA. That places China second only to the United States, which has poured more than $5 billion into fusion companies. “The private fusion effort in China is significant,” he says.

    In January, the Chinese government launched a national consortium called China Fusion Energy. Led by the China National Nuclear Corporation, it brings together 25 government-owned companies, four universities and a private firm with the goal of pooling resources to accelerate China’s fusion effort.

    Among industrial heavyweights in fusion research is the ENN Group, one of China’s biggest private energy conglomerates. According to the FIA, the company has invested more than $200 million in its fusion energy programme. An ENN road map envisages building a ‘commercial demonstration’ reactor by 2035.

    A handful of dedicated fusion companies have sprouted up in China over the past three years. Among them is Energy Singularity, a Shanghai-based start-up founded in 2021 and the country’s first dedicated fusion power firm. Much like SPARC, Energy Singularity aims to build smaller, less expensive tokamaks by taking advantage of the latest materials for magnets; it has so far attracted around $110 million in funding, says co-founder Zhao Yang. In June, the firm’s HH70 tokamak achieved its first plasma using high-temperature superconducting magnets — a world first, Yang says.

    The HH70 tokamak from Energy Singularity, China’s first dedicated fusion-power company.Credit: Energy Singularity

    Energy Singularity is planning a next-generation device, HH170, which aims to produce ten times more energy than the heat needed to fuel the plasma. Matching the optimism of the US firms, Yang estimates that the small tokamak will take only three to four years to build, instead of decades.

    One of the big questions in fusion surrounds the availability of fuel. For tokamaks, a mixture of deuterium and tritium (D–T) isotopes is considered one of the most efficient fuels. But tritium occurs in minuscule traces in nature, so will need to be produced in fusion facilities, through a reaction between the neutrons produced during fusion reactions and a blanket of lithium in the tokamak wall. Whether such ‘tritium breeding’ can actually work is unclear.

    ITER is one of the largest research efforts that will explore this question. But China has speedier plans: its Burning Plasma Experimental Tokamak (BEST), built next to CRAFT and due to be completed in 2027, will also run D–T experiments and explore whether tritium can be bred, says ASIPP director Song.

    It’s all part of a long-term push to develop what many see as a key solution to the world’s energy problems. Back at EAST, in contrast to the bullish claims of private firms, Gong sees the race for fusion energy more as a marathon than a sprint. He has thousands of plasma shots ahead of him. “There’s still a lot of work we need to do,” he says.


  • The US Grid Is Adding Batteries at a Much Faster Rate Than Natural Gas

    The US Grid Is Adding Batteries at a Much Faster Rate Than Natural Gas

    While solar power is growing at an extremely rapid clip, in absolute terms the use of natural gas for electricity production has continued to outpace renewables. But that looks set to change in 2024, as the US Energy Information Administration (EIA) has run the numbers on the first half of the year and found that wind, solar, and batteries were each installed at a pace that dwarfs that of new natural gas generators. And the gap is expected to get dramatically larger before the year is over.

    Solar, Batteries Booming

    According to the EIA’s numbers, about 20 GW of new capacity was added in the first half of this year, and solar accounted for 60 percent of it. Over a third of the solar additions occurred in just two states, Texas and Florida. Two projects rated at over 600 MW of capacity each went live, one in Texas and the other in Nevada.

    Next up is batteries: The US saw 4.2 additional gigawatts of battery capacity during this period, or over 20 percent of the total new capacity. (Batteries are treated as the equivalent of a generating source by the EIA since they can dispatch electricity to the grid on demand, even if they can’t do so continuously.) Texas and California alone accounted for over 60 percent of these additions; throw in Arizona and Nevada, and you’re at 93 percent of the installed capacity.
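The shares quoted above are simple arithmetic on the EIA totals; a minimal sketch, using the rounded figures as reported:

```python
# Reported EIA totals for new US grid capacity, first half of 2024.
total_gw = 20.0        # all new capacity
solar_share = 0.60     # solar's reported share of additions
battery_gw = 4.2       # new battery capacity

solar_gw = solar_share * total_gw        # 12 GW of new solar
battery_share = battery_gw / total_gw    # 0.21 -> "over 20 percent"
print(f"Solar: {solar_gw:.0f} GW; batteries: {battery_share:.0%} of additions")
```

With the rounded 20 GW total, batteries come out at 21 percent, consistent with the article’s “over 20 percent.”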

    The clear pattern here is that batteries are going where the solar is, allowing the power generated during the peak of the day to be used to meet demand after the sun sets. This will help existing solar plants avoid curtailing power production during the lower-demand periods in the spring and fall. In turn, this will improve the economic case for installing additional solar in states where its production can already regularly exceed demand.

    Wind power, by contrast, is running at a more sedate pace, with only 2.5 GW of new capacity during the first six months of 2024. And for likely the last time this decade, additional nuclear power was placed on the grid, with the fourth 1.1 GW reactor (and second recent build) at the Vogtle site in Georgia. The only other additions came from natural gas-powered facilities, but these totaled just 400 MW, or 2 percent of the total new capacity.

    The EIA has also projected capacity additions out to the end of 2024 based on what’s in the works, and the overall shape of things doesn’t change much. However, the pace of installation goes up as developers rush to get their projects operational within the current tax year. The EIA expects a bit over 60 GW of new capacity to be installed by the end of the year, with 37 GW of that coming in the form of solar power. Battery growth continues at a torrid pace, with 15 GW expected, or roughly a quarter of the total capacity additions for the year.

    Wind will account for 7.1 GW of new capacity, and natural gas 2.6 GW. Throw in the contribution from nuclear, and 96 percent of the capacity additions of 2024 are expected to operate without any carbon emissions. Even if you choose to ignore the battery additions, the fraction of carbon-emitting capacity added remains extremely small, at only 6 percent.
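Those year-end percentages check out against the EIA figures quoted above; a quick sketch (capacities in GW, as reported; small rounding differences explain the article’s “only 6 percent”):

```python
# Projected 2024 US capacity additions (GW), from the EIA figures cited above.
additions = {
    "solar": 37.0,
    "batteries": 15.0,
    "wind": 7.1,
    "natural_gas": 2.6,
    "nuclear": 1.1,
}
total = sum(additions.values())  # ~62.8 GW -- "a bit over 60 GW"

# Everything except natural gas operates without carbon emissions.
carbon_free = 1 - additions["natural_gas"] / total
print(f"Carbon-free share: {carbon_free:.0%}")  # rounds to 96%

# Ignoring batteries, gas remains a small slice (~5-6%, depending on rounding).
gas_share_ex_batteries = additions["natural_gas"] / (total - additions["batteries"])
print(f"Gas share excluding batteries: {gas_share_ex_batteries:.1%}")
```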

    Gradual Shifts on the Grid

    Obviously, these numbers represent the peak production of these sources. Over a year, solar produces at about 25 percent of its rated capacity in the US, and wind at about 35 percent. The former number will likely decrease over time as solar becomes inexpensive enough to make economic sense in places that don’t receive as much sunshine. By contrast, wind’s capacity factor may increase as more offshore wind farms get completed. For natural gas, many of the newer plants are being designed to operate flexibly, so that they can provide power when renewables are under-producing.
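Those capacity factors translate nameplate capacity into expected annual energy. As an illustrative sketch (the 37 GW and 7.1 GW figures are the EIA projections above; the capacity factors are the US averages just cited):

```python
HOURS_PER_YEAR = 8760

def annual_energy_twh(capacity_gw: float, capacity_factor: float) -> float:
    """Expected annual generation (TWh) from nameplate capacity (GW)."""
    return capacity_gw * capacity_factor * HOURS_PER_YEAR / 1000

solar = annual_energy_twh(37.0, 0.25)   # 2024's new solar at ~25% capacity factor
wind = annual_energy_twh(7.1, 0.35)     # 2024's new wind at ~35% capacity factor
print(f"New solar: ~{solar:.0f} TWh/yr, new wind: ~{wind:.0f} TWh/yr")
```

So despite wind’s higher capacity factor, 2024’s solar additions should generate several times more electricity per year than its wind additions.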

    A clearer sense of what’s happening comes from looking at the generating sources that are being retired. The US saw 5.1 GW of capacity drop off the grid in the first half of 2024, and aside from 0.2 GW of “other,” all of it was fossil fuel-powered, including 2.1 GW of coal capacity and 2.7 GW of natural gas. The latter includes a large 1.4 GW natural gas plant in Massachusetts.


  • The cool technologies that could protect cities from dangerous heat

    The cool technologies that could protect cities from dangerous heat

    It’s time to brace for record-breaking heat. Last year was the hottest on record and 2024 is shaping up to be even more extreme, with the mercury soaring close to 50 °C on days in Nevada, Egypt and Australia. June marked the 13th month in a row of chart-topping temperatures globally. And four consecutive days in July were the hottest in recorded history for the entire planet.

    Scorching temperatures spur water shortages, damage crops, strain electricity grids and trigger heat stress and mass mortality — killing close to 500,000 people each year, according to one estimate1. So scientists are working hard to develop innovative ways to cool cities and slash electricity use in the warming world. Advances range from high-efficiency air conditioners to special materials that keep surfaces colder than their surroundings without using electricity.

    Researching refrigeration

    In most air conditioners and refrigerators, a fluid is compressed to transfer heat from inside the room or appliance to outside. But this process emits greenhouse gases and guzzles energy. Globally, air conditioners and electric fans consume about 20% of the electricity used in buildings, according to the International Energy Agency. And the agency predicts that the amount of energy required for air conditioning around the globe will surge threefold by 2050.

    With that in mind, many researchers are working to reduce the amount of energy that air conditioners consume. One potential solution emerged last year, when a team of researchers developed a technology that might make the appliances work much more efficiently2. And it has the added benefit of not relying on environmentally damaging liquid coolants.

    Emmanuel Defay, a researcher at the Luxembourg Institute of Science and Technology in Belvaux, and his colleagues built a device that relies on ‘electrocaloric’ cooling. In this process, an electric field is applied to change the position of atoms in an insulating ceramic. Because the field constrains the atoms’ movements, their vibrations increase and are converted into heat, raising the temperature of the material. Fluid carries that heat away to the outside. Once the heat has been removed, the field is shut off and the atoms in the ceramic can move more freely. That causes their vibrations to decrease and the ceramic’s temperature drops, a change that can be used for cooling purposes.

    The device was designed in collaboration with the Japanese manufacturing company Murata in Nagaokakyo, which already produces these kinds of ceramics for mobile phones, computers and other hardware. That will help to make the technology scalable, says Defay. But he warns that getting it into products might take time. He hopes that he and his team can work on the first niche cases, such as cooling down batteries in electric cars, within five years. Then, perhaps, they can tackle air conditioning in the next decade.

    Game-changing materials

    Other technologies — known as supercool materials — might be able to lower temperatures below ambient conditions without using any power.

    All materials reflect some portion of the sunlight that hits them, and all emit energy as heat. But supercool materials do both extremely well — reflecting most of their incident solar radiation and emitting a lot of their thermal radiation. That makes them cooler than their surrounding temperature.

    “These materials are potentially a game-changer,” says David Sailor, director of the School of Geographical Sciences and Urban Planning at Arizona State University in Tempe, who doesn’t develop these technologies but studies how they could be used in urban environments. Not only can they help to cool a building — thus reducing the demand for air conditioning — but they can also cool the outdoor air. “If a surface is always cooler than the air, then that surface is always taking heat out of the air as the air flows over it,” Sailor says. “So it’s actively cooling the urban atmosphere.”

    The first supercool material was designed in 2014 when Aaswath Raman, a materials scientist now at the University of California, Los Angeles, was conducting research at Stanford University, also in California. He and his colleagues created a cooling surface that was highly reflective at the visible wavelengths where the Sun’s radiation peaks, and emissive in the mid-infrared3. The latter was key. The atmosphere traps most of the infrared radiation emitted as heat by objects on Earth’s surface. But a specific band of infrared, with wavelengths of 8–13 micrometres, passes straight through the atmosphere and disappears into space. Supercool materials exploit that infrared window.

    Mounted on a roof, Raman’s technology — which was made of seven alternating layers of silicon dioxide and hafnium dioxide — stayed 5 °C cooler than the ambient air temperature.
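The 8–13-micrometre window matters because surfaces at everyday temperatures emit most strongly in exactly that band. A quick black-body check using Wien’s displacement law (a standard physics relation, not taken from the article):

```python
WIEN_B_UM_K = 2898.0  # Wien's displacement constant, in micrometre-kelvins

def peak_wavelength_um(temp_k: float) -> float:
    """Wavelength (um) of peak black-body thermal emission at temp_k kelvin."""
    return WIEN_B_UM_K / temp_k

# Surfaces from roughly 7 C to 47 C all peak inside the 8-13 um window.
for t in (280, 300, 320):
    print(f"{t} K -> peak emission at {peak_wavelength_um(t):.1f} um")
```

A 300 K surface peaks near 9.7 µm, squarely inside the transparency window, which is why a sky-facing emitter tuned to that band can dump heat straight to space.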

    Researchers applied special paint to a car park (left; pale area) in Arizona. An infrared image shows that area was cooler than its surroundings.Credit: Edwin Ramirez/School of Geographical Sciences and Urban Planning, Arizona State University

    Since then, the field has exploded. In the laboratory, supercool materials have been built in the form of plastics, metals, paints and even wood.

    And scientists are continuing to push them further. In July, researchers at Sichuan University in Chengdu, China, reported that they had freeze-dried a solution of commercially obtained salmon sperm DNA and gelatin to create a supercool aerogel4. When placed outside, the aerogel cooled surfaces to as much as 16 °C below the ambient air temperature.

    Xianhu Liu, a materials scientist at Zhengzhou University in China, who was not involved in the study, is particularly excited that the material uses less energy and reduces pollutants compared with other supercool materials that use additives, such as metal oxide nanoparticles. The fact that this work used biomass materials also means it’s degradable. “It’s a rare combination of sustainability and energy efficiency,” Liu says.

    But Raman doubts the accuracy of the study’s results. The team measured an air temperature 20–30 °C higher than that reported by weather stations for that day and location. “It seems likely that their temperature sensor was exposed to the Sun and heated up,” Raman says.

    However, Jian-Wen Ma, the study’s lead researcher, says that he and his team measured the air temperature inside a closed cooling test box, not the external atmospheric temperature4. “There is ongoing debate about cooling test methods,” he says. “The characterization of radiative cooling remains complex and imperfect, and we hope for a unified standard in the future.”

    Cool surfaces

    In his own research, Raman has moved from supercool materials to a category called cool materials, which are typically engineered to reflect most of their incident solar radiation, but not necessarily tailored to emit most of their thermal radiation.

    “You don’t need to do this fancy thing in the infrared, you just need to make it really solar reflective,” he says. That’s because most materials will still emit heat across the entire infrared spectrum and will reach temperatures slightly below those of ambient air if they are reflective enough, Raman says.

    Raman has also been working on another type of material for vertical surfaces such as building facades. This is a difficult application because walls face both the sky and the ground, so they absorb heat from Earth during the summer and lose heat to it during the winter.

    Raman’s team found a potential solution. In June, the researchers reported a physical mechanism that relies on a special material to cool or warm walls depending on the season5. The coating achieves this by selectively losing heat towards the sky and gaining much less heat from the ground in the summer. And in the winter, it loses less heat to the ground than a conventional wall does. To boot, the team has found that many low-cost materials have this unique property — including the polypropylene bags used for crisps, or potato chips. The discovery could be a boon for places that don’t have air conditioning, and could improve thermal comfort and even human health, he says.

    “I don’t want to advance the perception, especially among architects and engineers, that this is some faraway thing,” Raman says. “I want them to realize there’s a lot of things we can do right away.”

    Other researchers are also working on this problem. A report this month by Yuan Yang, a materials scientist at Columbia University in New York City, and his colleagues describes work in which they applied a supercool paint to a corrugated wall — but only to the sides of the wave-like pattern that face the sky. The team then applied a metal that has a low heat absorption to the side facing the ground, ensuring that side did not take up excess heat6. The surface temperature of the wall stayed 2–3 °C cooler than the ambient air temperature. The team is currently looking for funding to support further development.

    Other technologies are attempting to cool cities from the ground up. Last September, a team at Arizona State University in Tempe, led by Sailor, partnered with a US company that managed a large shopping centre to deploy and test a reflective ‘cool pavement’. The highly reflective coating is simply a lighter-coloured, asphalt-based seal that can be used instead of the conventional dark coating that is applied every five years or so to maintain the surface. As a test run, the lighter seal was applied to almost 6,000 square metres of the centre’s car park, and the surrounding area was coated with a conventional dark seal — allowing Sailor and his team to compare the two.

    The difference was like night and day. In the early afternoon, the surface temperature of the cool pavement was almost 8 °C cooler than the rest of the car park, and the air temperature above it was 0.8 °C cooler, says Sailor, who has not yet published this finding.

    Although the latter figure might not seem a noteworthy change, Sailor argues that if you could cool the entire city of Phoenix in Arizona by 0.6 °C, you would reduce energy consumption and water use and even improve human health outcomes. Sailor estimated that the drop in air conditioning alone would save around US$20 million.

    Shape-shifting materials

    Mohammad Taha, an engineer at the University of Melbourne in Australia, and their team are taking a different approach to cool homes and buildings. Early in 2023, the team described ‘phase-change inks’ consisting of suspended nanoparticles that change phase depending on the temperature, shifting from an insulator at cool temperatures to a metal at hotter temperatures7.

    That trick allows the material to stay cool or warm, depending on the external temperature. In short, when the material heats up and becomes a metal, it adopts a linear structure that can reflect extra heat. When it cools and becomes an insulator, it adopts a zig-zag structure that allows heat in.

    In the future, Taha hopes to apply this ink as a window coating. “If you look at the weakest link in a building in terms of heat loss, it’s the windows,” they say. “A building that’s entirely glass can actually feel like a greenhouse on a hot day.” That could change if Taha and their team can apply this ink to windows effectively in the future. Moreover, the researchers can engineer different coatings depending on the season — layering them to keep a building cool during summer and warm during winter.

    From lab to cities

    It’s not yet clear which of these cooling technologies are likely to have a big impact in future. Many have not left the lab and others have been deployed only in small-scale projects. For that reason, Sailor argues that researchers need to evaluate all new materials carefully before moving forward. “It’s really important that we in the science community — and I take this as one of my responsibilities in evaluating these technologies — focus not just on their strengths, but also on their weaknesses,” he says.

    The cool pavement, for example, has a potential drawback in that it reflects radiation upwards. Someone on that pavement when the Sun is high in the sky will feel the reflected radiation from below in addition to that from above. He suggests that the coating would be less suitable in areas such as playgrounds, where individuals might spend a long time on the surface in the middle of the day. A better option would be to use it on pedestrian crossings, where individuals spend mere seconds, he says.

    There are also questions about how well supercool materials will work in a wide variety of climates. If it is cloudy or humid, for example, such materials might be less effective — that’s because water vapour traps the infrared radiation, stopping it from escaping into space8. Raman, however, argues that supercool materials can still perform well in these climates. Even if they can’t cool the air temperature sufficiently, he says, they won’t heat it up either9.

    Another unknown is whether consumers will embrace the idea. Even the simple measure of replacing old roofs with lighter, reflective ones has not been widely adopted. As part of his work, Sailor routinely takes an infrared camera up in a helicopter over Phoenix and is always amazed by the number of dark rooftops — despite the benefits of cooler, lighter roofs.

    Nonetheless, several cities are testing and deploying various mitigation technologies. Los Angeles, for example, has been adding cool pavements with the goal of increasing cool surfaces there by 30% by 2045.

    Mattheos Santamouris, a physicist at the University of New South Wales in Sydney, Australia, and his team have applied similar materials in more than 300 large-scale mitigation projects across the world. In January, his group detailed a multifaceted strategy to cool Riyadh by up to 4.5 °C10. “This is really the champion of the mitigation projects,” Santamouris says. The recommended approach, which Riyadh has started adopting, includes retrofitting buildings with both cool and supercool materials, and doubling the number of irrigated trees.

    Although the approach includes a suite of solutions, Santamouris’s team calculated that the addition of supercool materials will have the biggest impact. “The market is increasing tremendously,” Santamouris says about these materials. “There are more and more industrial producers around the world. I think that this is the future.”


  • AI analysed 1,500 policies to cut emissions. These ones worked

    AI analysed 1,500 policies to cut emissions. These ones worked

    Taxes were particularly effective at reducing emissions associated with electricity generation in high-income countries.Credit: Andrew Aitchison/In pictures via Getty

    Researchers used machine learning to analyse roughly 1,500 climate policies and identify those that have dramatically reduced carbon emissions. Their study, published in Science today, found that policies that combine several tools are more effective in slashing emissions than are stand-alone measures1.

    The analysis identified 63 interventions in 35 countries that led to significant reductions in emissions, cutting them by 19% on average. Most reductions were linked to two or more policies. Together, the 63 policies cut emissions by between 0.6 and 1.8 gigatonnes (Gt) of CO2 equivalent.

    Using the right mix of policies is more important than using a lot of policies, says Annika Stechemesser, a co-author and researcher at the Potsdam Institute for Climate Impact Research in Germany. For example, the UK’s phasing out of coal-fired power stations worked because it was used in tandem with pricing mechanisms, such as a minimum carbon price, while in Norway, banning combustion engine cars was most effective when combined with a price incentive that made electric cars cheaper.

    “To my knowledge, it is a first-of-its-kind study providing such a global evaluation,” says Jan Minx, an environmental economist with the Mercator Research Institute on Global Commons and Climate Change in Berlin.

    Road to reductions

    As part of the analysis, Stechemesser and her colleagues used a database of 1,500 climate policies implemented between 1998 and 2022 in 41 countries, including the top three greenhouse gas emitters globally: China, the United States and India. The policies fell into 48 categories, ranging from emission trading schemes to fossil-fuel subsidy reforms.

    “Previous evaluations have typically concentrated on a narrow set of prominent policies in selected countries, overlooking the hundreds of other measures,” Stechemesser says.

    The authors combined machine learning with a statistical analytical approach to identify large emission reductions in four high-emitting sectors — buildings, electricity, industry and transport. They compared the results with policies in the database to assess which policies and policy combinations led to the biggest emission drops.

    “This is a rather clever method,” says Zheng Saina, who has analysed global climate policies at Southeast University in Nanjing, China. The conventional way would have been to review the large number of policies and select the important ones, but that approach is subjective and cumbersome, she adds. “The authors instead used machine learning to detect major emissions changes. It is more objective.”

    Right mix

    The results showed that certain policy combinations worked better in specific sectors and economies. In terms of reducing emissions associated with electricity generation, for instance, pricing interventions such as energy taxes were particularly effective in high-income countries, but less so in low- and middle-income countries.

    In the building sector, policy mixes that included phasing out or banning emissions-generating activities more than doubled the reductions that would result from implementing those policies individually.

    Taxation was the only measure that, across all four sectors, achieved emission reductions as a stand-alone policy that were nearly equal to or larger than those it achieved as part of a policy mix.

    Minx says the study’s AI-enhanced approach allowed the researchers, for the first time, to evaluate the effectiveness of a large number of climate policies from a global set of emission inventories covering different countries and sectors.

    For other researchers, the paper is alarming. “This study provides a warning to countries around the world that their climate policies have had very limited effects so far,” says Xu Chi, an ecologist at Nanjing University. “Existing policies will need to be re-evaluated, and changes will need to be made,” Xu adds.

    The world’s annual emissions are projected to be 15 Gt of CO2 equivalents higher by 2030 than would be required to keep global warming to less than 2 °C above pre-industrial levels, according to the United Nations.


  • The Green Economy Is Hungry for Copper—and People Are Stealing, Fighting, and Dying to Feed It

    The Green Economy Is Hungry for Copper—and People Are Stealing, Fighting, and Dying to Feed It

    Moqadi Mokoena had been feeling uneasy all day. When he’d left his home on the outskirts of Johannesburg, South Africa, for his job as a security guard, he’d had to turn around twice, having forgotten first his watch and then his cigarettes. He had reason to be nervous. His supervisor had assigned him to join a squad protecting an electrical substation where, just two days earlier, four other guards had been stripped naked and beaten with pipes by gun-wielding thieves. Now, on this day in May of 2021, Mokoena and a fellow guard were at that substation, peering tensely through their truck’s windshield as a group of armed men approached.

    Mokoena pulled out his phone and called his wife, the mother of their 1-year-old daughter. He told her about the gang coming toward him. “I’m feeling scared,” he said. He didn’t have a gun himself. “I think they are the same ones who attacked our colleagues.”

    “Call your supervisor!” she told him.

    Minutes later, the men opened fire with at least one automatic weapon. Mokoena’s partner jumped out of the vehicle but was cut down by bullets. A third nearby guard dove for cover, shot back at the thieves, then ran for help. When he returned with the supervisor, they found Mokoena and his partner dead. Police later said the criminals made off with about $1,600 worth of copper cable.

    “We face these dangers every day,” the surviving guard later told a local journalist. “You don’t know if you’ll return home when you leave for duty.”

    In most places, power companies are a pretty dull business. But in South Africa they are under literal assault, targeted by heavily armed gangs that have crippled the nation’s energy infrastructure and claimed an ever-growing number of lives. Practically every day, homes across the country are plunged into darkness, train lines shut down, water supplies cut off, and hospitals forced to close, all because thieves are targeting the material that carries electricity: copper.

    The battle cry of energy transition advocates is “Electrify everything.” Meaning: Let’s power cars, heating systems, industrial plants, and every other type of machine with electricity rather than fossil fuels. To do that, we need copper—and lots of it. Second only to silver, a rarer and far more expensive metal, copper is the best natural electrical conductor on Earth. We need it for solar panels, wind turbines, and electric vehicles. (A typical EV contains as much as 175 pounds of copper.) We need it for the giant batteries that will provide power when the sun isn’t shining and the wind isn’t blowing. We need it to massively expand and upgrade the countless miles of power cables that undergird the energy grid in practically every country. In the United States, the capacity of the electric grid will have to grow as much as threefold to meet the expected demand.

    A recent report from S&P Global predicts that the amount of copper we’ll need over the next 25 years will add up to more than the human race has consumed in its entire history. “The world has never produced anywhere close to this much copper in such a short time frame,” the report notes. The world might not be up to the challenge. Analysts predict supplies will fall short by millions of tons in the coming years. No wonder Goldman Sachs has declared “no decarbonization without copper” and called copper “the new oil.”

    As the energy transition gathers speed, the value of copper has also soared. In the past four years, the price of a ton of copper has shot from about $6,400 to more than $9,000. That, in turn, has made electrical wiring, equipment, and even raw metal fresh from the mines into juicy targets for thieves. All around the world, hundreds of millions of dollars’ worth of the metal has been stolen—and countless lives have been lost. With the possible exception of gold, no other metal has caused so much death and destruction.


  • Light bulbs have energy ratings — so why can’t AI chatbots?

    Light bulbs have energy ratings — so why can’t AI chatbots?

As more data centres crop up in rural communities, local opposition to them has grown. Credit: Brian Lawless/PA/Alamy

    As millions of people increasingly use generative artificial intelligence (AI) models for tasks ranging from searching the Web to creating music videos, there is a growing urgency about minimizing the technology’s energy footprint.

    The worrying environmental cost of AI is obvious even at this nascent stage of its evolution. A report published in January1 by the International Energy Agency estimated that the electricity consumption of data centres could double by 2026, and suggested that improvements in efficiency will be crucial to moderate this expected surge.

    Some tech-industry leaders have sought to downplay the impact on the energy grid. They suggest that AI could enable scientific advances that might result in a reduction in planetary carbon emissions. Others have thrown their weight behind yet-to-be-realized energy sources such as nuclear fusion.

    However, as things stand, the energy demands of AI are keeping ageing coal power plants in service and significantly increasing the emissions of companies that provide the computing power for this technology. Given that the clear consensus among climate scientists is that the world faces a ‘now or never’ moment to avoid irreversible planetary change2, regulators, policymakers and AI firms must address the problem immediately.

    For a start, policy frameworks that encourage energy or fuel efficiency in other economic sectors can be modified and applied to AI-powered applications. Efforts to monitor and benchmark AI’s energy requirements — and the associated carbon emissions — should be extended beyond the research community. Giving the public a simple way to make informed decisions would bridge the divide that now exists between the developers and the users of AI models, and could eventually prove to be a game changer.

    This is the aim of an initiative called the AI Energy Star project, which we describe here and recommend as a template that governments and the open-source community can adopt. The project is inspired by the US Environmental Protection Agency’s Energy Star ratings. These provide consumers with a transparent, straightforward measure of the energy consumption associated with products ranging from washing machines to cars. The programme has helped to achieve more than 4 billion tonnes of greenhouse-gas reductions over the past 30 years, the equivalent of taking almost 30 million petrol-powered cars off the road per year.

    The goal of the AI Energy Star project is similar: to help developers and users of AI models to take energy consumption into account. By testing a sufficiently diverse array of AI models for a set of popular use cases, we can establish an expected range of energy consumption, and then rate models depending on where they lie on this range, with those that consume the least energy being given the highest rating. This simple system can help users to choose the most appropriate models for their use case quickly. Greater transparency will, hopefully, also encourage model developers to consider energy use as an important parameter, resulting in an industry-wide reduction in greenhouse-gas emissions.
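The rating logic described above can be sketched in a few lines of Python. The quintile-style cut-offs below are a hypothetical illustration, not the project’s actual thresholds:

```python
def star_rating(energy_wh, measurements):
    """Assign a 1-5 star rating based on where a model falls in the
    measured energy range for a task: the most efficient fifth of the
    range earns 5 stars, the least efficient fifth earns 1 star.

    energy_wh:    energy used by the model on the benchmark (Wh)
    measurements: energy figures (Wh) for all models tested on that task
    """
    lo, hi = min(measurements), max(measurements)
    if hi == lo:  # degenerate case: all models measured identically
        return 5
    position = (energy_wh - lo) / (hi - lo)  # 0.0 = most efficient
    return 5 - min(4, int(position * 5))     # map to 5 stars ... 1 star

# Hypothetical per-1,000-query figures (Wh) for one task
measured = [0.1, 5.0, 43.0, 53.0, 147.0]
print(star_rating(0.1, measured))    # most efficient model -> 5 stars
print(star_rating(147.0, measured))  # least efficient model -> 1 star
```

A relative scale like this recalibrates automatically as new models are benchmarked, which matches the stated intent of shifting thresholds as the industry improves.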

Tools to quantify AI’s energy use can improve efficiency and sustainability. Credit: Getty

    Our initial benchmarking focuses on a suite of open-source models hosted on Hugging Face, a leading repository for AI models. Although some of the widely used chatbots released by Google and OpenAI are not yet part of our test set, we hope that private firms will participate in benchmarking their proprietary models as consumer interest in the topic grows.

    The evaluation

    A single AI model can be used for a variety of tasks — ranging from summarization to speech recognition — so we curated a data set to reflect those diverse use cases. For instance, for object detection, we turned to COCO 2017 and Visual Genome — both established evaluation data sets used for research and development of AI models — as well as the Plastic in River data set, composed of annotated examples of floating plastic objects in waterways.

    We settled on ten popular ways in which most consumers use AI models, for example, as a question-answering chatbot or for image generation. We then drew a representative sample from the task-specific evaluation data set. Our objective was to measure the amount of energy consumed in responding to 1,000 queries. The open-source CodeCarbon package was used to track the energy required to compute the responses. The experiments were carried out by running the code on state-of-the-art NVIDIA graphics processing units, reflecting cloud-based deployment settings using specialized hardware, as well as on the central processing units of commercially available computers.
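The per-query bookkeeping behind such a benchmark can be sketched as follows. This is a simplified stand-in that assumes an average power draw and wall-clock time are already known, rather than CodeCarbon’s actual hardware-counter tracking:

```python
def energy_per_1000_queries(avg_power_watts, total_seconds, n_queries):
    """Estimate benchmark energy in watt-hours, normalized to 1,000 queries.

    avg_power_watts: mean power draw during the run, as reported by an
                     energy tracker (hypothetical input value here)
    total_seconds:   wall-clock duration of the benchmark run
    n_queries:       number of queries processed in that run
    """
    total_wh = avg_power_watts * total_seconds / 3600.0  # W * s -> Wh
    return total_wh * (1000.0 / n_queries)

# Hypothetical run: a 300 W GPU takes 240 s to answer 500 questions
print(energy_per_1000_queries(300, 240, 500))  # 40.0 Wh per 1,000 queries
```

Normalizing every task to the same 1,000-query basis is what makes models with very different throughputs directly comparable.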

    In our initial set of experiments, we evaluated more than 200 open-source models from the Hugging Face platform, choosing the 20 most popular (by number of downloads) for each task. Our initial findings show that tasks involving image classification and generation generally result in carbon emissions thousands of times larger than those involving only text (see ‘AI’s energy footprint’). Creative industries considering large-scale adoption of AI, such as film-making, should take note.

AI’s energy footprint: total energy consumed (in watt-hours) by various models across five tasks, including image generation and automatic speech recognition. Image generation consumes the most energy; its average is similar to running a laptop for 20 hours.

    Source: Unpublished analysis by S. Luccioni et al./AI Energy Star project

Within our sample set, the most efficient question-answering model used approximately 0.1 watt-hours (roughly the energy needed to power a 25 W incandescent light bulb for about 15 seconds) to process 1,000 questions. The least efficient image-generation model, by contrast, required as much as 1,600 Wh to create 1,000 high-definition images — that’s the energy necessary to fully charge a smartphone approximately 70 times, amounting to a 16,000-fold difference. As millions of people integrate AI models into their workflow, the tasks they deploy them on will increasingly matter.
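The comparisons above reduce to simple arithmetic; the smartphone battery capacity of roughly 23 Wh is our assumed figure, back-calculated from the reported 70 charges:

```python
# Figures reported in the text (Wh per 1,000 queries or images)
best_qa_wh = 0.1        # most efficient question-answering model
worst_imggen_wh = 1600  # least efficient image-generation model

# Fold difference between the two extremes
fold_difference = round(worst_imggen_wh / best_qa_wh)
print(fold_difference)  # 16000

# Assumed typical smartphone battery capacity (~23 Wh)
phone_battery_wh = 23
full_charges = round(worst_imggen_wh / phone_battery_wh)
print(full_charges)  # 70
```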

    In general, supervised tasks such as question answering or text classification — in which models are provided with a set of options to choose from or a document that contains the answer — are much more energy efficient than are generative tasks that rely on the patterns learnt from the training data to produce a response from scratch3. Moreover, summarization and text-classification tasks use relatively little power, although it must be noted that nearly all use cases involving large language models are more energy intensive than a Google search (querying an AI chatbot once uses up about ten times the energy required to process a web search request).

    Such rankings can be used by developers to choose more-efficient model architectures to optimize for energy use. This is already possible, as shown by our as-yet-unpublished tests on models of similar sizes (determined on the basis of the number of connections in the neural network). For a specific task such as text generation, a language model called OLMo-7B, created by the Allen Institute in Seattle, Washington, drew 43 Wh to generate 1,000 text responses, whereas Google’s Gemma-7B and one called Yi-6B LLM, from the Beijing-based company 01.AI, used 53 Wh and 147 Wh, respectively.

With a range of options already in existence, star ratings based on rankings such as ours could nudge model developers towards lowering their energy footprint. For our part, we will be launching an AI Energy Star leaderboard website, along with a centralized testing platform that can be used to compare and benchmark models as they are released. The energy thresholds for each star rating will shift as the industry moves in the right direction. That is why we intend to update the ratings routinely, giving users and organizations a useful metric beyond performance for evaluating which AI models are most suitable.

    The recommendations

    To achieve meaningful progress, it is essential that all stakeholders take proactive steps to ensure the sustainable growth of AI. The following recommendations provide some specific guidance to the variety of players involved.

    Get developers involved. AI researchers and developers are at the core of innovation in this field. By considering sustainability throughout the development and deployment cycle, they can significantly reduce AI’s environmental impact from the outset. To make it standard practice to measure and publicly share the energy use of models (for example, in a ‘model card’ setting out information such as training data, evaluations of performance and metadata), it’s essential to get developers on board.

    Drive the market towards sustainability. Enterprises and product developers play a crucial part in the deployment and commercial use of AI technologies. Whether creating a standalone product, enhancing existing software or adopting AI for internal business processes, these groups are often key decision makers in the AI value chain. By demanding energy-efficient models and setting procurement standards, they can drive the market towards sustainable solutions. For instance, they could set baseline expectations (such as requiring that models achieve at least two stars according to the AI Energy Star scheme) or support sustainable-AI legislation.

    Disclose energy consumption. AI users are on the front lines, interacting with AI products in various applications. A preference for energy-efficient solutions could send a powerful market signal, encouraging developers and enterprises to prioritize sustainability. Users can nudge the industry in the right direction by opting for models that publicly disclose energy consumption. They can also use AI products more conscientiously, avoiding wasteful and unnecessary use.

    Strengthen regulation and governance. Policymakers have the authority to treat sustainability as a mandatory criterion in AI development and deployment. With recent examples of legislation calling for AI impact transparency in the European Union and the United States, policymakers are already moving towards greater accountability. This can initially be voluntary, but eventually governments could regulate AI system deployment on the basis of the efficiency of the underlying models.

    Regulators can adopt a bird’s-eye view, and their input will be crucial for creating global standards. It might also be important to establish independent authorities to track changes in AI energy consumption over time.

    Taking stock

Clearly, a lot more needs to be done to put a suitable regulatory regime in place before mass AI adoption becomes a reality (see go.nature.com/4dfp1wb). The AI Energy Star project is a small beginning and could be refined further. Currently, we do not account for the energy overheads of model storage, networking and data-centre cooling, which can be measured only with direct access to cloud facilities. This means that our results represent a lower bound on the AI models’ overall energy consumption, which is likely to double4 once the associated overhead is taken into account.

    How energy use translates into carbon emissions will also depend on where the models are ultimately deployed, and the energy mix available in that city or town. The biggest challenge, however, will remain the impenetrability of what is happening in the proprietary-model ecosystem. Government regulators are starting to demand access to AI models, especially to ensure safety. Greater transparency is urgently needed because proprietary models are widely deployed in user-facing settings.

    The world is now at a key inflection point. The decisions being made today will reverberate for decades as AI technology evolves alongside an increasingly unstable planetary climate. We hope that the Energy Star project serves as a valuable starting point to send a strong sustainability demand throughout the AI value chain.
