Tag: Energy

  • Light bulbs have energy ratings — so why can’t AI chatbots?


    As more data centres crop up in rural communities, local opposition to them has grown. Credit: Brian Lawless/PA/Alamy

    As millions of people increasingly use generative artificial intelligence (AI) models for tasks ranging from searching the Web to creating music videos, there is a growing urgency about minimizing the technology’s energy footprint.

    The worrying environmental cost of AI is obvious even at this nascent stage of its evolution. A report published in January [1] by the International Energy Agency estimated that the electricity consumption of data centres could double by 2026, and suggested that improvements in efficiency will be crucial to moderate this expected surge.

    Some tech-industry leaders have sought to downplay the impact on the energy grid. They suggest that AI could enable scientific advances that might result in a reduction in planetary carbon emissions. Others have thrown their weight behind yet-to-be-realized energy sources such as nuclear fusion.

    However, as things stand, the energy demands of AI are keeping ageing coal power plants in service and significantly increasing the emissions of companies that provide the computing power for this technology. Given that the clear consensus among climate scientists is that the world faces a ‘now or never’ moment to avoid irreversible planetary change [2], regulators, policymakers and AI firms must address the problem immediately.

    For a start, policy frameworks that encourage energy or fuel efficiency in other economic sectors can be modified and applied to AI-powered applications. Efforts to monitor and benchmark AI’s energy requirements — and the associated carbon emissions — should be extended beyond the research community. Giving the public a simple way to make informed decisions would bridge the divide that now exists between the developers and the users of AI models, and could eventually prove to be a game changer.

    This is the aim of an initiative called the AI Energy Star project, which we describe here and recommend as a template that governments and the open-source community can adopt. The project is inspired by the US Environmental Protection Agency’s Energy Star ratings. These provide consumers with a transparent, straightforward measure of the energy consumption associated with products ranging from washing machines to cars. The programme has helped to achieve more than 4 billion tonnes of greenhouse-gas reductions over the past 30 years, the equivalent of taking almost 30 million petrol-powered cars off the road per year.

    The goal of the AI Energy Star project is similar: to help developers and users of AI models to take energy consumption into account. By testing a sufficiently diverse array of AI models for a set of popular use cases, we can establish an expected range of energy consumption, and then rate models depending on where they lie on this range, with those that consume the least energy being given the highest rating. This simple system can help users to choose the most appropriate models for their use case quickly. Greater transparency will, hopefully, also encourage model developers to consider energy use as an important parameter, resulting in an industry-wide reduction in greenhouse-gas emissions.
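
    To make the idea concrete, here is a minimal sketch of how such a star rating could be derived from benchmark results. The five-star linear scale and the energy figures are illustrative assumptions, not the project's actual methodology.

    ```python
    # Illustrative only: rate models by where their measured energy use falls
    # within the range observed for a task (least energy = most stars).
    # The five-star linear scale is an assumption, not the project's scheme.

    def star_rating(energy_wh, task_measurements, stars=5):
        lo, hi = min(task_measurements), max(task_measurements)
        if hi == lo:
            return stars
        position = (energy_wh - lo) / (hi - lo)  # 0 = best observed, 1 = worst
        return 1 + round((stars - 1) * (1 - position))

    # Hypothetical Wh-per-1,000-queries measurements for one task.
    task_energy = {"model-a": 0.1, "model-b": 400.0, "model-c": 1600.0}
    for name, wh in task_energy.items():
        print(f"{name}: {star_rating(wh, task_energy.values())} stars")
    ```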

    Tools to quantify AI’s energy use can improve efficiency and sustainability. Credit: Getty

    Our initial benchmarking focuses on a suite of open-source models hosted on Hugging Face, a leading repository for AI models. Although some of the widely used chatbots released by Google and OpenAI are not yet part of our test set, we hope that private firms will participate in benchmarking their proprietary models as consumer interest in the topic grows.

    The evaluation

    A single AI model can be used for a variety of tasks — ranging from summarization to speech recognition — so we curated a data set to reflect those diverse use cases. For instance, for object detection, we turned to COCO 2017 and Visual Genome — both established evaluation data sets used for research and development of AI models — as well as the Plastic in River data set, composed of annotated examples of floating plastic objects in waterways.

    We settled on ten popular ways in which most consumers use AI models, for example, as a question-answering chatbot or for image generation. We then drew a representative sample from the task-specific evaluation data set. Our objective was to measure the amount of energy consumed in responding to 1,000 queries. The open-source CodeCarbon package was used to track the energy required to compute the responses. The experiments were carried out by running the code on state-of-the-art NVIDIA graphics processing units, reflecting cloud-based deployment settings using specialized hardware, as well as on the central processing units of commercially available computers.
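
    As a rough sketch of what this kind of measurement looks like in practice, the snippet below wraps a batch of queries in CodeCarbon's tracker. The run_model function and the query list are placeholders for whichever model and evaluation sample are being tested; this is not the project's actual harness.

    ```python
    # Sketch of energy tracking with the open-source CodeCarbon package.
    # `run_model` and `queries` are placeholders for the system under test.
    from codecarbon import EmissionsTracker

    def measure_energy(run_model, queries):
        tracker = EmissionsTracker(project_name="ai-energy-benchmark")
        tracker.start()
        for query in queries:          # e.g. 1,000 task-specific queries
            run_model(query)
        emissions_kg = tracker.stop()  # estimated emissions, kg CO2-eq
        # CodeCarbon also logs energy consumed (kWh) to its emissions.csv file.
        return emissions_kg
    ```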

    In our initial set of experiments, we evaluated more than 200 open-source models from the Hugging Face platform, choosing the 20 most popular (by number of downloads) for each task. Our initial findings show that tasks involving image classification and generation generally result in carbon emissions thousands of times larger than those involving only text (see ‘AI’s energy footprint’). Creative industries considering large-scale adoption of AI, such as film-making, should take note.

    AI's energy footprint. A scatter chart showing the total energy consumed by various models for five different tasks such as image generation and automatic speech recognition. The x axis unit is watt-hour. Image generation consumes the most energy and the average is similar to a laptop running for 20 hours.

    Source: Unpublished analysis by S. Luccioni et al./AI Energy Star project

    Within our sample set, the most efficient question-answering model used approximately 0.1 watt-hours (roughly the energy needed to power a 25 W incandescent light bulb for 15 seconds) to process 1,000 questions. The least efficient image-generation model, by contrast, required as much as 1,600 Wh to create 1,000 high-definition images, which is the energy needed to fully charge a smartphone approximately 70 times: a 16,000-fold difference. As millions of people integrate AI models into their workflows, the tasks they deploy them on will increasingly matter.
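
    A quick back-of-envelope check of those comparisons, taking the figures in the text at face value (the energy per smartphone charge is our assumption):

    ```python
    best_qa_wh = 0.1         # Wh per 1,000 questions (most efficient model)
    worst_image_wh = 1600.0  # Wh per 1,000 images (least efficient model)
    print(worst_image_wh / best_qa_wh)       # 16000.0 -> the 16,000-fold gap

    phone_charge_wh = 23.0   # assumed energy for one full smartphone charge
    print(worst_image_wh / phone_charge_wh)  # ~70 charges

    bulb_watts = 25.0        # incandescent bulb
    print(best_qa_wh / bulb_watts * 3600)    # ~14 seconds of bulb runtime
    ```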

    In general, supervised tasks such as question answering or text classification — in which models are provided with a set of options to choose from or a document that contains the answer — are much more energy efficient than generative tasks, which rely on the patterns learnt from the training data to produce a response from scratch [3]. Moreover, summarization and text-classification tasks use relatively little energy, although it must be noted that nearly all use cases involving large language models are more energy intensive than a Google search (querying an AI chatbot once uses about ten times the energy required to process a web-search request).

    Such rankings can be used by developers to choose more-efficient model architectures and to optimize for energy use. This is already possible, as shown by our as-yet-unpublished tests on models of similar sizes (determined on the basis of the number of connections in the neural network). For a specific task such as text generation, the language model OLMo-7B, created by the Allen Institute for AI in Seattle, Washington, drew 43 Wh to generate 1,000 text responses, whereas Google’s Gemma-7B and Yi-6B, from the Beijing-based company 01.AI, used 53 Wh and 147 Wh, respectively.

    With a range of options already in existence, star ratings based on rankings such as ours could nudge model developers towards lowering their energy footprint. For our part, we will be launching an AI Energy Star leaderboard website, along with a centralized testing platform that can be used to compare and benchmark models as they come out. The energy thresholds for each star rating will shift if the industry moves in the right direction. That is why we intend to update the ratings routinely, offering users and organizations a useful metric, beyond performance, for evaluating which AI models are the most suitable.

    The recommendations

    To achieve meaningful progress, it is essential that all stakeholders take proactive steps to ensure the sustainable growth of AI. The following recommendations provide some specific guidance to the variety of players involved.

    Get developers involved. AI researchers and developers are at the core of innovation in this field. By considering sustainability throughout the development and deployment cycle, they can significantly reduce AI’s environmental impact from the outset. To make it standard practice to measure and publicly share the energy use of models (for example, in a ‘model card’ setting out information such as training data, evaluations of performance and metadata), it’s essential to get developers on board.

    Drive the market towards sustainability. Enterprises and product developers play a crucial part in the deployment and commercial use of AI technologies. Whether creating a standalone product, enhancing existing software or adopting AI for internal business processes, these groups are often key decision makers in the AI value chain. By demanding energy-efficient models and setting procurement standards, they can drive the market towards sustainable solutions. For instance, they could set baseline expectations (such as requiring that models achieve at least two stars according to the AI Energy Star scheme) or support sustainable-AI legislation.

    Disclose energy consumption. AI users are on the front lines, interacting with AI products in various applications. A preference for energy-efficient solutions could send a powerful market signal, encouraging developers and enterprises to prioritize sustainability. Users can nudge the industry in the right direction by opting for models that publicly disclose energy consumption. They can also use AI products more conscientiously, avoiding wasteful and unnecessary use.

    Strengthen regulation and governance. Policymakers have the authority to treat sustainability as a mandatory criterion in AI development and deployment. With recent examples of legislation calling for AI impact transparency in the European Union and the United States, policymakers are already moving towards greater accountability. This can initially be voluntary, but eventually governments could regulate AI system deployment on the basis of the efficiency of the underlying models.

    Regulators can adopt a bird’s-eye view, and their input will be crucial for creating global standards. It might also be important to establish independent authorities to track changes in AI energy consumption over time.

    Taking stock

    Clearly, a lot more needs to be done to put a suitable regulatory regime in place before mass AI adoption becomes a reality (see go.nature.com/4dfp1wb). The AI Energy Star project is a small beginning and could be refined further. Currently, we do not account for the energy overheads of model storage, networking and data-centre cooling, which can be measured only with direct access to cloud facilities. This means that our results represent a lower bound on the AI models’ overall energy consumption, which is likely to double [4] once the associated overheads are taken into account.

    How energy use translates into carbon emissions will also depend on where the models are ultimately deployed, and the energy mix available in that city or town. The biggest challenge, however, will remain the impenetrability of what is happening in the proprietary-model ecosystem. Government regulators are starting to demand access to AI models, especially to ensure safety. Greater transparency is urgently needed because proprietary models are widely deployed in user-facing settings.

    The world is now at a key inflection point. The decisions being made today will reverberate for decades as AI technology evolves alongside an increasingly unstable planetary climate. We hope that the AI Energy Star project serves as a valuable starting point for sending a strong sustainability signal throughout the AI value chain.


  • Rice University Unleashes Flash Innovation for Faster Material Synthesis


    The innovative research builds on Tour’s 2020 development of waste disposal and upcycling applications using flash Joule heating. Credit: James Tour’s Lab/Rice University

    Researchers at Rice University have unveiled a groundbreaking method called flash-within-flash Joule heating (FWF), capable of producing high-quality solid-state materials rapidly and sustainably.

    This innovative technique not only cuts energy use, water consumption, and greenhouse-gas emissions by more than 50%, but also enables the creation of advanced materials for semiconductor and aerospace applications, setting a new standard in manufacturing efficiency and environmental responsibility.

    Flash-Within-Flash Joule Heating

    James Tour’s lab at Rice University has developed a new method known as flash-within-flash Joule heating (FWF) that could transform the synthesis of high-quality solid-state materials, offering a cleaner, faster, and more sustainable manufacturing process. The findings were published on August 8 in Nature Chemistry.

    Traditionally, synthesizing solid-state materials has been a time-consuming and energy-intensive process, often accompanied by the production of harmful byproducts. But FWF enables gram-scale production of diverse compounds in seconds while reducing energy, water consumption, and greenhouse gas emissions by more than 50%, setting a new standard for sustainable manufacturing.

    The Science Behind FWF

    The innovative research builds on Tour’s 2020 development of waste disposal and upcycling applications using flash Joule heating, a technique that passes a current through a moderately resistive material to quickly heat it to over 3,000 degrees Celsius (over 5,000 degrees Fahrenheit) and transform it into other substances.

    “The key is that formerly we were flashing carbon and a few other compounds that could be conductive,” said Tour, the T.T. and W.F. Chao Professor of Chemistry and professor of materials science and nanoengineering. “Now we can flash synthesize the rest of the periodic table. It is a big advance.”

    James Tour is the T.T. and W.F. Chao Professor and professor of chemistry at Rice University’s Wiess School of Natural Sciences. Credit: Gustavo Raskosky/Rice University

    Breakthrough in Material Production

    FWF’s success lies in its ability to overcome the conductivity limitations of conventional flash Joule heating methods. The team — including Ph.D. student Chi Hun “Will” Choi and corresponding author Yimo Han, assistant professor of chemistry, materials science and nanoengineering — incorporated an outer flash heating vessel filled with metallurgical coke and a semiclosed inner reactor containing the target reagents. FWF generates intense heat of about 2,000 degrees Celsius, which rapidly converts the reagents into high-quality materials through heat conduction.

    This novel approach allows for the synthesis of more than 20 unique, phase-selective materials with high purity and consistency, according to the study. FWF’s versatility and scalability make it ideal for the production of next-generation semiconductor materials such as molybdenum diselenide (MoSe2), tungsten diselenide, and alpha-phase indium selenide, which are notoriously difficult to synthesize using conventional techniques.

    Implications for Industry and Research

    “Unlike traditional methods, FWF does not require the addition of conductive agents, reducing the formation of impurities and byproducts,” Choi said.

    This advancement creates new opportunities in electronics, catalysis, energy and fundamental research. It also offers a sustainable solution for manufacturing a wide range of materials. Moreover, FWF has the potential to revolutionize industries such as aerospace, where materials like FWF-made MoSe2 demonstrate superior performance as solid-state lubricants.

    “FWF represents a transformative shift in material synthesis,” Han said. “By providing a scalable and sustainable method for producing high-quality solid-state materials, it addresses barriers in manufacturing while paving the way for a cleaner and more efficient future.”

    Reference: “Flash-within-flash synthesis of gram-scale solid-state materials” by Chi Hun ‘William’ Choi, Jaeho Shin, Lucas Eddy, Victoria Granja, Kevin M. Wyss, Bárbara Damasceno, Hua Guo, Guanhui Gao, Yufeng Zhao, C. Fred Higgs III, Yimo Han and James M. Tour, 8 August 2024, Nature Chemistry.
    DOI: 10.1038/s41557-024-01598-7

    This study was supported by the Air Force Office of Scientific Research, U.S. Army Corps of Engineers, and Welch Foundation.


  • The Quest to Uncover the Secrets of Gold Hydrogen


    This story originally appeared on WIRED Italia and has been translated from Italian.

    In the quest to decarbonize the world, one element gets a lot of hype: hydrogen. “If you burn it, it produces only water, with no impact on the environment,” explains Alberto Vitale Brovarone, a professor in the Department of Biological, Geological, and Environmental Sciences at the University of Bologna in Italy. Hydrogen’s supporters believe it can be a solution for cleaning up everything from transport to agriculture to heavy industry.

    But its green credentials only stack up if you can produce it without emitting carbon. And this is why some are getting very excited about geological or “gold” hydrogen, the name given to the element when it forms naturally underground. This can happen as a result of a chemical reaction between water and iron-rich rocks, or by radiolysis, the splitting of water molecules by radiation into hydrogen and oxygen.

    “Compared to other types of hydrogen, it does not require energy to be produced,” says Vitale Brovarone. He therefore predicts a gold hydrogen rush is on the horizon. The problem is we know very little about the element when it forms naturally underground, and so the research world is in a race against time to find out more before hasty and blind mass mining begins. “From the industry’s point of view, it simply has to be extracted,” says Vitale Brovarone. “Instead, first it has to be understood how simply that can be done and with what consequences.”

    Vitale Brovarone and his colleagues believed Greenland could help answer these questions, and so they organized a special mission to the Arctic territory to hunt for more information, as part of the five-year ERC CoG DeepSeep program funded by the European Union.

    Alongside Vitale Brovarone were four scientists from the University of Bologna, one from the Institute of Geosciences and Georesources at Italy’s National Research Center, and one from the University of Copenhagen. They spent 10 days in this land of nearly 2-billion-year-old rocks, having spent six months preparing their mission using maps and satellite data. Despite their meticulous planning, they had to be adaptable. Due to “unforeseen icebergs” the researchers had to change areas, while at one point a bear spotted in their vicinity forced them to seek shelter in a school. But in the end, the trip was worthwhile: It gave them samples rich in H2 to study.

    Across the world, gold hydrogen is popping up where we don’t expect it, raising questions about the dynamics by which the element accumulates in reservoirs and the role it plays in subsurface ecosystems. There are already some concerns: if it reacts with geological substrates or is processed by certain microorganisms, geological hydrogen can produce methane or hydrogen sulfide. Vitale Brovarone uses these two examples to explain why diving headfirst into extracting gold hydrogen risks creating new problems instead of solving existing ones, and why more information is needed.

    Since we do not fully know what has been regulating the presence of H2 in rocks for millions or billions of years, it is better to wait before breaking them open to extract the element, Vitale Brovarone says. The same goes for storing artificially produced hydrogen in underground reserves, he says. The idea has already excited industry, prompting it to move on a timescale incompatible with the time the research world needs to understand how the gas behaves.

    “We travel on different lines and at different paces,” he says. “We need to understand how hydrogen behaves in nature, because many dynamics only emerge after years. Industry would like quick and decisive answers; science needs time, and also funds, which, for hydrogen, are still scarce.” Unlike France, Australia, and the United States, which have their sights set on harvesting gold hydrogen, Italy has not yet invested in gathering it, preferring to bet on hydrogen production instead. Thanks in part to the University of Bologna expedition, however, Italy has become one of the few countries in the world looking to understand more about it.


  • Combatting power grid vulnerabilities from climate change


    James Conlin, Product Manager at Sharper Shape, discusses cutting-edge solutions that enable utilities to overcome power grid vulnerabilities imposed by climate change.

    The global power grid is facing unprecedented challenges as climate change intensifies. With the increasing frequency of extreme weather events, understanding and addressing power grid vulnerabilities has become a critical priority.

    The infrastructure, including power lines and poles, is susceptible to various environmental stresses such as wind, heat, and wildfires, which can significantly impact its reliability and resilience. A recent report from Bloomberg highlighted growing power grid vulnerabilities worldwide, with outages from Albania to Texas.

    Technological advancements like remote sensing, automated asset inspection services, and digital twins are transforming how utilities monitor and maintain their networks. Drones equipped with AI algorithms offer real-time data collection, enhancing the efficiency of inspections and reducing the need for large on-site crews. Digital twins provide a comprehensive virtual replica of physical assets, enabling precise planning and predictive maintenance.

    Embracing these advancements is essential for enhancing the resilience of power grids, ensuring they can withstand the growing impacts of climate change and continue to deliver reliable electricity to consumers worldwide.

    We spoke to Sharper Shape’s James Conlin to find out more.

    Given the increasing failure of power grids due to climate change, what specific vulnerabilities do you see in the current electricity networks globally?

    We’ve constructed a vast and complex power grid over the last century, involving immense human effort in often uncontrolled environments. The current major challenge is cataloguing and chronicling the properties of this infrastructure as it exists today.

    For example, power lines can be affected by various weather conditions, such as wind causing movement or heat causing expansion. These changes occur throughout the life cycle of the infrastructure. Therefore, understanding these dynamics is crucial. This is where technologies like remote sensing come into play. They help us monitor the proximity of lines to trees and simulate what happens under various conditions, such as thermal expansion due to load or weather forces like wind. Utilities often lack a comprehensive understanding of these power grid vulnerabilities, which makes this a significant challenge.


    How can automated asset inspection services, such as drones and advanced tools, help improve the resiliency of power line infrastructure?

    Resiliency is the ability of infrastructure to withstand extraordinary events. Utilities talk about both reliability, which is normal operation, and resilience, which is the ability to handle unusual stress like strong winds. Automated tools like drones and AI algorithms act as force multipliers. Instead of sending a crew of six with bucket trucks, which is costly and time-consuming, a drone can be operated by one or two people.

    This allows for real-time data collection and streaming to platforms where qualified electrical workers can quickly review it. For about five years, this has been phase one of using drones and remote sensing. The exciting next step involves the automatic recognition of utility components within images. This means machines can identify and assess components like ceramic insulators without human intervention, speeding up the inspection and prioritisation process.

    How do these new services compare to traditional methods?

    Traditionally, a worker would drive to a pole, inspect it, and then move to the next one, potentially covering only a few poles per day. With drones and remote sensing, we can inspect many more poles in the same timeframe. For example, a single inspector might review data from 40 to 60 poles in a day. This approach also introduces new metrics, such as the number of clicks an inspector makes to review the data. Now, with tools like our asset insights, we can prioritise inspections based on the likelihood of faults, allowing for faster recognition and repair. This significantly accelerates the entire process from inspection to maintenance.

    Can you explain the role that digital twin technology plays and how it can predict and prevent grid failures?

    Digital twins are virtual replicas of physical assets. When constructing a factory, it makes perfect sense to build a digital twin first. This digital model allows for precise planning—every sensor, every piece of machinery, and every workspace is accounted for down to the millimetre. You can determine where each component will be placed and how they will interact.

    However, the power grid, which was built over many decades, often lacks this level of documentation. Many utilities do not have accurate records of their infrastructure. They might know they paid a vendor to install a pole according to engineering specifications, but on-site adjustments—like using different insulators or splicing conductors due to shortages—often go undocumented. These discrepancies can lead to inefficiencies and increased vulnerability.

    With a digital twin, utilities gain centralised storage and access to all geospatial content related to their network. This includes not just photographs but detailed metadata. For instance, when you take a photo with your phone, it records the exact latitude and longitude of where you are standing. Similarly, drones capture images of power lines and record the direction and angle of the camera, providing a wealth of data points. This means each image isn’t just a static picture but a precise piece of the overall puzzle. Utilities can then access a specific part of the network remotely, viewing detailed images and data down to the centimetre.


    The creation of a digital twin involves capturing vast amounts of data. A helicopter flying over a transmission circuit might scan billions of points, each reflecting off surfaces to create a detailed 3D model of the infrastructure. This data isn’t just a collection of points; it’s classified and structured to identify specific components like transmission towers, braces, and lines. This allows for the creation of vector models that accurately represent the physical assets.

    By structuring this data robustly, digital twins offer scalable solutions. For example, a project might involve 100 million high-resolution images, each tied to precise locations and points in space. These images, combined with point cloud data, enable utilities to correct outdated GIS data and create a comprehensive and accurate model of the entire network.

    One of the significant advantages of digital twins is the ability to measure and control infrastructure remotely. Utilities often struggle with locating specific components, such as service drops or transformers connecting houses to the grid. With a digital twin, anyone can search for a pole number and instantly access all relevant images and data. This capability eliminates the need for on-site visits, saving time and resources.

    What role does LIDAR data processing play in this context?

    The most impressive aspect of LIDAR technology is its precision in capturing and mapping the real world. When discussing accuracy—whether empirical or absolute—LIDAR stands out. This technology collects points with extreme accuracy, each associated with specific X, Y, and Z coordinates and the exact time the laser hits the surface. However, for these points to be useful, they must be transformed and placed within a trusted coordinate reference system. This transformation often requires professional surveyors to ensure the data aligns accurately with the real-world coordinates.

    LIDAR data and its processing are crucial for ensuring the absolute accuracy of digital twins. For instance, if we capture a utility pole using LIDAR, we can visualise and understand the infrastructure with remarkable precision.

    Utilities can use this data to identify the species of wood used for poles, ensuring they meet engineering specifications. This level of detail is often unknown after the poles are installed, but LIDAR processing allows us to classify these points and rebuild them into detailed 3D meshes.

    For example, if you want to know the top of a pole, you look for the point with the highest Z value. Similarly, the bottom of the pole has the lowest Z value. This data allows utilities to determine the lean of a pole by measuring the offset between the top and bottom.

    We apply the same principles to wires, reconstructing them using our LIDAR processing pipeline. By measuring the lowest Z value of a wire from the ground, we can identify clearance violations, which are critical for safety and regulatory compliance.
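
    As an illustration of the geometry Conlin describes, here is a minimal sketch that derives pole lean and wire clearance from classified LIDAR points. The classification step, the ground elevation and the clearance threshold are all assumed inputs; this is not Sharper Shape's actual pipeline.

    ```python
    # Sketch: pole lean and wire clearance from classified LIDAR points.
    # Each point is an (x, y, z) tuple; classification is assumed done upstream.
    import math

    def pole_lean_degrees(pole_points):
        top = max(pole_points, key=lambda p: p[2])     # highest Z = pole top
        bottom = min(pole_points, key=lambda p: p[2])  # lowest Z = pole base
        horizontal_offset = math.hypot(top[0] - bottom[0], top[1] - bottom[1])
        height = top[2] - bottom[2]
        return math.degrees(math.atan2(horizontal_offset, height))

    def has_clearance_violation(wire_points, ground_z, min_clearance_m=5.5):
        lowest_wire_z = min(p[2] for p in wire_points)  # lowest point of span
        return (lowest_wire_z - ground_z) < min_clearance_m
    ```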

    How can predictive maintenance using these technologies reduce the risk of climate change-induced power grid outages?

    Climate change is a significant concern affecting all of us, especially the younger generations, who will face its consequences throughout their lives. The future in 30 years is uncertain due to these ongoing environmental changes. In my experience working on helicopter jobs, I’ve seen firsthand the disruptions caused by climate change. For example, entire projects are halted for weeks because of wildfires in the areas we are working in.

    Wildfires are influenced by numerous factors, many of which are related to utilities. For instance, extreme weather fluctuations, such as a particularly wet year followed by a dry one, produce substantial undergrowth that fuels even more intense fires. This situation is exacerbated by climate change and is genuinely tragic and challenging to discuss.

    Ignoring the need to scan and monitor our utility networks is no longer an option. In the past, it might have been possible to overlook these issues, but today, climate change and its direct and indirect effects make that impossible.


    Utilities face the challenge of expanding and maintaining their grids without additional funding for these increased operations. This is where remote sensing and digital twin technology become crucial. By using LIDAR and other imaging technologies to scan and monitor the grid, utilities can stay ahead of power grid vulnerabilities.

    We are essentially sitting on a ticking time bomb, with the severity of potential accidents increasing due to climate change. By accurately measuring and controlling our infrastructure, we can at least mitigate some of the symptoms of these environmental challenges.

    What are the main challenges utilities face when integrating these new technologies?

    Humans are fascinating creatures, and as one myself, I notice we often stick to what we know best. This is evident in how we handle spreadsheets throughout our careers. We often abuse spreadsheets as if they were databases, only to panic when something inevitably goes wrong. This tendency to rely on familiar tools is widespread, especially in fields like utilities.

    For example, utilities often pay vendors to meticulously name files, treating them like rigid databases. If a project requires deliverables such as a KMZ file, along with thousands of images, each named according to specific criteria, it creates a cumbersome and error-prone process. Handling large-scale projects in this manner becomes increasingly impractical.

    The main issue with adopting new technologies often stems from traditional workplace values. When people arrive at their job sites, they feel they should already possess all the necessary knowledge. This mindset is even more pronounced among specialised workers, such as line workers with decades of experience. They may resist learning new methods, fearing it might undermine their expertise or appear unprofessional.

    This resistance is significant in the utility sector. Utilities tend to build solutions in-house, aiming to capitalise on internal resources. However, this approach often leaves them lagging behind professional software engineers who can develop scalable systems. Desktop GIS platforms, for instance, can manage a substantial amount of data, but scaling up becomes challenging. Rendering 100,000 points or lines might seem manageable, but when dealing with 500,000 poles, each with multiple images and millions of data points, the complexity skyrockets.

    Without retraining, utilities can’t effectively manage or build these systems. Budget constraints add to the challenge, making it difficult to address the myriad problems that arise. Humans are creatures of habit, often struggling to embrace new methods. This resistance, combined with a reluctance to adopt innovative software solutions, particularly in the utility sector, hinders progress. Embracing new technologies and retraining workers can significantly improve efficiency and scalability, ultimately leading to better outcomes.


  • AI Is Heating the Olympic Pool


    In the suburbs of northeast Paris there is a giant terracotta-colored warehouse with a labyrinth of windowless corridors inside. A deafening whir emanates from behind rows and rows of anonymous gray doors, and under the white strip lights disposable earplugs are available to protect passers-by from the noise.

    These are the uncanny innards of one of France’s newest data centers, completed earlier this year, which is now being used to heat the new Olympic Aquatics Center—visible from the data center’s roof. When US swimming star Katie Ledecky won her ninth Olympic gold medal last week, she did it by speeding through water heated, at least in part, by the data center’s machinery.

    Known as PA10, this noisy site belongs to the American data center company Equinix—the whirring sound is the company’s cooling systems trying to lower the temperature of its clients’ computer servers. “PA10 is especially made for high density racks,” says the site’s data center engineer Imane Erraji, pointing to a tower of servers capable of training AI.

    For the past month, the data center has been capturing its waste heat, transferring it to water and piping it to a local energy system run by French utility company Engie. Once it runs at full capacity, Equinix expects to export 6.6 thermal megawatts of heat out of the building, enough to warm more than 1,000 homes.
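
    A rough sanity check on that equivalence (the per-home heating figure is our assumption):

    ```python
    export_mw = 6.6    # thermal megawatts exported at full capacity
    per_home_kw = 6.0  # assumed average heating demand per home, in kW
    print(export_mw * 1000 / per_home_kw)  # ~1100 homes
    ```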

    As projections suggest AI is about to turbocharge the amount of electricity data centers need—Equinix predicts power consumption per rack could rise by as much as 400 percent—PA10 reflects a European phenomenon whereby officials attempt to mitigate the environmental impact of the coming AI energy crunch and transform data centers into part of the infrastructure keeping cities warm.

    Erraji describes the project as a “win-win situation” for both Equinix and the local suburb of Seine-Saint-Denis. Equinix can pipe the heat out of the building so its cooling devices don’t have to work so hard, she explains, while the city gets a cheap source of locally produced heat. After the project received a €2 million ($2.1 million) investment from the city of Paris, Equinix committed to providing the energy free of charge for 15 years. In June, the mayor of Saint-Denis, Mathieu Hanotin, also called attention to the environmental benefits, claiming that using the data center as an energy source will spare the region 1,800 tonnes of CO2 emissions per year.

    Yet France has a “very low-carbon electricity mix,” according to the International Energy Agency (IEA), with 62 percent of its electricity generated by nuclear power. And critics say that multiplying heat-reuse projects is a distraction from the real issue: the amount of land, water and electricity data centers consume. “When the data centers are already here, of course it’s better to reuse the heat than do nothing,” says Anne-Laure Ligozat, computer science professor at France’s National School of Computer Science for Industry and Business (ENSIIE). “But the problem is the number of data centers and their energy consumption.” There would be less of an environmental impact in having a basic electric heating system without the data center, she adds.


  • How South Africa can move on from power cuts


    South Africa is caught in an energy bind. From sunlight to wind and biomass, the country has an abundance of resources to generate renewable energy. But the nation’s power system is still largely reliant on fossil-fuel power plants, with scheduled power outages being the norm — until recently.

    In the run-up to the country’s elections in May, Eskom, the state-owned power company that supplies almost 80% of South Africa’s electricity, stopped load shedding — the practice of scheduling outages, each lasting several hours, to lessen demand on the country’s ageing energy infrastructure. As South Africa’s incoming government takes shape, President Cyril Ramaphosa has indicated that the load-shedding battle is not yet over. With a concerted effort from the government, I know that power outages need not resume.

    As a researcher working on energy optimization and the energy transition, I have studied the previous governmental efforts to end load shedding and found many ways through which the current energy system can be further optimized. More collaboration with consumers is also needed to better understand how, and when, they use electricity.

    South Africa’s energy crisis began in around 2007, when Eskom became unable to meet the country’s energy needs and had to implement power cuts to decrease demand on the energy system. Since 2019, these outages have escalated to the point that, in 2023, power was unavailable to South Africa’s population for 78% of the year (see go.nature.com/3szorvd).

    People and businesses have been hit hard. Many have faced insecurity and discomfort; appliances and electronics from refrigerators to laptops have been damaged; food has regularly gone to waste. Last winter, I endured cold nights with a sick infant whose much-needed electric nebulizer, used to help treat pneumonia, was rendered useless by long power cuts. In townships, by 2023, 64% of small businesses had to pause operations during periods of load shedding, 5% had closed down altogether and 66% had to reduce employees’ working hours or even let staff go.

    Over the past decade or so, the government has implemented various measures to reduce pressures on the power grid. It has incentivized private energy generation, as well as energy efficiency — for example, encouraging people to consume electricity during non-peak hours. Renewable energies, including photovoltaic power generation, are on the rise. Scheduled plant shutdowns have been delayed. Some power plants have been converted to run on gas rather than diesel, and maintenance has been improved. But these steps have not been enough to avoid load shedding, which is projected to continue beyond 2030.

    Meanwhile, wind and solar capacity has increased. But these resources are intermittent — and storage is costly. What’s more, solar- and wind-power generators are mostly located in areas with constrained grid capacity, so most of the energy produced cannot be transmitted widely.

    There are several ways for the government to expand its efforts. First, support Eskom’s existing power-generating capacity by combining real-time fault-detection monitoring with continuous preventive maintenance. Maintenance schedules should be updated to take into account the country’s ageing infrastructure, rising energy demand and greenhouse-gas-emissions targets, in accordance with the United Nations’ Sustainable Development Goals.

    Second, boost renewable-energy storage. Batteries are the most common storage option, and have been installed on Eskom’s Hex site in the Western Cape and in Elandskop, KwaZulu-Natal — but they are expensive. Other, potentially more sustainable options need to be explored. Pumped hydropower, for example — which stores water in two reservoirs at different elevations, generating power when water flows from one to the other — would work well. The method can store more energy than batteries do and, importantly, store it over cycles lasting almost twice as long as those of most batteries.

    Third, optimize the mix of energy sources in the power grid. But to maximize the contribution of each type of energy, many factors need to be taken into account. For photovoltaic energy, for example, these include the Sun’s irradiance; the power generated by solar cells; consumer energy demand; the costs of generating solar- and coal-based energy; and the capacity for energy storage.

    Fourth, evaluate the robustness of the grid. Factors such as the type of technology used to generate energy, the generators’ locations, resource quantities, costs and demand vary. Models can aid grid assessment, planning and scheduling. They can help to optimize which type of energy (coal or solar, for example) should be dispatched at any time by quickly assessing the generator’s location, grid capacity in the area, quantity of energy generated and generation periods. Tools such as machine learning — which can process vast amounts of data in a short time — promise to boost the data-processing capability of such modelling.
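
    As a toy illustration of the dispatch decision described above, the sketch below uses a linear program to meet one hour of demand at least cost from a coal and a solar generator. All costs, capacities and demand figures are invented for the example.

    ```python
    # Toy dispatch optimization: meet demand at least cost (invented numbers).
    from scipy.optimize import linprog

    cost_per_kwh = [1.2, 0.3]            # coal, solar (hypothetical costs)
    demand_kwh = 900.0                   # demand for this hour
    capacity = [(0, 800.0), (0, 400.0)]  # coal and solar capacity bounds

    # Minimize cost subject to coal + solar >= demand,
    # written as -coal - solar <= -demand for linprog's A_ub form.
    result = linprog(c=cost_per_kwh,
                     A_ub=[[-1.0, -1.0]], b_ub=[-demand_kwh],
                     bounds=capacity)
    print(result.x)  # [500. 400.]: use all solar, fill the rest with coal
    ```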

    Fortunately, South Africa already has the knowledge and expertise needed to develop solutions that will put an end to load shedding. Let us keep optimizing, measuring and optimizing again.

    Competing Interests

    The author declares no competing interests.


  • How light-based computers could cut AI’s energy needs


    Download the Nature Podcast 31 July 2024

    In this episode:

    00:45 Increasing the energy efficiency of light-based computers

    Computer components based on specialized LEDs could reduce the energy consumption of power-hungry AI systems, according to new research. AI chips with components that compute using light can run more efficiently than those using digital electronics, but these light-based systems typically use lasers that can be bulky and difficult to control. To overcome these obstacles, a team has developed a way to replace these lasers with LEDs, which are cheaper and more efficient to run. Although the system is only a proof of concept, the team demonstrates that it can perform some tasks as well as laser-based computers.

    Research Article: Dong et al.

    News and Views: Cheap light sources could make AI more energy efficient

    10:36 Research Highlights

    The genes that make roses smell so sweet, and how blocking inflammation could reduce heart injury after a stroke.

    Research Highlight: How the rose got its iconic fragrance

    Research Highlight: Strokes can damage the heart — but reining in the immune system might help

    13:02 What researchers know about H5N1 influenza in cows

    The highly pathogenic avian influenza H5N1 was first identified in US cattle in March 2024 and has been detected in multiple herds across the country. We round up what researchers currently know about this spread, what can be done to prevent it, and the risks this outbreak may pose to humans.

    Nature News: Can H5N1 spread through cow sneezes? Experiment offers clues

    Nature News: Huge amounts of bird-flu virus found in raw milk of infected cows

    Nature News: Could bird flu in cows lead to a human outbreak? Slow response worries scientists

    Research article: Eisfeld et al.

    22:38 Briefing Chat

    NASA’s Perseverance rover finds a Martian rock containing features associated with fossilized microbial life, and how metallic nodules on the ocean floor could be the source of mysterious ‘dark oxygen’.

    Space.com: NASA’s Perseverance Mars rover finds possible signs of ancient Red Planet life

    Nature News: Mystery oxygen source discovered on the sea floor — bewildering scientists

    Subscribe to Nature Briefing, an unmissable daily round-up of science news, opinion and analysis free in your inbox every weekday.

    Never miss an episode. Subscribe to the Nature Podcast on Apple Podcasts, Spotify, YouTube Music or your favourite podcast app. An RSS feed for the Nature Podcast is available too.


  • A Breakthrough in Efficiency and Cost



    Gas separation is crucial across many industries but often involves energy-intensive processes, such as cooling gases until they liquefy and then separating them based on their evaporation temperatures. However, Professor Wei Zhang and his team at the University of Colorado Boulder have developed a new type of porous material that is flexible, sustainable, and energy-efficient. This material can adjust its pore sizes at different temperatures to selectively allow certain gases to pass through, potentially revolutionizing the way gases are separated and reducing the overall energy required for these processes.

    A new porous material allows for efficient, low-energy gas separation and is scalable for industrial use, offering a sustainable alternative to traditional methods.

    Separating gases plays a crucial role in various industries, from medical applications, where nitrogen and oxygen are separated from air, to environmental processes like carbon capture, where carbon dioxide is isolated from other gases, and the purification of natural gas by removing impurities.

    Separating gases, however, can be both energy-intensive and expensive. “For example, when separating oxygen and nitrogen, you need to cool the air to very low temperatures until they liquefy. Then, by slowly increasing the temperature, the gases will evaporate at different points, allowing one to become a gas again and separate out,” explains Wei Zhang, a University of Colorado Boulder professor of chemistry and chair of the Department of Chemistry. “It’s very energy intensive and costly.”

    Much gas separation relies on porous materials through which gases pass and are separated. This, too, has long presented a problem, because these porous materials generally are specific to the types of gases being separated. Try sending any other types of gas through them and they don’t work.

    However, in research published on June 27 in the journal Science, Zhang and his co-researchers detail a new type of porous material that can accommodate and separate many different gases and is made from common, readily available materials. Further, it combines rigidity and flexibility in a way that allows size-based gas separation to happen at a greatly decreased energy cost.

    “We are trying to make technology better,” Zhang says, “and improve it in a way that’s scalable and sustainable.”

    Adding Flexibility

    For a long time, the porous materials used in gas separation have been rigid and affinity-based—specific to the types of gases being separated. The rigidity allows the pores to be well-defined and helps direct the gases in separating, but also limits the number of gases that can pass through because of varying molecule sizes.

    For several years, Zhang and his research group worked to develop a porous material that introduces an element of flexibility to a linking node in an otherwise rigid porous material. That flexibility allows the molecular linkers to oscillate, or move back and forth at a regular speed, changing the accessible pore size in the material and allowing it to be adapted to multiple gases.

    “We found that at room temperature, the pore is relatively the largest and the flexible linker barely moves, so most gases can get in,” Zhang says. “When we increase the temperature from room temperature to about 50 degrees (Celsius), oscillation of the linker becomes larger, causing effective pore size to shrink, so larger gases can’t get in. If we keep increasing the temperature, more gases are turned away due to increased oscillation and further reduced pore size. Finally, at 100 degrees, only the smallest gas, hydrogen, can pass through.”

    The material that Zhang and his colleagues developed is made of small organic molecules and is most analogous to zeolite, a family of porous, crystalline materials composed mostly of silicon, aluminum, and oxygen. “It’s a porous material that has a lot of highly ordered pores,” he says. “You can picture it like a honeycomb. The bulk of it is solid organic material with these regular-sized pores that line up and form channels.”

    The researchers used a fairly new type of dynamic covalent chemistry that focuses on the boron-oxygen bond. Using a boron atom with four oxygen atoms around it, they took advantage of the reversibility of the bond between the boron and oxygen, which can break and reform again and again, thus enabling self-correcting, error-proof behavior and leading to the formation of structurally ordered frameworks.

    “We wanted to build something with tunability, with responsiveness, with adaptability, and we thought the boron-oxygen bond could be a good component to integrate into the framework we were developing, because of its reversibility and flexibility,” Zhang says.

    Sustainable Solutions

    Developing this new porous material did take time, Zhang says: “Making the material is easy and simple. The difficulty was at the very beginning, when we first obtained the material and needed to understand or elucidate its structure—how the bonds form, how angles form within this material, is it two-dimensional or three-dimensional. We had some challenges because the data looked promising, we just didn’t know how to explain it. It showed certain peaks (x-ray diffraction), but we could not immediately figure out what kind of structure those peaks corresponded to.”

    So, he and his research colleagues took a step back, which can be an important but little-discussed part of the scientific process. They focused on the small-molecule model system containing the same reactive sites as those in their material to understand how molecular building blocks packed in a solid state, and that helped explain the data.

    Zhang adds that he and his co-researchers considered scalability in developing this material, since its potential industrial uses would require large amounts, “and we believe this method is highly scalable. The building blocks are commercially available and not expensive, so it could be adopted by industry when the time is right.”

    They have applied for a patent on the material and are continuing the research with other building block materials to learn the substrate scope of this approach. Zhang also says he sees potential to partner with engineering researchers to integrate the material into membrane-based applications.

    “Membrane separations generally require much less energy, so in the long term they could be more sustainable solutions,” Zhang says. “Our goal is to improve technology to meet industry needs in sustainable ways.”

    Reference: “Molecular recognition with resolution below 0.2 angstroms through thermoregulatory oscillations in covalent organic frameworks” by Yiming Hu, Bratin Sengupta, Hai Long, Lacey J. Wayment, Richard Ciora, Yinghua Jin, Jingyi Wu, Zepeng Lei, Kaleb Friedman, Hongxuan Chen, Miao Yu and Wei Zhang, 27 June 2024, Science.
    DOI: 10.1126/science.adj8791




  • Revolutionary Catalyst Coating Technology Skyrockets Fuel Cell Performance in Just 4 Minutes


    A collaborative research team has developed a new catalyst coating technology that enhances solid oxide fuel cell performance threefold in just four minutes, offering potential advancements in energy conversion technology. Credit: Korea Institute of Energy Research (KIER)

    A new oxide catalyst coating technique significantly enhances the performance of solid oxide fuel cells, tripling their peak power output. The technique is versatile and could also be applied to other devices, such as high-temperature electrolysis cells.

    Researchers have developed a groundbreaking catalyst coating technology for solid oxide fuel cells (SOFCs) that drastically enhances performance within just four minutes. The technology, which employs nanoscale praseodymium oxide catalysts, targets the oxygen reduction reaction at the air electrode, increasing the power output of SOFCs significantly. This new method, which is economical and compatible with existing manufacturing processes, promises broader applications, including high-temperature electrolysis for hydrogen production.

    Dr. Yoonseok Choi of the Hydrogen Convergence Materials Laboratory at the Korea Institute of Energy Research (KIER), along with Professor WooChul Jung from the Department of Materials Science and Engineering at KAIST and Professor Beom-Kyung Park from the Department of Materials Science and Engineering at Pusan National University, has successfully developed a catalyst coating technology that dramatically enhances the performance of solid oxide fuel cells (SOFCs) in just 4 minutes.

    Fuel cells are gaining attention as highly efficient and clean energy devices driving the hydrogen economy. Among them, solid oxide fuel cells (SOFCs), which have the highest power generation efficiency, can use various fuels such as hydrogen, biogas, and natural gas. They also allow for combined heat and power generation by utilizing the heat generated during the process, making them a subject of active research and development.

    Schematic illustrations of the electrochemical coating process on the LSM–YSZ electrode of SOFCs. Credit: Korea Institute of Energy Research (KIER)

    Challenges in SOFC Performance

    The performance of solid oxide fuel cells (SOFCs) is largely determined by the kinetics of the oxygen reduction reaction (ORR) occurring at the air electrode (cathode). The reaction rate at the air electrode is slower than that of the fuel electrode (anode), thus limiting the overall reaction rate. To overcome these sluggish kinetics, researchers are developing new air electrode materials with high ORR activity. However, these new materials generally still lack chemical stability, requiring ongoing research.

    Yoonseok Choi and Research Team

    Photo of the Joint Research Team (Yoonseok Choi, Senior Researcher, on the far right). Credit: Korea Institute of Energy Research (KIER)

    Instead, the research team focused on enhancing the performance of the LSM-YSZ composite electrode, a material widely used in industry for its excellent stability. They developed a coating process that applies nanoscale praseodymium oxide (PrOx) catalysts, which actively promote the oxygen reduction reaction, onto the surface of the composite electrode. Applying this coating significantly improved the performance of the solid oxide fuel cells.

    Simplified Electrochemical Deposition Method

    The research team introduced an electrochemical deposition method that operates at room temperature and atmospheric pressure and requires no complex equipment or processes. The composite electrode is immersed in a solution containing praseodymium (Pr) ions, and an electric current is applied; hydroxide ions (OH⁻) generated at the electrode surface react with the praseodymium ions, forming a precipitate that uniformly coats the electrode. A subsequent drying step transforms the coating into an oxide that remains stable in high-temperature environments and effectively promotes the electrode's oxygen reduction reaction. The entire coating process takes only four minutes.
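    A plausible, simplified scheme for this deposition, assuming trivalent praseodymium in solution; the article does not give the exact species or stoichiometry, so the following is purely illustrative.

        % Illustrative only: the article does not specify species or stoichiometry.
        % The second step is left unbalanced because PrOx is non-stoichiometric.
        \begin{align*}
        \mathrm{Pr^{3+}} + 3\,\mathrm{OH^-} &\rightarrow \mathrm{Pr(OH)_3}\downarrow && \text{(precipitation on the electrode)} \\
        \mathrm{Pr(OH)_3} &\xrightarrow{\text{drying}} \mathrm{PrO_x} && \text{(conversion to the oxide)}
        \end{align*}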

    Additionally, the research team elucidated the mechanism by which the coated nano-catalyst promotes surface oxygen exchange and ionic conduction. They provided fundamental evidence that the catalyst coating method can address the low reaction rate of the composite electrode.

    By operating both the catalyst-coated composite electrode and a conventional composite electrode for over 400 hours, the team observed that the coating reduced the polarization resistance tenfold. In addition, at 650 degrees Celsius the SOFC using the coated electrode exhibited a peak power density three times higher than the uncoated case (142 mW/cm² → 418 mW/cm²). This represents the highest performance reported in the literature for SOFCs using LSM-YSZ composite electrodes.
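    As a quick sanity check on the reported gain, the short script below reproduces the arithmetic from the two power densities quoted above; it is purely illustrative.

        # Sanity check of the reported gain; the two power densities are the
        # article's values for the uncoated and PrOx-coated electrodes at 650 °C.
        peak_uncoated = 142.0  # mW/cm², bare LSM-YSZ electrode
        peak_coated = 418.0    # mW/cm², PrOx-coated electrode

        gain = peak_coated / peak_uncoated
        print(f"Peak power density improved {gain:.2f}x")  # ~2.94x, roughly threefold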

    Dr. Yoonseok Choi, co-corresponding author, stated, “The electrochemical deposition technique we developed is a post-process that does not significantly impact the existing manufacturing process of SOFCs. This makes it an economically viable way to introduce oxide nano-catalysts, enhancing its industrial applicability.” He added, “We have secured a core technology that can be applied not only to SOFCs but also to other energy conversion devices, such as high-temperature electrolysis cells (SOECs) for hydrogen production.”

    Reference: “Revitalizing Oxygen Reduction Reactivity of Composite Oxide Electrodes via Electrochemically Deposited PrOx Nanocatalysts” by Seongwoo Nam, Jinwook Kim, Hyunseung Kim, Sejong Ahn, SungHyun Jeon, Yoonseok Choi, Beom-Kyeong Park and WooChul Jung, 22 March 2024, Advanced Materials.
    DOI: 10.1002/adma.202307286

    The study was conducted with support from the Ministry of Trade, Industry, and Energy’s Core Technology Development Program for New and Renewable Energy and the Ministry of Science and ICT’s Individual Basic Research Program.




  • New Catalyst Unveils the Hidden Power of Water


    Hydrogen Production Art Concept

    Hydrogen is a key player in the effort to decarbonize our society, but most of its production currently relies on fossil fuel-derived processes like methane reforming, which emit significant carbon dioxide. The development of green hydrogen via water electrolysis, particularly through advanced technologies like proton-exchange-membrane (PEM), is hindered by the need for rare catalysts like iridium. However, a new breakthrough by ICFO researchers using an iridium-free catalyst shows promise for sustainable and efficient green hydrogen production at industrial scales, potentially revolutionizing the field. Credit: SciTechDaily.com

    Researchers have developed a breakthrough iridium-free catalyst for water electrolysis, paving the way for sustainable and large-scale green hydrogen production.

    Hydrogen offers significant potential as both a chemical feedstock and an energy carrier for decarbonizing society. Unlike traditional fuels, hydrogen produces no carbon dioxide when used. However, most hydrogen today is derived from methane, a fossil fuel, through a process called methane reforming, which emits a considerable amount of carbon dioxide. Consequently, developing scalable alternatives for producing green hydrogen is essential.

    Water electrolysis offers a path to generate green hydrogen when powered by renewable, clean electricity. The process needs cathode and anode catalysts to accelerate the otherwise sluggish reactions that split water into hydrogen and oxygen, respectively. Since its discovery in the late 18th century, water electrolysis has matured into several technologies. One of the most promising is the proton-exchange-membrane (PEM) electrolyzer, which can produce green hydrogen at high rates with high energy efficiency.
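    For reference, the standard PEM water-electrolysis half-reactions are textbook chemistry, not specific to this work:

        % Standard PEM water-electrolysis half-reactions (textbook chemistry)
        \begin{align*}
        \text{Anode (OER):}   &\quad 2\,\mathrm{H_2O} \rightarrow \mathrm{O_2} + 4\,\mathrm{H^+} + 4\,e^- \\
        \text{Cathode (HER):} &\quad 4\,\mathrm{H^+} + 4\,e^- \rightarrow 2\,\mathrm{H_2} \\
        \text{Overall:}       &\quad 2\,\mathrm{H_2O} \rightarrow 2\,\mathrm{H_2} + \mathrm{O_2}
        \end{align*}

    Protons produced at the anode migrate through the membrane to the cathode, and it is the anode's combination of strong acidity and oxidizing potential that makes its catalyst so hard to replace, as discussed below.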

    PEM Water Electrolyzer Graphic

    Infographic explaining the concept of a PEM water electrolyzer, how it works, the new technique implemented by the team, and the results obtained. Credit: ICFO

    To date, water electrolysis, and PEM electrolysis in particular, has required catalysts based on scarce, rare elements, such as platinum and iridium. Only a few compounds combine the required activity and stability in the harsh chemical environment imposed by this reaction. This is especially challenging for anode catalysts, which have to operate in highly corrosive acidic environments; under the required industrial conditions, only iridium oxides have shown stable operation there. But iridium is one of the scarcest elements on Earth.

    In search of possible solutions, a team of scientists has recently taken an important step towards finding alternatives to iridium catalysts. This multidisciplinary team has developed a novel way to confer activity and stability on an iridium-free catalyst by harnessing previously unexplored properties of water. The new catalyst achieves, for the first time, stability in PEM water electrolysis at industrial conditions without the use of iridium.

    This breakthrough, published in Science, was carried out by ICFO researchers Ranit Ram, Dr. Lu Xia, Dr. Anku Guha, Dr. Viktoria Golovanova, Dr. Marinos Dimitropoulos, Aparna M. Das and Adrián Pinilla-Sánchez, led by ICFO professor Dr. F. Pelayo García de Arquer, and includes important collaborations with the Institute of Chemical Research of Catalonia (ICIQ), the Catalan Institute of Nanoscience and Nanotechnology (ICN2), the French National Center for Scientific Research (CNRS), Diamond Light Source, and the Institute of Advanced Materials (INAM).

    Dealing with the acidity

    Combining activity and stability in a highly acidic environment is challenging. Metals in the catalyst tend to dissolve, as most materials are not thermodynamically stable in water at low pH and under an applied potential. Iridium oxides combine activity and stability under these harsh conditions, which is why they are the prevalent choice for anodes in proton-exchange-membrane water electrolysis.

    The search for alternatives to iridium is not only an important applied challenge but a fundamental one. Intense research into non-iridium catalysts has led to new insights into reaction mechanisms and degradation, especially through probes that can study the catalysts during operation, combined with computational models. These efforts have produced promising results with manganese- and cobalt-oxide-based materials, exploiting different structures, compositions, and dopants to modify the physicochemical properties of the catalysts.

    While insightful, most of these studies were performed in fundamental, non-scalable reactors operating under milder conditions far from the final application, especially in terms of current density. Demonstrating activity and stability with non-iridium catalysts in PEM reactors, at PEM-relevant operating conditions (high current density), had to date remained elusive.

    Lu Xia, Ranit Ram and Anku Guha

    From left to right: Lu Xia, Ranit Ram and Anku Guha, in the lab with the device. Credit: ICFO

    To overcome this, the ICFO, ICIQ, ICN2, CNRS, Diamond Light Source, and INAM researchers came up with a new approach to the design of non-iridium catalysts, achieving activity and stability in acidic media. Their strategy, based on cobalt (an abundant and inexpensive metal), departed markedly from the usual paths.

    “Conventional catalyst design typically focuses on changing the composition or the structure of the employed materials. Here, we took a different approach. We designed a new material that actively involves the ingredients of the reaction (water and its fragments) in its structure. We found that the incorporation of water and water fragments into the catalyst structure can be tailored to shield the catalyst under these challenging conditions, thus enabling stable operation at the high current densities that are relevant for industrial applications,” explains ICFO professor García de Arquer. Their technique, a delamination process that exchanges part of the material for water, yields a catalyst that is a viable alternative to iridium-based ones.

    A new approach: the delamination process

    To obtain the catalyst, the team looked into a particular cobalt oxide: cobalt tungstate (CoWO4), or CWO for short. Starting from this material, they designed a delamination process in basic aqueous solutions whereby tungstate ions (WO4²⁻) are removed from the lattice and exchanged for water (H2O) and hydroxide (OH⁻) groups. The process can be tuned to incorporate different amounts of H2O and OH into the catalyst, which is then incorporated onto the anode electrodes.
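    One schematic way to picture this exchange is sketched below; it is illustrative and deliberately left unbalanced, since the article does not specify the stoichiometry.

        % Illustrative, unbalanced sketch of the delamination step: tungstate
        % leaches out of the lattice and water/hydroxide fill the vacancies.
        \begin{equation*}
        \mathrm{CoWO_4}
          \xrightarrow{\ \mathrm{OH^-},\ \mathrm{H_2O}\ }
        \mathrm{CoO_x(OH)_y(H_2O)_z} + \mathrm{WO_4^{2-}}\ (\text{leached})
        \end{equation*}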

    The team combined different photon-based spectroscopies to understand this new class of material during operation. Using infrared and Raman spectroscopy and X-ray techniques, among others, they were able to assess the presence of trapped water and hydroxide groups, and to obtain insights into their role in conferring activity and stability for water splitting in acid. “Being able to detect the trapped water was really challenging for us,” continues leading co-author Dr. Anku Guha. “Using Raman spectroscopy and other light-based techniques we finally saw that there was water in the sample. But it was not ‘free’ water; it was confined water”; something that had a profound impact on performance.

    F. Pelayo García de Arquer, Marinos Dimitropoulos, Lu Xia, Aparna M. Das, Viktoria Golovanova, Anku Guha, and Ranit Ram

    ICFO family picture, from left to right: F. Pelayo García de Arquer, Marinos Dimitropoulos, Lu Xia, Aparna M. Das, Viktoria Golovanova, Anku Guha, and Ranit Ram. Credit: ICFO

    From these insights, they started working closely with collaborators who are experts in catalyst modeling. “The modeling of activated materials is challenging, as large structural rearrangements take place. In this case, the delamination employed in the activation treatment increases the number of active sites and changes the reaction mechanism, rendering the material more active. Understanding these materials requires a detailed mapping between experimental observations and simulations,” says Prof. Núria López from ICIQ. The calculations, led by leading co-author Dr. Hind Benzidi, were crucial to understanding how the delaminated materials, shielded by water, were not only thermodynamically protected against dissolution in highly acidic environments but also active.

    But how is this possible? Essentially, the removal of tungstate leaves a hole behind, exactly where it was previously located. Here is where the “magic” happens: water and hydroxide, which are abundant in the medium, spontaneously fill the gap. This in turn shields the sample, rendering cobalt dissolution an unfavorable process and effectively holding the catalyst components together.

    Then, they assembled the delaminated catalyst into a PEM reactor. The initial performance was truly remarkable, with higher activity and stability than any prior report. “We increased the current density fivefold, arriving at 1 A/cm² – a very challenging landmark in the field. But the key is that we also reached more than 600 hours of stability at such a high density. So we have reached both the highest current density and the highest stability for non-iridium catalysts,” shares leading co-author Dr. Lu Xia.
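    To put 1 A/cm² in perspective, Faraday's law converts a current density into a hydrogen production rate. The sketch below is illustrative only and assumes 100% Faradaic efficiency, an idealization rather than a figure from the paper.

        # Illustrative only: hydrogen output implied by 1 A/cm² via Faraday's law,
        # assuming 100% Faradaic efficiency (an idealization, not a paper figure).
        FARADAY = 96485.0   # Faraday constant, C per mol of electrons
        j = 1.0             # current density, A/cm² (the landmark quoted above)
        n = 2               # electrons transferred per H2 molecule
        M_H2 = 2.016        # molar mass of H2, g/mol

        mol_per_s_cm2 = j / (n * FARADAY)             # mol H2 per second per cm²
        g_per_day_cm2 = mol_per_s_cm2 * M_H2 * 86400  # grams of H2 per day per cm²
        print(f"~{g_per_day_cm2:.2f} g of H2 per day per cm² of electrode")  # ~0.90 g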

    “At the beginning of the project, we were intrigued by the potential role of water itself as the elephant in the room in water electrolysis,” explains Ranit Ram, first author of the study and instigator of the initial idea. “No one before had actively tailored water and interfacial water in this way.” In the end, it turned out to be a real game-changer.

    Even though the stability time is still far from that of current industrial PEM electrolyzers, this represents a big step towards making them independent of iridium and similarly scarce elements. In particular, the work brings new insights for the design of PEM water electrolyzers, highlighting the potential to approach catalyst engineering from another perspective: actively exploiting the properties of water.

    Towards industrialization

    The team sees such potential in the technique that they have already applied for a patent, with the aim of scaling it up to industrial levels of production. Yet they are aware that this step is far from trivial, as Prof. García de Arquer notes: “Cobalt, although more abundant than iridium, is still a very troubling material considering where it is obtained. That is why we are working on alternatives based on manganese, nickel, and many other materials. We will go through the whole periodic table if necessary, and we are going to explore with them this new strategy for designing catalysts that we have reported in our study.”

    Despite the new challenges that will surely arise, the team is convinced of the potential of this delamination process and determined to pursue this goal. Ram, in particular, shares: “I have always wanted to advance renewable energies, because they will help us as a human community to fight climate change. I believe our studies contributed one small step in the right direction.”

    Reference: “Water-hydroxide trapping in cobalt tungstate for proton exchange membrane water electrolysis” by Ranit Ram, Lu Xia, Hind Benzidi, Anku Guha, Viktoria Golovanova, Alba Garzón Manjón, David Llorens Rauret, Pol Sanz Berman, Marinos Dimitropoulos, Bernat Mundet, Ernest Pastor, Veronica Celorrio, Camilo A. Mesa, Aparna M. Das, Adrián Pinilla-Sánchez, Sixto Giménez, Jordi Arbiol, Núria López and F. Pelayo García de Arquer, 20 June 2024, Science.
    DOI: 10.1126/science.adk9849

    Funding: European Commission, “la Caixa” Foundation, Generalitat de Catalunya, Ministry of Science and Innovation, Fundación BBVA


