Tag: Quanta Magazine

  • The Quantum Mechanics of the Greenhouse Effect


    A key question was the origin of the logarithmic scaling of the greenhouse effect—the 2-to-5-degree temperature rise that models predict will happen for every doubling of CO2. One theory held that the scaling comes from how quickly the temperature drops with altitude. But in 2022, a team of researchers used a simple model to prove that the logarithmic scaling comes from the shape of carbon dioxide’s absorption “spectrum”—how its ability to absorb light varies with the light’s wavelength.

    This goes back to those wavelengths that are slightly longer or shorter than 15 microns. A critical detail is that carbon dioxide is worse—but not too much worse—at absorbing light with those wavelengths. The absorption falls off on either side of the peak at just the right rate to give rise to the logarithmic scaling.
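    To see how an absorption spectrum with exponential wings produces logarithmic warming, consider the toy calculation below. It is a minimal sketch with made-up numbers, not the 2022 model itself: each doubling of the concentration widens the band of wavelengths that is effectively opaque by the same fixed amount.

        import numpy as np

        # Toy illustration (not the authors' model): assume the absorption
        # coefficient falls off exponentially on either side of the 15-micron
        # peak, which sits near 667 cm^-1 in wavenumber units.
        nu = np.linspace(500.0, 850.0, 2001)          # wavenumber grid, cm^-1
        kappa = np.exp(-np.abs(nu - 667.0) / 25.0)    # idealized spectrum, arbitrary units

        def opaque_bandwidth(concentration, threshold=0.5):
            """Width (cm^-1) of the band where absorption times concentration exceeds a threshold."""
            return (kappa * concentration > threshold).sum() * (nu[1] - nu[0])

        for c in (1, 2, 4, 8, 16):
            print(c, round(opaque_bandwidth(c), 1))
        # Each doubling of the concentration widens the opaque band by roughly the
        # same number of wavenumbers, so the warming it drives grows logarithmically
        # with concentration.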

    “The shape of that spectrum is essential,” said David Romps, a climate physicist at the University of California, Berkeley, who coauthored the 2022 paper. “If you change it, you don’t get the logarithmic scaling.”

    The carbon dioxide spectrum’s shape is unusual—most gases absorb a much narrower range of wavelengths. “The question I had at the back of my mind was: Why does it have this shape?” Romps said. “But I couldn’t put my finger on it.”

    Consequential Wiggles

    Wordsworth and his coauthors Jacob Seeley and Keith Shine turned to quantum mechanics to find the answer.

    Light is made of packets of energy called photons. Molecules like CO2 can absorb them only when the packets have exactly the right amount of energy to bump the molecule up to a different quantum mechanical state.

    Carbon dioxide usually sits in its “ground state,” where its three atoms form a line with the carbon atom in the center, equidistant from the others. The molecule has “excited” states as well, in which its atoms undulate or swing about.


    A photon of 15-micron light contains the exact energy required to set the carbon atom swirling about the center point in a sort of hula-hoop motion. Climate scientists have long blamed this hula-hoop state for the greenhouse effect, but—as Ångström anticipated—exciting it requires too precise an amount of energy, Wordsworth and his team found. The hula-hoop state can’t explain the relatively slow decline in the absorption rate for photons further from 15 microns, so it can’t explain climate change by itself.

    The key, they found, is another type of motion, where the two oxygen atoms repeatedly bob toward and away from the carbon center, as if stretching and compressing a spring connecting them. This motion takes too much energy to be induced by Earth’s infrared photons on their own.

    But the authors found that the energy of the stretching motion is so close to double that of the hula-hoop motion that the two states of motion mix with one another. Special combinations of the two motions exist, requiring slightly more or less than the exact energy of the hula-hoop motion.
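    The mixing itself is textbook quantum mechanics: two states with nearly the same energy, plus a coupling between them, produce two blended states pushed slightly apart in energy. The sketch below uses illustrative numbers of roughly the right size for CO2, not values taken from the new paper.

        import numpy as np

        # Two nearly degenerate vibrational levels (in wavenumber units, cm^-1):
        # the stretch and the overtone of the hula-hoop (bending) motion, whose
        # energy is close to double the bend's. The coupling strength is illustrative.
        E_stretch = 1337.0
        E_bend_overtone = 1337.0
        coupling = 51.0

        H = np.array([[E_stretch, coupling],
                      [coupling, E_bend_overtone]])

        energies, mixtures = np.linalg.eigh(H)
        print(energies)   # two mixed levels near 1286 and 1388 cm^-1, split by the coupling
        print(mixtures)   # each column gives the blend of the two original motions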

    This phenomenon is called Fermi resonance, after the famous physicist Enrico Fermi, who derived it in a 1931 paper. But its connection to Earth’s climate was first made only last year, in a paper by Shine and his student, and the paper this spring is the first to fully lay it bare.


  • The Vacuum of Space Will Decay Sooner Than Expected


    The original version of this story appeared in Quanta Magazine.

    Vacuum decay, a process that could end the universe as we know it, may happen 10,000 times sooner than expected. Fortunately, it still won’t happen for a very, very long time.

    When physicists speak of “the vacuum,” the term sounds as though it refers to empty space, and in a sense it does. More specifically, it refers to a set of defaults, like settings on a control board. When the quantum fields that permeate space sit at these default values, you consider space to be empty. Small tweaks to the settings create particles—turn the electromagnetic field up a bit, and you get a photon. Big tweaks, on the other hand, are best thought of as new defaults altogether. They create a different definition of empty space, with different traits.

    One quantum field is special because its default value can change. Called the Higgs field, it controls the mass of many fundamental particles, like electrons and quarks. Unlike every other quantum field physicists have discovered, the Higgs field has a default value above zero. Dialing the Higgs field value up or down would increase or decrease the mass of electrons and other particles. If the setting of the Higgs field were zero, those particles would be massless.

    We could stay at the nonzero default for eternity, were it not for quantum mechanics. A quantum field can “tunnel,” jumping to a new, lower-energy value even if it doesn’t have enough energy to pass through the higher-energy intermediate settings, an effect akin to tunneling through a solid wall.

    For this to happen, you need to have a lower-energy state to tunnel to. Before the Large Hadron Collider was built, physicists thought that the current state of the Higgs field might be the lowest possible one. That belief has now changed.

    The curve that represents the energy required for different settings of the Higgs field was always known to resemble a sombrero with an upturned brim. The current setting of the Higgs field can be pictured as a ball resting at the bottom of the brim.
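    In textbook notation (a standard sketch, not the new calculation), the energy curve near the brim can be written as

        V(h) = -\tfrac{1}{2}\mu^{2} h^{2} + \tfrac{1}{4}\lambda h^{4},
        \qquad v = \mu/\sqrt{\lambda} \approx 246 \text{ GeV}.

    The ball at the bottom of the brim sits at that minimum, near 246 GeV. In the full calculation the couplings in this formula change with the field value, and the worry described above is that at enormously larger field values the curve turns back down below the brim, giving the field a deeper setting it could eventually tunnel into.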

    Illustration: Mark Belan for Quanta Magazine


  • The Physics of Cold Water May Have Jump-Started Complex Life


    After 30 days, the algae in the middle were still unicellular. As the scientists put algae from thicker and thicker rings under the microscope, however, they found larger clumps of cells. The very largest were wads of hundreds. But what interested Simpson the most were mobile clusters of four to 16 cells, arranged so that their flagella were all on the outside. These clusters moved around by coordinating the movement of their flagella, the ones at the back of the cluster holding still, the ones at the front wriggling.

    Comparing the speed of these clusters to the single cells in the middle revealed something interesting. “They all swim at the same speed,” Simpson said. By working together as a collective, the algae could preserve their mobility. “I was really pleased,” he said. “With the coarse mathematical framework, there were a few predictions I could make. To actually see it empirically means there’s something to this idea.”

    Intriguingly, when the scientists took these little clusters from the high-viscosity gel and put them back at low viscosity, the cells stuck together. They remained this way, in fact, for as long as the scientists continued to watch them, about 100 more generations. Clearly, whatever changes they underwent to survive at high viscosity were hard to reverse, Simpson said—perhaps an evolutionary change rather than a short-term, reversible shift.

    In gel as viscous as ancient oceans, algal cells began working together: they clumped up and coordinated the movements of their tail-like flagella to swim more quickly. When placed back in normal viscosity, they remained together.
    Illustration: Andrea Halling

    Modern-day algae are not early animals. But the fact that these physical pressures forced a unicellular creature into an alternate way of life that was hard to reverse feels quite powerful, Simpson said. He suspects that if scientists explore the idea that when organisms are very small, viscosity dominates their existence, we could learn something about conditions that might have led to the explosion of large forms of life.

    A Cell’s Perspective

    As large creatures, we don’t think much about the thickness of the fluids around us. It’s not a part of our daily lived experience, and we are so big that viscosity doesn’t impinge on us very much. The ability to move easily—relatively speaking—is something we take for granted. From the time Simpson first realized that such limits on movement could be a monumental obstacle to microscopic life, he hasn’t been able to stop thinking about it. Viscosity may have mattered quite a lot in the origins of complex life, whenever that was.
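    The standard way to quantify how strongly viscosity rules a swimmer of a given size is the Reynolds number, the ratio of inertial to viscous forces. The rough comparison below is an illustration of that idea, not a calculation from Simpson's work.

        # Rough illustration: the Reynolds number rho * speed * size / viscosity
        # compares inertia to viscous drag. Values far below 1 mean the swimmer
        # lives in a world ruled by viscosity.

        def reynolds(density, speed, size, viscosity):
            return density * speed * size / viscosity

        water_density = 1000.0      # kg/m^3
        water_viscosity = 1.0e-3    # Pa*s, ordinary water

        # A swimming human: ~2 m long, moving ~1 m/s -> inertia dominates.
        print(reynolds(water_density, 1.0, 2.0, water_viscosity))      # ~2,000,000

        # A single algal cell: ~10 micrometers across, ~100 micrometers/s -> viscosity dominates.
        print(reynolds(water_density, 1e-4, 1e-5, water_viscosity))    # ~0.001

        # In a gel several times more viscous than water, as in the experiment
        # described above, the cell's Reynolds number is smaller still.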

    “[This perspective] allows us to think about the deep-time history of this transition,” Simpson said, “and what was going on in Earth’s history when all the obligately complicated multicellular groups evolved, which is relatively close to each other, we think.”

    Other researchers find Simpson’s ideas quite novel. Before Simpson, no one seems to have thought very much about organisms’ physical experience of being in the ocean during Snowball Earth, said Nick Butterfield of the University of Cambridge, who studies the evolution of early life. He cheerfully noted, however, that “Carl’s idea is fringe.” That’s because the vast majority of theories about Snowball Earth’s influence on the evolution of multicellular animals, plants, and algae focus on how levels of oxygen, inferred from isotope levels in rocks, could have tipped the scales in one way or another, he said.


  • ‘Gem’ of a Proof Breaks 80-Year-Old Record, Offers New Insights Into Prime Numbers


    The original version of this story appeared in Quanta Magazine.

    Sometimes mathematicians try to tackle a problem head on, and sometimes they come at it sideways. That’s especially true when the mathematical stakes are high, as with the Riemann hypothesis, whose solution comes with a $1 million reward from the Clay Mathematics Institute. Its proof would give mathematicians much deeper certainty about how prime numbers are distributed, while also implying a host of other consequences—making it arguably the most important open question in math.

    Mathematicians have no idea how to prove the Riemann hypothesis. But they can still get useful results just by showing that the number of possible exceptions to it is limited. “In many cases, that can be as good as the Riemann hypothesis itself,” said James Maynard of the University of Oxford. “We can get similar results about prime numbers from this.”

    In a breakthrough result posted online in May, Maynard and Larry Guth of the Massachusetts Institute of Technology established a new cap on the number of exceptions of a particular type, finally beating a record that had been set more than 80 years earlier. “It’s a sensational result,” said Henryk Iwaniec of Rutgers University. “It’s very, very, very hard. But it’s a gem.”

    The new proof automatically leads to better approximations of how many primes exist in short intervals on the number line, and stands to offer many other insights into how primes behave.

    A Careful Sidestep

    The Riemann hypothesis is a statement about a central formula in number theory called the Riemann zeta function. The zeta (ζ) function is a generalization of a straightforward sum:

    1 + 1/2 + 1/3 + 1/4 + 1/5 + ⋯.

    This series will become arbitrarily large as more and more terms are added to it—mathematicians say that it diverges. But if instead you were to sum up

    1 + 1/2² + 1/3² + 1/4² + 1/5² + ⋯ = 1 + 1/4 + 1/9 + 1/16 + 1/25 + ⋯

    you would get π²/6, or about 1.64. Riemann’s surprisingly powerful idea was to turn a series like this into a function, like so:

    ζ(s) = 1 + 1/2ˢ + 1/3ˢ + 1/4ˢ + 1/5ˢ + ⋯.

    So ζ(1) is infinite, but ζ(2) = π²/6.
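    A few lines of arithmetic confirm the behavior of these sums (an illustrative check, not part of any proof):

        import math

        def partial_sum(s, terms):
            """Sum of 1/n^s over the first `terms` values of n."""
            return sum(1 / n**s for n in range(1, terms + 1))

        for terms in (10, 1_000, 100_000):
            print(terms, partial_sum(1, terms), partial_sum(2, terms))
        # The s = 1 column keeps growing without bound (the series diverges),
        # while the s = 2 column settles toward pi^2 / 6:
        print(math.pi**2 / 6)   # 1.6449...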

    Things get really interesting when you let s be a complex number, which has two parts: a “real” part, which is an everyday number, and an “imaginary” part, which is an everyday number multiplied by the square root of −1 (or i, as mathematicians write it). Complex numbers can be plotted on a plane, with the real part on the x-axis and the imaginary part on the y-axis. Here, for example, is 3 + 4i.

    Graph: Mark Belan for Quanta Magazine


  • What Came Before the Big Bang?


    Robert Brandenberger, a physicist at McGill University who was not involved with the study, said the new paper “sets a new standard of rigor for the analysis” of the mathematics of the beginning of time. In some cases, what appears at first to be a singularity—a point in space-time where mathematical descriptions lose their meaning—may in fact be an illusion.

    A Taxonomy of Singularities

    The central issue confronting Geshnizjani, Ling, and Quintin is whether there is a point prior to inflation at which the laws of gravity break down in a singularity. The simplest example of a mathematical singularity is what happens to the function 1/x as x approaches zero. The function takes a number x as an input, and outputs another number. As x gets smaller and smaller, 1/x gets larger and larger, approaching infinity. If x is zero, the function is no longer well defined: It can’t be relied upon as a description of reality.


    “We mathematically showed that there might be a way to see beyond our universe,” said Eric Ling of the University of Copenhagen.

    Photograph: Annachiara Piubello

    Sometimes, however, mathematicians can get around a singularity. For example, consider the prime meridian, which passes through Greenwich, England, at longitude zero. If you had a function of 1/longitude, it would go berserk in Greenwich. But there’s not actually anything physically special about suburban London: You could easily redefine zero longitude to pass through some other place on Earth, and then your function would behave perfectly normally when approaching the Royal Observatory in Greenwich.

    Something similar happens at the boundary of mathematical models of black holes. The equations that describe spherical nonrotating black holes, worked out by the physicist Karl Schwarzschild in 1916, have a term whose denominator goes to zero at the event horizon of the black hole—the surface surrounding a black hole beyond which nothing can escape. That led physicists to believe that the event horizon was a physical singularity. But eight years later the astronomer Arthur Eddington showed that if a different set of coordinates is used, the singularity disappears. Like the prime meridian, the event horizon is an illusion: a mathematical artifact called a coordinate singularity, which only arises because of the choice of coordinates.
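    In standard textbook form (not notation specific to the new paper), the Schwarzschild solution reads

        ds^{2} = -\left(1 - \frac{r_s}{r}\right) c^{2}\, dt^{2}
                 + \frac{dr^{2}}{1 - r_s/r}
                 + r^{2}\, d\Omega^{2},
        \qquad r_s = \frac{2GM}{c^{2}}.

    The coefficient of dr² blows up as r approaches the horizon radius r_s, yet curvature invariants stay finite there; rewriting the metric in Eddington's coordinates removes the blow-up, just as redefining zero longitude removes the trouble at Greenwich. Only at r = 0 does the curvature itself diverge.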

    At a black hole’s center, by contrast, the density and curvature go to infinity in a way that can’t be eliminated by using a different coordinate system. The laws of general relativity start spewing out gibberish. This is called a curvature singularity. It implies that something is taking place that’s beyond the ability of current physical and mathematical theories to describe.


  • Light-Based Chips Could Help Slake AI’s Ever-Growing Thirst for Energy


    “What we have here is something incredibly simple,” said Tianwei Wu, the study’s lead author. “We can reprogram it, changing the laser patterns on the fly.” The researchers used the system to design a neural network that successfully discriminated vowel sounds. Most photonic systems need to be trained before they’re built, since training necessarily involves reconfiguring connections. But since this system is easily reconfigured, the researchers trained the model after it was installed on the semiconductor. They now plan to increase the size of the chip and encode more information in different colors of light, which should increase the amount of data it can handle.

    It’s progress that even Psaltis, who built the facial recognition system in the ’90s, finds impressive. “Our wildest dreams of 40 years ago were very modest compared to what has actually transpired.”

    First Rays of Light

    While optical computing has advanced quickly over the past several years, it’s still far from displacing the electronic chips that run neural networks outside of labs. Papers announce photonic systems that work better than electronic ones, but they generally run small models using old network designs and small workloads. And many of the reported figures about photonic supremacy don’t tell the whole story, said Bhavin Shastri of Queen’s University in Ontario. “It’s very hard to do an apples-to-apples comparison with electronics,” he said. “For instance, when they use lasers, they don’t really talk about the energy to power the lasers.”

    Lab systems need to be scaled up before they can show competitive advantages. “How big do you have to make it to get a win?” McMahon asked. The answer: exceptionally big. That’s why no photonic system can yet match the chips made by Nvidia, which power many of the most advanced AI systems today. There is a huge list of engineering puzzles to figure out along the way—issues that the electronics side has solved over decades. “Electronics is starting with a big advantage,” said McMahon.

    Some researchers think ONN-based AI systems will first find success in specialized applications where they provide unique advantages. Shastri said one promising use is in counteracting interference between different wireless transmissions, such as 5G cellular towers and the radar altimeters that help planes navigate. Early this year, Shastri and several colleagues created an ONN that can sort out different transmissions and pick out a signal of interest in real time and with a processing delay of under 15 picoseconds (15 trillionths of a second)—less than one-thousandth of the time an electronic system would take, while using less than 1/70 of the power.

    But McMahon said the grand vision—an optical neural network that can surpass electronic systems for general use—remains worth pursuing. Last year his group ran simulations showing that, within a decade, a sufficiently large optical system could make some AI models more than 1,000 times as efficient as future electronic systems. “Lots of companies are now trying hard to get a 1.5-times benefit. A thousand-times benefit, that would be amazing,” he said. “This is maybe a 10-year project—if it succeeds.”


    Original story reprinted with permission from Quanta Magazine, an editorially independent publication of the Simons Foundation whose mission is to enhance public understanding of science by covering research developments and trends in mathematics and the physical and life sciences.


  • How Game Theory Can Make AI More Reliable


    Posing a far greater challenge for AI researchers was the game of Diplomacy—a favorite of politicians like John F. Kennedy and Henry Kissinger. Instead of just two opponents, the game features seven players whose motives can be hard to read. To win, a player must negotiate, forging cooperative arrangements that anyone could breach at any time. Diplomacy is so complex that a group from Meta was pleased when, in 2022, its AI program Cicero developed “human-level play” over the course of 40 games. While it did not vanquish the world champion, Cicero did well enough to place in the top 10 percent against human participants.

    During the project, Jacob—a member of the Meta team—was struck by the fact that Cicero relied on a language model to generate its dialog with other players. He sensed untapped potential. The team’s goal, he said, “was to build the best language model we could for the purposes of playing this game.” But what if instead they focused on building the best game they could to improve the performance of large language models?

    Consensual Interactions

    In 2023, Jacob began to pursue that question at MIT, working with Yikang Shen, Gabriele Farina, and his adviser, Jacob Andreas, on what would become the consensus game. The core idea came from imagining a conversation between two people as a cooperative game, where success occurs when a listener understands what a speaker is trying to convey. In particular, the consensus game is designed to align the language model’s two systems—the generator, which handles generative questions, and the discriminator, which handles discriminative ones.

    After a few months of stops and starts, the team built this principle up into a full game. First, the generator receives a question. It can come from a human or from a preexisting list. For example, “Where was Barack Obama born?” The generator then gets some candidate responses, let’s say Honolulu, Chicago, and Nairobi. Again, these options can come from a human, a list, or a search carried out by the language model itself.

    But before answering, the generator is also told whether it should answer the question correctly or incorrectly, depending on the results of a fair coin toss.

    If it’s heads, then the machine attempts to answer correctly. The generator sends the original question, along with its chosen response, to the discriminator. If the discriminator determines that the generator intentionally sent the correct response, they each get one point, as a kind of incentive.

    If the coin lands on tails, the generator sends what it thinks is the wrong answer. If the discriminator decides it was deliberately given the wrong response, they both get a point again. The idea here is to incentivize agreement. “It’s like teaching a dog a trick,” Jacob explained. “You give them a treat when they do the right thing.”

    The generator and discriminator also each start with some initial “beliefs.” These take the form of a probability distribution related to the different choices. For example, the generator may believe, based on the information it has gleaned from the internet, that there’s an 80 percent chance Obama was born in Honolulu, a 10 percent chance he was born in Chicago, a 5 percent chance of Nairobi, and a 5 percent chance of other places. The discriminator may start off with a different distribution. While the two “players” are still rewarded for reaching agreement, they also get docked points for deviating too far from their original convictions. That arrangement encourages the players to incorporate their knowledge of the world—again drawn from the internet—into their responses, which should make the model more accurate. Without something like this, they might agree on a totally wrong answer like Delhi, but still rack up points.
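    A single round of such a game can be sketched in a few lines of code. Everything below, from the candidate answers to the form of the penalty, is an illustrative assumption rather than the researchers' implementation.

        import math

        # Initial "beliefs": the probability the generator assigns to each candidate answer.
        generator_prior = {"Honolulu": 0.80, "Chicago": 0.10, "Nairobi": 0.10}

        def belief_penalty(answer, prior):
            """Cost for straying from an initial belief (one simple choice of penalty)."""
            return -math.log(prior[answer])

        def play_round(coin_is_heads, generator_answer, discriminator_says_correct):
            """Score one round for the generator. Both players earn a point when the
            discriminator's judgment matches the generator's assigned intent:
            answer correctly on heads, incorrectly on tails."""
            agreement = (discriminator_says_correct == coin_is_heads)
            reward = 1.0 if agreement else 0.0
            # Players are also docked for deviating from their original convictions;
            # only the generator's penalty is shown here, the discriminator's is analogous.
            return reward - 0.1 * belief_penalty(generator_answer, generator_prior)

        # Heads: the generator tries to answer correctly, picks "Honolulu", and the
        # discriminator judges the response to be an intentionally correct one.
        print(play_round(True, "Honolulu", True))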


  • The Hunt for Ultralight Dark Matter


    If or when SLAC’s planned project, the Light Dark Matter Experiment (LDMX), receives funding—a decision from the Department of Energy is expected in the next year or so—it will scan for light dark matter. The experiment is designed to accelerate electrons toward a target made of tungsten in End Station A. In the vast majority of collisions between a speeding electron and a tungsten nucleus, nothing interesting will happen. But rarely—on the order of once every 10,000 trillion hits, if light dark matter exists—the electron will instead interact with the nucleus via the unknown dark force to produce light dark matter, significantly draining the electron’s energy.

    That one-in-10,000-trillion rate is actually the worst-case scenario for light dark matter: it’s the lowest production rate consistent with the thermal-relic measurements. But Schuster says light dark matter might arise in upward of one in every 100 billion impacts. If so, then with the planned collision rate of the experiment, “that’s an inordinate amount of dark matter that you can produce.”
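    The scale of those odds becomes concrete with a bit of arithmetic. The total number of electrons fired at the target below is an assumption chosen for illustration, not a figure quoted by the collaboration.

        # Back-of-envelope arithmetic for the rates quoted above. The number of
        # electrons on target is an illustrative assumption, not an LDMX figure.

        electrons_on_target = 1e16     # assumed total electron hits over a multi-year run

        worst_case_rate = 1 / 1e16     # "once every 10,000 trillion hits"
        optimistic_rate = 1 / 1e11     # "one in every 100 billion impacts"

        print(electrons_on_target * worst_case_rate)    # ~1 dark matter event in the whole run
        print(electrons_on_target * optimistic_rate)    # ~100,000 events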

    LDMX will need to run for three to five years, Nelson said, to definitively detect or rule out thermal relic light dark matter.

    Ultralight Dark Matter

    Other dark matter hunters have their experiments tuned for a different candidate. Ultralight dark matter is axionlike but no longer obliged to solve the strong CP problem. Because of this, it can be much more lightweight than ordinary axions, as light as 10 billionths of a trillionth of the electron’s mass. That tiny mass corresponds to a wave with a vast wavelength, as long as a small galaxy. In fact, the mass can’t be any smaller because if it were, the even longer wavelengths would mean that dark matter could not be concentrated around galaxies, as astronomers observe.
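    The link between a tiny mass and a galaxy-sized wave is the de Broglie relation, wavelength = h / (mass × speed). The quick estimate below asks what mass corresponds to a wave about a kiloparsec across; the orbital speed is a typical galactic value assumed for illustration.

        # De Broglie estimate: what particle mass gives a wave roughly the size
        # of a small galaxy? The speed is an assumed, typical galactic orbital speed.

        h = 6.626e-34            # Planck's constant, J*s
        kg_per_eV = 1.78e-36     # mass equivalent of 1 eV/c^2, in kg
        parsec = 3.09e16         # meters

        speed = 2.0e5                  # ~200 km/s
        wavelength = 1000 * parsec     # ~1 kiloparsec, about the size of a small galaxy

        mass_kg = h / (wavelength * speed)
        print(mass_kg / kg_per_eV)     # of order 1e-22 eV: any lighter and the wave outgrows the galaxy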

    Ultralight dark matter is so incredibly minuscule that the dark-force particle needed to mediate its interactions is thought to be massive. “There’s no name given to these mediators,” Schuster said, “because it’s outside of any possible experiment. It has to be there [in the theory] for consistency, but we don’t worry about them.”

    The origin story for ultralight dark matter particles depends on the particular theoretical model, but Toro says they would have arisen after the Big Bang, so the thermal-relic argument is irrelevant. There’s a different motivation for thinking about them. The particles naturally follow from string theory, a candidate for the fundamental theory of physics. These feeble particles arise from the ways that six tiny dimensions might be curled up or “compactified” at each point in our 4D universe, according to string theory. “The existence of light axionlike particles is strongly motivated by many kinds of string compactifications,” said Jessie Shelton, a physicist at the University of Illinois, “and it’s something that we should take seriously.”

    Rather than trying to create dark matter using an accelerator, experiments looking for axions and ultralight dark matter listen for the dark matter that supposedly surrounds us. Based on its gravitational effects, dark matter seems to be distributed most densely near the Milky Way’s center, but one estimate suggests that even out here on Earth, we can expect dark matter to have a density of almost half a proton’s mass per cubic centimeter. Experiments try to detect this ever-present dark matter using powerful magnetic fields. In theory, the ethereal dark matter will occasionally absorb a photon from the strong magnetic field and convert it into a microwave photon, which an experiment can detect.
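    The microwave part of that prediction follows from the particle's mass: the photon produced in the conversion carries roughly the particle's rest energy, so its frequency is that energy divided by Planck's constant. The mass below is an assumed, illustrative value in the range such experiments probe.

        # Frequency of the photon produced when a particle of a given mass converts
        # in the magnetic field: f = rest energy / h. The mass is illustrative.

        h = 6.626e-34            # Planck's constant, J*s
        J_per_eV = 1.602e-19     # joules per electronvolt

        mass_eV = 4e-6           # an assumed axionlike mass of a few micro-electronvolts
        frequency_Hz = mass_eV * J_per_eV / h
        print(frequency_Hz / 1e9)    # ~1 GHz: a microwave photon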


  • Does String Theory Actually Describe the World? AI May Be Able to Tell


    A group led by string theory veterans Burt Ovrut of the University of Pennsylvania and Andre Lukas of Oxford went further. They too started with Ruehle’s metric-calculating software, which Lukas had helped develop. Building on that foundation, they added an array of 11 neural networks to handle the different types of sprinkles. These networks allowed them to calculate an assortment of fields that could take on a richer variety of shapes, creating a more realistic setting that can’t be studied with any other techniques. This army of machines learned the metric and the arrangement of the fields, calculated the Yukawa couplings, and spit out the masses of three types of quarks. It did all this for six differently shaped Calabi-Yau manifolds. “This is the first time anybody has been able to calculate them to that degree of accuracy,” Anderson said.

    None of those Calabi-Yaus underlies our universe, because two of the quarks have identical masses, while the six varieties in our world come in three tiers of masses. Rather, the results represent a proof of principle that machine-learning algorithms can take physicists from a Calabi-Yau manifold all the way to specific particle masses.

    “Until now, any such calculations would have been unthinkable,” said Constantin, a member of the group based at Oxford.

    Numbers Game

    The neural networks choke on doughnuts with more than a handful of holes, and researchers would eventually like to study manifolds with hundreds. And so far, the researchers have considered only rather simple quantum fields. To go all the way to the standard model, Ashmore said, “you might need a more sophisticated neural network.”

    Bigger challenges loom on the horizon. Attempting to find our particle physics in the solutions of string theory—if it’s in there at all—is a numbers game. The more sprinkle-laden doughnuts you can check, the more likely you are to find a match. After decades of effort, string theorists can finally check doughnuts and compare them with reality: the masses and couplings of the elementary particles we observe. But even the most optimistic theorists recognize that the odds of finding a match by blind luck are cosmically low. The number of Calabi-Yau doughnuts alone may be infinite. “You need to learn how to game the system,” Ruehle said.

    One approach is to check thousands of Calabi-Yau manifolds and try to suss out any patterns that could steer the search. By stretching and squeezing the manifolds in different ways, for instance, physicists might develop an intuitive sense of what shapes lead to what particles. “What you really hope is that you have some strong reasoning after looking at particular models,” Ashmore said, “and you stumble into the right model for our world.”

    Lukas and colleagues at Oxford plan to start that exploration, prodding their most promising doughnuts and fiddling more with the sprinkles as they try to find a manifold that produces a realistic population of quarks. Constantin believes that they will find a manifold reproducing the masses of the rest of the known particles in a matter of years.

    Other string theorists, however, think it’s premature to start scrutinizing individual manifolds. Thomas Van Riet of KU Leuven is a string theorist pursuing the “swampland” research program, which seeks to identify features shared by all mathematically consistent string theory solutions—such as the extreme weakness of gravity relative to the other forces. He and his colleagues aspire to rule out broad swaths of string solutions—that is, possible universes—before they even start to think about specific doughnuts and sprinkles.

    “It’s good that people do this machine-learning business, because I’m sure we will need it at some point,” Van Riet said. But first “we need to think about the underlying principles, the patterns. What they’re asking about is the details.”


  • The Complex Social Lives of Viruses


    The original version of this story appeared in Quanta Magazine.

    Ever since viruses came to light in the late 1800s, scientists have set them apart from the rest of life. Viruses were far smaller than cells, and inside their protein shells they carried little more than genes. They could not grow, copy their own genes, or do much of anything. Researchers assumed that each virus was a solitary particle drifting alone through the world, able to replicate only if it happened to bump into the right cell that could take it in.

    This simplicity was what attracted many scientists to viruses in the first place, said Marco Vignuzzi, a virologist at the Singapore Agency for Science, Research and Technology Infectious Diseases Labs. “We were trying to be reductionist.”

    That reductionism paid off. Studies on viruses were crucial to the birth of modern biology. Lacking the complexity of cells, they revealed fundamental rules about how genes work. But viral reductionism came at a cost, Vignuzzi said: By assuming viruses are simple, you blind yourself to the possibility that they might be complicated in ways you don’t know about yet.

    For example, if you think of viruses as isolated packages of genes, it would be absurd to imagine them having a social life. But Vignuzzi and a new school of like-minded virologists don’t think it’s absurd at all. In recent decades, they have discovered some strange features of viruses that don’t make sense if viruses are lonely particles. They instead are uncovering a marvelously complex social world of viruses. These sociovirologists, as the researchers sometimes call themselves, believe that viruses make sense only as members of a community.

    Granted, the social lives of viruses aren’t quite like those of other species. Viruses don’t post selfies to social media, volunteer at food banks, or commit identity theft like humans do. They don’t fight with allies to dominate a troop like baboons; they don’t collect nectar to feed their queen like honeybees; they don’t even congeal into slimy mats for their common defense like some bacteria do. Nevertheless, sociovirologists believe that viruses do cheat, cooperate, and interact in other ways with their fellow viruses.

    The field of sociovirology is still young and small. The first conference dedicated to the social life of viruses took place in 2022, and the second will take place this June. A grand total of 50 people will be in attendance. Still, sociovirologists argue that the implications of their new field could be profound. Diseases like influenza don’t make sense if we think of viruses in isolation from one another. And if we can decipher the social life of viruses, we might be able to exploit it to fight back against the diseases some of them create.

    Under Our Noses

    Some of the most important evidence for the social life of viruses has been sitting in plain view for nearly a century. After the discovery of the influenza virus in the early 1930s, scientists figured out how to grow stocks of the virus by injecting it into a chicken egg and letting it multiply inside. The researchers could then use the new viruses to infect lab animals for research or inject them into new eggs to keep growing new viruses.

    In the late 1940s, the Danish virologist Preben von Magnus was growing viruses when he noticed something odd. Many of the viruses produced in one egg could not replicate when he injected them into another. By the third cycle of transmission, only one in 10,000 viruses could still replicate. But in the cycles that followed, the defective viruses became rarer and the replicating ones bounced back. Von Magnus suspected that the viruses that couldn’t replicate had not finished developing, and so he called them “incomplete.”
