Tag: quanta magazine

  • This Is What Your Brain Does When You’re Not Doing Anything

    The original version of this story appeared in Quanta Magazine.

    Whenever you’re actively performing a task—say, lifting weights at the gym or taking a hard exam—the parts of your brain required to carry it out become “active” as neurons step up their electrical activity. But is your brain active even when you’re zoning out on the couch?

    The answer, researchers have found, is yes. Over the past two decades they’ve defined what’s known as the default mode network, a collection of seemingly unrelated areas of the brain that activate when you’re not doing much at all. Its discovery has offered insights into how the brain functions outside of well-defined tasks and has also prompted research into the role of brain networks—not just brain regions—in managing our internal experience.

    In the late 20th century, neuroscientists began using new techniques to take images of people’s brains as they performed tasks in scanning machines. As expected, activity in certain brain areas increased during tasks—and to the researchers’ surprise, activity in other brain areas declined simultaneously. The neuroscientists were intrigued that during a wide variety of tasks, the very same brain areas consistently dialed back their activity.

    It was as if these areas had been active when the person wasn’t doing anything, and then turned off when the mind had to concentrate on something external.

    Researchers called these areas “task negative.” When they were first identified, Marcus Raichle, a neurologist at the Washington University School of Medicine in St. Louis, suspected that these task-negative areas play an important role in the resting mind. “This raised the question of ‘What’s baseline brain activity?’” Raichle recalled. In an experiment, he asked people in scanners to close their eyes and simply let their minds wander while he measured their brain activity.

    He found that during rest, when we turn mentally inward, task-negative areas use more energy than the rest of the brain. In a 2001 paper, he dubbed this activity “a default mode of brain function.” Two years later, after generating higher-resolution data, a team from the Stanford University School of Medicine discovered that this task-negative activity defines a coherent network of interacting brain regions, which they called the default mode network.

    The discovery of the default mode network ignited curiosity among neuroscientists about what the brain is doing in the absence of an outward-focused task. Although some researchers believed that the network’s main function was to generate our experience of mind wandering or daydreaming, there were plenty of other conjectures. Maybe it controlled streams of consciousness or activated memories of past experiences. And dysfunction in the default mode network was floated as a potential feature of nearly every psychiatric and neurological disorder, including depression, schizophrenia, and Alzheimer’s disease.

    Since then, a flurry of research into the default mode has complicated that initial understanding. “It’s been very interesting to see the types of different tasks and paradigms that engage the default mode network in the past 20 years,” said Lucina Uddin, a neuroscientist at the University of California, Los Angeles.


  • There’s a New Theory About Where Dark Matter Is Hiding

    But there may be opportunities to indirectly spot the signatures of those gravitons.

    One strategy Vafa and his collaborators are pursuing draws on large-scale cosmological surveys that chart the distribution of galaxies and matter. In those distributions, there might be “small differences in clustering behavior,” Obied said, that would signal the presence of dark gravitons.

    When heavier dark gravitons decay, they produce a pair of lighter dark gravitons with a combined mass that is slightly less than that of their parent particle. The missing mass is converted to kinetic energy (in keeping with Einstein’s formula, E = mc²), which gives the newly created gravitons a bit of a boost—a “kick velocity” that’s estimated to be about one-ten-thousandth of the speed of light.
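    The kinematics described here can be sketched with a short calculation. This is an illustrative, nonrelativistic estimate, not a computation from the paper: the fractional mass loss used below is a hypothetical number chosen to reproduce a kick of roughly one-ten-thousandth of the speed of light.

    ```python
    import math

    C = 299_792_458.0  # speed of light, m/s

    def kick_velocity(parent_mass, fractional_mass_loss):
        """Nonrelativistic kick velocity (m/s) for a symmetric two-body decay.

        A parent of mass M decays into two children whose combined mass is
        M * (1 - fractional_mass_loss); the missing mass becomes kinetic
        energy via E = mc^2, split equally between the two children.
        """
        delta_m = parent_mass * fractional_mass_loss           # mass that disappears
        child_mass = parent_mass * (1 - fractional_mass_loss) / 2
        ke_per_child = delta_m * C**2 / 2                      # joules per child
        # (1/2) m v^2 = KE  =>  v = sqrt(2 * KE / m)
        return math.sqrt(2 * ke_per_child / child_mass)

    # Illustrative: a fractional mass loss of ~5e-9 yields a kick of about
    # one-ten-thousandth of the speed of light.
    print(kick_velocity(1.0, 5e-9) / C)
    ```

    The takeaway is how tiny the mass deficit must be: shaving off a few parts per billion of the parent's mass already produces the quoted kick velocity.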

    These kick velocities, in turn, could affect how galaxies form. According to the standard cosmological model, galaxies start with a clump of matter whose gravitational pull attracts more matter. But gravitons with a sufficient kick velocity can escape this gravitational grip. If they do, the resulting galaxy will be slightly less massive than the standard cosmological model predicts. Astronomers can look for this difference.

    Recent observations of cosmic structure from the Kilo-Degree Survey are so far consistent with the dark dimension: An analysis of data from that survey placed an upper bound on the kick velocity that was very close to the value predicted by Obied and his coauthors. A more stringent test will come from the Euclid space telescope, which launched last July.

    Meanwhile, physicists are also planning to test the dark dimension idea in the laboratory. If gravity is leaking into a dark dimension that measures 1 micron across, one could, in principle, look for any deviations from the expected gravitational force between two objects separated by that same distance. It’s not an easy experiment to carry out, said Armin Shayeghi, a physicist at the Austrian Academy of Sciences who is conducting the test. But “there’s a simple reason for why we have to do this experiment,” he added: We won’t know how gravity behaves at such close distances until we look.
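    A common way to model such a deviation, in short-range gravity experiments generally, is a Yukawa-type correction to the Newtonian potential that switches on below the size of the extra dimension. The sketch below uses that standard parameterization with an illustrative strength (alpha = 1); the specific numbers are assumptions, not results from the experiments mentioned.

    ```python
    import math

    G = 6.674e-11  # Newton's gravitational constant, SI units

    def newtonian(m1, m2, r):
        """Ordinary Newtonian gravitational potential energy."""
        return -G * m1 * m2 / r

    def yukawa(m1, m2, r, alpha, lam):
        """Yukawa parameterization used in short-range gravity tests:
        gravity leaking into an extra dimension of size ~lam shows up
        as an exponential correction below that length scale."""
        return newtonian(m1, m2, r) * (1 + alpha * math.exp(-r / lam))

    lam = 1e-6  # hypothetical dark-dimension size: 1 micron
    for r in (52e-6, 1e-6):  # the 2020 separation vs. the 1-micron target
        deviation = yukawa(1, 1, r, 1.0, lam) / newtonian(1, 1, r) - 1
        print(f"r = {r:.0e} m: fractional deviation {deviation:.1e}")
    ```

    At a 52-micron separation the exponential term is utterly negligible, while at 1 micron it is of order one, which is why closing the gap from 52 microns to 1 micron matters so much for the test.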

    The closest measurement to date—carried out in 2020 at the University of Washington—involved a 52-micron separation between two test bodies. The Austrian group is hoping to eventually attain the 1-micron range predicted for the dark dimension.

    While physicists find the dark dimension proposal intriguing, some are skeptical that it will work out. “Searching for extra dimensions through more precise experiments is a very interesting thing to do,” said Juan Maldacena, a physicist at the Institute for Advanced Study, “though I think that the probability of finding them is low.”

    Joseph Conlon, a physicist at Oxford, shares that skepticism: “There are many ideas that would be important if true, but are probably not. This is one of them. The conjectures it is based on are somewhat ambitious, and I think the current evidence for them is rather weak.”

    Of course, the weight of evidence can change, which is why we do experiments in the first place. The dark dimension proposal, if supported by upcoming tests, has the potential to bring us closer to understanding what dark matter is, how it is linked to both dark energy and gravity, and why gravity appears feeble compared to the other known forces. “Theorists are always trying to do this ‘tying together.’ The dark dimension is one of the most promising ideas I have heard in this direction,” Gopakumar said.

    But in an ironic twist, the one thing the dark dimension hypothesis cannot explain is why the cosmological constant is so staggeringly small—a puzzling fact that essentially initiated this whole line of inquiry. “It’s true that this program does not explain that fact,” Vafa admitted. “But what we can say, drawing from this scenario, is that if lambda is small—and you spell out the consequences of that—a whole set of amazing things could fall into place.”


    Original story reprinted with permission from Quanta Magazine, an editorially independent publication of the Simons Foundation whose mission is to enhance public understanding of science by covering research developments and trends in mathematics and the physical and life sciences.


  • Google’s Chess Experiments Reveal How to Boost the Power of AI

    His group decided to find out. They built the new, diversified version of AlphaZero, which includes multiple AI systems that trained independently and on a variety of situations. The algorithm that governs the overall system acts as a kind of virtual matchmaker, Zahavy said: one designed to identify which agent has the best chance of succeeding when it’s time to make a move. He and his colleagues also coded in a “diversity bonus”—a reward for the system whenever it pulled strategies from a large selection of choices.
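    The two mechanisms described above can be sketched in a few lines. This is a hedged illustration, not DeepMind's implementation: the agent records, scoring rules, and example numbers are all hypothetical.

    ```python
    from collections import Counter

    def pick_agent(agents, position):
        """'Matchmaker': choose the agent whose own value estimate
        for the current position is highest."""
        return max(agents, key=lambda agent: agent["value"](position))

    def diversity_bonus(move, history, weight=0.1):
        """Extra reward for moves the system has played rarely."""
        frequency = history[move] / max(1, sum(history.values()))
        return weight * (1.0 - frequency)  # rarer move => larger bonus

    # Hypothetical agents with fixed value estimates for the demo.
    agents = [
        {"name": "attacker", "value": lambda pos: 0.55},
        {"name": "positional", "value": lambda pos: 0.70},
    ]
    print(pick_agent(agents, position=None)["name"])  # positional

    history = Counter({"e4": 8, "d4": 2})
    print(diversity_bonus("e4", history))   # well-worn move: small bonus
    print(diversity_bonus("Nf3", history))  # never-played move: full bonus
    ```

    The design point is that the bonus is largest for strategies the system has rarely drawn on, so maximizing total reward pushes the ensemble toward a wide selection of approaches rather than one dominant policy.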


    When the new system was set loose to play its own games, the team observed a lot of variety. The diversified AI player experimented with new, effective openings and novel—but sound—decisions about specific strategies, such as when and where to castle. In most matches, it defeated the original AlphaZero. The team also found that the diversified version could solve twice as many challenge puzzles as the original and could solve more than half of the total catalog of Penrose puzzles.

    “The idea is that instead of finding one solution, or one single policy, that would beat any player, here [it uses] the idea of creative diversity,” Cully said.

    With access to more and different played games, Zahavy said, the diversified AlphaZero had more options for sticky situations when they arose. “If you can control the kind of games that it sees, you basically control how it will generalize,” he said. Those weird intrinsic rewards (and their associated moves) could become strengths for diverse behaviors. Then the system could learn to assess and value the disparate approaches and see when they were most successful. “We found that this group of agents can actually come to an agreement on these positions.”

    And, crucially, the implications extend beyond chess.

    Real-Life Creativity

    Cully said a diversified approach can help any AI system, not just those based on reinforcement learning. He has long used diversity to train physical systems, including a six-legged robot that was allowed to explore various kinds of movement before he intentionally “injured” it; the robot could then keep moving by drawing on some of the techniques it had developed earlier. “We were just trying to find solutions that were different from all previous solutions we have found so far.” Recently, he has also been collaborating with researchers to use diversity to identify promising new drug candidates and develop effective stock-trading strategies.

    “The goal is to generate a large collection of potentially thousands of different solutions, where every solution is very different from the next,” Cully said. So—just as the diversified chess player learned to do—for every type of problem, the overall system could choose the best possible solution. Zahavy’s AI system, he said, clearly shows how “searching for diverse strategies helps to think outside the box and find solutions.”

    Zahavy suspects that in order for AI systems to think creatively, researchers simply have to get them to consider more options. That hypothesis suggests a curious connection between humans and machines: Maybe intelligence is just a matter of computational power. For an AI system, maybe creativity boils down to the ability to consider and select from a large enough buffet of options. As the system gains rewards for selecting a variety of optimal strategies, this kind of creative problem-solving gets reinforced and strengthened. Ultimately, in theory, it could emulate any kind of problem-solving strategy recognized as a creative one in humans. Creativity would become a computational problem.

    Liemhetcharat noted that a diversified AI system is unlikely to completely resolve the broader generalization problem in machine learning. But it’s a step in the right direction. “It’s mitigating one of the shortcomings,” she said.

    More practically, Zahavy’s results resonate with recent efforts that show how cooperation can lead to better performance on hard tasks among humans. Most of the hits on the Billboard Hot 100 list were written by teams of songwriters, for example, not individuals. And there’s still room for improvement. The diverse approach is currently computationally expensive, since it must consider so many more possibilities than a typical system. Zahavy is also not convinced that even the diversified AlphaZero captures the entire spectrum of possibilities.

    “I still [think] there is room to find different solutions,” he said. “It’s not clear to me that given all the data in the world, there is [only] one answer to every question.”




  • A Celebrated Cryptography-Breaking Algorithm Just Got an Upgrade

    This is a job for LLL: Give it (or its brethren) a basis of a multidimensional lattice, and it’ll spit out a better one. This process is known as lattice basis reduction.

    What does this all have to do with cryptography? It turns out that the task of breaking a cryptographic system can, in some cases, be recast as another problem: finding a relatively short vector in a lattice. And sometimes, that vector can be plucked from the reduced basis generated by an LLL-style algorithm. This strategy has helped researchers topple systems that, on the surface, appear to have little to do with lattices.
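    LLL works in any number of dimensions, but its essential move is easiest to see in two, where lattice basis reduction collapses to the classical Lagrange/Gauss algorithm. A minimal sketch of that special case:

    ```python
    def gauss_reduce(u, v):
        """Lagrange/Gauss reduction: the two-dimensional special case of
        lattice basis reduction. Repeatedly subtract the best integer
        multiple of the shorter basis vector from the longer one; stop
        when neither vector can be shortened further. LLL generalizes
        this size-reduce-and-swap loop to higher dimensions."""
        def norm2(w):  # squared Euclidean length
            return w[0] * w[0] + w[1] * w[1]
        if norm2(u) > norm2(v):
            u, v = v, u
        while True:
            # Integer closest to the projection coefficient of v onto u
            m = round((u[0] * v[0] + u[1] * v[1]) / norm2(u))
            v = (v[0] - m * u[0], v[1] - m * u[1])
            if norm2(v) >= norm2(u):
                return u, v
            u, v = v, u

    # A long, skewed basis for the integer lattice Z^2 reduces to the
    # short orthogonal one, whose first vector is a shortest vector.
    print(gauss_reduce((1, 0), (1_000_001, 1)))  # ((1, 0), (0, 1))
    ```

    In two dimensions this procedure provably returns a shortest vector of the lattice; in the high dimensions that matter for cryptography, LLL-style algorithms only approximate that goal, which is why the short vectors they do find, and how fast they find them, is the whole game.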

    In a theoretical sense, the original LLL algorithm runs quickly: The time it takes to run doesn’t scale exponentially with the size of the input—that is, the dimension of the lattice and the size (in bits) of the numbers in the basis vectors. But it does increase as a polynomial function, and “if you actually want to do it, polynomial time is not always so feasible,” said Léo Ducas, a cryptographer at the national research institute CWI in the Netherlands.


    In practice, this means that the original LLL algorithm can’t handle inputs that are too large. “Mathematicians and cryptographers wanted the ability to do more,” said Keegan Ryan, a doctoral student at the University of California, San Diego. Researchers worked to optimize LLL-style algorithms to accommodate bigger inputs, often achieving good performance. Still, some tasks have remained stubbornly out of reach.

    The new paper, authored by Ryan and his adviser, Nadia Heninger, combines multiple strategies to improve the efficiency of its LLL-style algorithm. For one thing, the technique uses a recursive structure that breaks the task down into smaller chunks. For another, the algorithm carefully manages the precision of the numbers involved, finding a balance between speed and a correct result. The new work makes it feasible for researchers to reduce the bases of lattices with thousands of dimensions.

    Past work has followed a similar approach: A 2021 paper also combined recursion and precision management to make quick work of large lattices, but it worked only for specific kinds of lattices, not all the ones that are important in cryptography. The new algorithm behaves well on a much broader range. “I’m really happy someone did it,” said Thomas Espitau, a cryptography researcher at the company PQShield and an author of the 2021 version. His team’s work offered a “proof of concept,” he said; the new result shows that “you can do very fast lattice reduction in a sound way.”

    The new technique has already started to prove useful. Aurel Page, a mathematician with the French national research institute Inria, said that he and his team have put an adaptation of the algorithm to work on some computational number theory tasks.

    LLL-style algorithms can also play a role in research related to lattice-based cryptography systems designed to remain secure even in a future with powerful quantum computers. They don’t pose a threat to such systems, since taking them down requires finding shorter vectors than these algorithms can achieve. But the best attacks researchers know of use an LLL-style algorithm as a “basic building block,” said Wessel van Woerden, a cryptographer at the University of Bordeaux. In practical experiments to study these attacks, that building block can slow everything down. Using the new tool, researchers may be able to expand the range of experiments they can run on the attack algorithms, offering a clearer picture of how they perform.




  • How to Guarantee the Safety of Autonomous Vehicles


    Driverless cars and planes are no longer the stuff of the future. In the city of San Francisco alone, two taxi companies have collectively logged 8 million miles of autonomous driving through August 2023. And more than 850,000 autonomous aerial vehicles, or drones, are registered in the United States—not counting those owned by the military.

    But there are legitimate concerns about safety. For example, in a 10-month period that ended in May 2022, the National Highway Traffic Safety Administration reported nearly 400 crashes involving automobiles using some form of autonomous control. Six people died as a result of these accidents, and five were seriously injured.

    The usual way of addressing this issue—sometimes called “testing by exhaustion”—involves testing these systems until you’re satisfied they’re safe. But you can never be sure that this process will uncover all potential flaws. “People carry out tests until they’ve exhausted their resources and patience,” said Sayan Mitra, a computer scientist at the University of Illinois, Urbana-Champaign. Testing alone, however, cannot provide guarantees.

    Mitra and his colleagues can. His team has managed to prove the safety of lane-tracking capabilities for cars and landing systems for autonomous aircraft. Their strategy is now being used to help land drones on aircraft carriers, and Boeing plans to test it on an experimental aircraft this year. “Their method of providing end-to-end safety guarantees is very important,” said Corina Pasareanu, a research scientist at Carnegie Mellon University and NASA’s Ames Research Center.

    Their work involves guaranteeing the results of the machine-learning algorithms that are used to inform autonomous vehicles. At a high level, many autonomous vehicles have two components: a perception system and a control system. The perception system tells you, for instance, how far your car is from the center of the lane, or what direction a plane is heading in and what its angle is with respect to the horizon. The system operates by feeding raw data from cameras and other sensory tools to machine-learning algorithms based on neural networks, which re-create the environment outside the vehicle.

    These assessments are then sent to a separate system, the control module, which decides what to do. If there’s an upcoming obstacle, for instance, it decides whether to apply the brakes or steer around it. According to Luca Carlone, an associate professor at the Massachusetts Institute of Technology, while the control module relies on well-established technology, “it is making decisions based on the perception results, and there’s no guarantee that those results are correct.”

    To provide a safety guarantee, Mitra’s team worked on ensuring the reliability of the vehicle’s perception system. They first assumed that it’s possible to guarantee safety when a perfect rendering of the outside world is available. They then determined how much error the perception system introduces into its re-creation of the vehicle’s surroundings.

    The key to this strategy is to quantify the uncertainties involved, known as the error band—or the “known unknowns,” as Mitra put it. That calculation comes from what he and his team call a perception contract. In software engineering, a contract is a commitment that, for a given input to a computer program, the output will fall within a specified range. Figuring out this range isn’t easy. How accurate are the car’s sensors? How much fog, rain, or solar glare can a drone tolerate? But if you can keep the vehicle within a specified range of uncertainty, and if the determination of that range is sufficiently accurate, Mitra’s team proved that you can ensure its safety.
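    The contract idea above can be made concrete in a few lines. This is a hedged sketch of the general concept; the field names, units, and numbers are illustrative assumptions, not taken from Mitra's papers.

    ```python
    from dataclasses import dataclass

    @dataclass
    class PerceptionContract:
        """Commitment: the perceived value differs from the true value
        by at most error_band -- the 'known unknowns.'"""
        error_band: float  # meters

        def holds(self, true_value: float, perceived_value: float) -> bool:
            """Check whether a perception output honored the contract."""
            return abs(true_value - perceived_value) <= self.error_band

    def safe_to_proceed(perceived_offset, contract, lane_half_width):
        """Control side: act only if the worst case the contract allows
        still keeps the vehicle inside the lane."""
        worst_case = abs(perceived_offset) + contract.error_band
        return worst_case <= lane_half_width

    contract = PerceptionContract(error_band=0.2)
    print(safe_to_proceed(0.5, contract, lane_half_width=1.0))  # True
    print(safe_to_proceed(0.9, contract, lane_half_width=1.0))  # False
    ```

    The structure mirrors the proof strategy in the article: if the controller is safe given perfect perception, and the perception error provably stays inside the contract's band, then reasoning about the worst case inside that band yields an end-to-end guarantee.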
