Tag: math

  • Mathematicians Just Debunked the ‘Bunkbed Conjecture’

    Their result shows the importance of not taking anything for granted, said Noga Alon, a mathematician at Princeton. “We have to be suspicious, even about things that intuitively look very likely to be true.”

    Gladkov, Pak, and Zimin found many small-graph examples that satisfied the conjecture, but in the end, those did not reflect the more complicated, less intuitive graphs they could build when given enough vertices and edges.

    As Hollom put it, “Do we actually understand all this stuff as well as we think we do?”

    Mathematicians still believe the physics statement about connected locations within solids that inspired the bunkbed conjecture. But they’ll need to find a different way to prove it.

    In the meantime, Pak says, it’s clear that mathematicians need to engage in a more active discussion about the nature of mathematical proof. He and his colleagues ultimately didn’t have to rely on controversial computational methods; they were able to disprove the conjecture with total certainty. But as computer- and AI-based lines of attack become more common in mathematics research, some mathematicians are debating whether the field’s norms will eventually have to change. “It’s a philosophical question,” Alon said. “How do we view proofs that are only true with high probability?”

    “I think the future of mathematics will be to accept probabilistic proofs like this,” said Doron Zeilberger, a mathematician at Rutgers University who is known for crediting his computer as a coauthor on many of his papers. “In 50 years, or maybe less, people will have a new attitude.”

    Others wonder if such a future threatens something vital. “Maybe a probabilistic proof would give you less understanding or intuition of what’s really going on,” Alon said.

    Pak has suggested that separate journals be created for results of this kind as they become more common, so that their value isn’t lost to mathematicians. But his main goal is to open the conversation. “There’s no correct answer,” he said. “I want the community to meditate on whether the next result of this kind will count.” As technology continues to infiltrate and transform mathematics, the question will only become more pressing.


    Original story reprinted with permission from Quanta Magazine, an editorially independent publication of the Simons Foundation whose mission is to enhance public understanding of science by covering research developments and trends in mathematics and the physical and life sciences.


  • Scientists Establish the Best Algorithm for Traversing a Map

    “It’s a great algorithm,” said Erik Demaine, a computer scientist at the Massachusetts Institute of Technology. “It’s very fast, simple, and easy to implement.”

    To put this procedure into practice, you’d need to decide on a system for organizing your notes—a data structure, in the lingo of computer science. That may sound like a minor technical detail, but time spent searching through your notes whenever you need to edit or remove an entry can have a big effect on the overall runtime of the algorithm.

    Dijkstra’s paper used a simple data structure that left room for improvement. In the following decades, researchers developed better ones, affectionately dubbed “heaps,” in which certain items are easier to find than others. They take advantage of the fact that Dijkstra’s algorithm only ever needs to remove the entry for the closest remaining vertex. “A heap is basically a data structure that allows you to do this very quickly,” said Václav Rozhoň, a researcher at the Institute for Computer Science, Artificial Intelligence and Technology (INSAIT) in Sofia, Bulgaria.
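
    The interplay described above, in which the algorithm repeatedly asks a heap for the closest remaining vertex, can be sketched in a few lines of Python using the standard-library heapq module (a simple binary heap, not the more sophisticated 1984 design discussed below). The graph here is a made-up example for illustration.

```python
import heapq

def dijkstra(graph, source):
    """Single-source shortest paths.
    graph: dict mapping vertex -> list of (neighbor, weight) pairs.
    Returns a dict of shortest distances from source."""
    dist = {source: 0}
    heap = [(0, source)]                # heap entries are (distance, vertex)
    while heap:
        d, u = heapq.heappop(heap)      # closest remaining vertex, quickly
        if d > dist.get(u, float("inf")):
            continue                    # stale entry left over from an update
        for v, w in graph[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd            # found a shorter route to v
                heapq.heappush(heap, (nd, v))
    return dist

graph = {
    "A": [("B", 2), ("C", 5)],
    "B": [("C", 1), ("D", 4)],
    "C": [("D", 1)],
    "D": [],
}
print(dijkstra(graph, "A"))  # {'A': 0, 'B': 2, 'C': 3, 'D': 4}
```

    The heap is exactly the "notes" data structure from the passage above: removing the entry for the closest unvisited vertex is a fast operation, which is what Dijkstra's algorithm needs at every step.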

    In 1984, two computer scientists developed a clever heap design that enabled Dijkstra’s algorithm to reach a theoretical limit, or “lower bound,” on the time required to solve the single-source shortest-paths problem. In one specific sense, this version of Dijkstra’s algorithm is the best possible. That was the last word on the standard version of the problem for nearly 40 years. Things only changed when a few researchers took a closer look at what it means to be “best.”

    Best Behavior

    Researchers typically compare algorithms by studying how they fare in worst-case scenarios. Imagine the world’s most confusing street grid, then add some especially perplexing traffic patterns. If you insist on finding the fastest routes in these extreme circumstances, the 1984 version of Dijkstra’s algorithm is provably unbeatable.

    But hopefully, your city doesn’t have the world’s worst street grid. And so you may ask: Is there an algorithm that’s unbeatable on every road network? The first step to answering this question is to make the conservative assumption that each network has worst-case traffic patterns. Then you want your algorithm to find the fastest paths through any possible graph layout, assuming the worst possible weights. Researchers call this condition “universal optimality.” If you had a universally optimal algorithm for the simpler problem of just getting from one point on a graph to another, it could help you beat rush hour traffic in every city in the world.


  • Students Find New Evidence of the Impossibility of Complete Disorder

    A new mathematical proof marks the first progress in decades on a problem about how order emerges.


  • Generative AI Transformed English Homework. Math Is Next

    Since OpenAI publicly launched ChatGPT in late 2022, the generative AI chatbot has wreaked havoc on classrooms and changed how teachers approach writing homework. School administrators rushed to detect AI-generated essays, and students in turn scrambled to find ways to cloak their synthetic compositions. But by focusing on writing assignments, educators let another seismic shift take place in the periphery: students increasingly using AI to complete math homework too.

    High schoolers and college students around the country are experimenting with free smartphone apps that use generative AI to help complete their math homework. One of the most popular options on campus right now is Gauth, an app with millions of downloads. It’s owned by ByteDance, which is also TikTok’s parent company.

    The Gauth app first launched in 2019 with a primary focus on mathematics, but it soon expanded to other subjects, like chemistry and physics. It has grown in relevance, nearing the top of smartphone download lists for the education category earlier this year. Students seem to love it: with hundreds of thousands of mostly positive reviews, Gauth holds a favorable 4.8-star rating in both the Apple App Store and Google Play Store.

    All students have to do after downloading the app is point their smartphone at a homework problem, printed or handwritten, and then make sure any relevant information is inside of the image crop. Then Gauth’s AI model generates a step-by-step guide, often with the correct answer.

    From our testing on high-school-level algebra and geometry homework samples, Gauth’s AI tool didn’t deliver A+ results and particularly struggled with some graphing questions. It performed well enough to get around a low B grade or a high C average on the homework we fed it. Not perfect, but also likely good enough to satisfy bored students who’d rather spend their time after school doing literally anything else.

    The app struggled more on higher levels of math, like Calculus 2 problems, so students further along in their educational journey may find less utility in this current generation of AI homework-solving apps.

    Yes, generative AI tools, with a foundation in natural language processing, are known for failing to generate accurate answers when presented with complex math equations. But researchers are focused on improving AI’s abilities in this sector, and an entry-level high school math class is likely well within the reach of current AI homework apps. Will has even written about how researchers at Google DeepMind are ecstatic about recent results from testing a math-focused large language model, called AlphaProof, on problems shown at this year’s International Math Olympiad.

    To be fair, Gauth positions itself as an AI study company that’s there to “ace your homework” and help with difficult problems, rather than a cheating aid. The company even goes so far as to include an “Honor Code” on its website dictating proper usage. “Resist the temptation to use Gauth in ways that go against your values or school’s expectations,” reads the company’s website. So basically, Gauth implicitly acknowledges impulsive teenagers may use the app for much more than the occasional stumper, and wants them to pinkie promise that they’ll behave.

    A spokesperson for ByteDance, contacted by WIRED over email prior to publication, did not answer a list of questions about the Gauth app.


  • ‘Gem’ of a Proof Breaks 80-Year-Old Record, Offers New Insights Into Prime Numbers

    The original version of this story appeared in Quanta Magazine.

    Sometimes mathematicians try to tackle a problem head on, and sometimes they come at it sideways. That’s especially true when the mathematical stakes are high, as with the Riemann hypothesis, whose solution comes with a $1 million reward from the Clay Mathematics Institute. Its proof would give mathematicians much deeper certainty about how prime numbers are distributed, while also implying a host of other consequences—making it arguably the most important open question in math.

    Mathematicians have no idea how to prove the Riemann hypothesis. But they can still get useful results just by showing that the number of possible exceptions to it is limited. “In many cases, that can be as good as the Riemann hypothesis itself,” said James Maynard of the University of Oxford. “We can get similar results about prime numbers from this.”

    In a breakthrough result posted online in May, Maynard and Larry Guth of the Massachusetts Institute of Technology established a new cap on the number of exceptions of a particular type, finally beating a record that had been set more than 80 years earlier. “It’s a sensational result,” said Henryk Iwaniec of Rutgers University. “It’s very, very, very hard. But it’s a gem.”

    The new proof automatically leads to better approximations of how many primes exist in short intervals on the number line, and stands to offer many other insights into how primes behave.

    A Careful Sidestep

    The Riemann hypothesis is a statement about a central formula in number theory called the Riemann zeta function. The zeta (ζ) function is a generalization of a straightforward sum:

    1 + 1/2 + 1/3 + 1/4 + 1/5 + ⋯.

    This series will become arbitrarily large as more and more terms are added to it—mathematicians say that it diverges. But if instead you were to sum up

    1 + 1/2² + 1/3² + 1/4² + 1/5² + ⋯ = 1 + 1/4 + 1/9 + 1/16 + 1/25 + ⋯

    you would get π²/6, or about 1.64. Riemann’s surprisingly powerful idea was to turn a series like this into a function, like so:

    ζ(s) = 1 + 1/2ˢ + 1/3ˢ + 1/4ˢ + 1/5ˢ + ⋯.

    So ζ(1) is infinite, but ζ(2) = π²/6.
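
    These two behaviors are easy to check numerically. The short sketch below sums the first million terms of each series; the function name is ours, chosen just for this illustration.

```python
import math

# A quick numerical check of the two facts above: the s = 1 series
# (the harmonic series) keeps growing without bound, while the s = 2
# series settles toward pi^2/6.
def zeta_partial(s, terms):
    """Sum the first `terms` terms of 1/1^s + 1/2^s + 1/3^s + ..."""
    return sum(1 / n**s for n in range(1, terms + 1))

print(zeta_partial(1, 1_000_000))   # ~14.39, and still climbing
print(zeta_partial(2, 1_000_000))   # ~1.6449
print(math.pi**2 / 6)               # ~1.6449, the limit
```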

    Things get really interesting when you let s be a complex number, which has two parts: a “real” part, which is an everyday number, and an “imaginary” part, which is an everyday number multiplied by the square root of −1 (or i, as mathematicians write it). Complex numbers can be plotted on a plane, with the real part on the x-axis and the imaginary part on the y-axis. Here, for example, is 3 + 4i.

    Graph: Mark Belan for Quanta Magazine


  • An Old Abstract Field of Math Is Unlocking the Deep Complexity of Spacecraft Orbits

    The original version of this story appeared in Quanta Magazine.

    In October, a Falcon Heavy rocket is scheduled to launch from Cape Canaveral in Florida, carrying NASA’s Europa Clipper mission. The $5 billion mission is designed to find out if Europa, Jupiter’s fourth-largest moon, can support life. But because Europa is constantly bombarded by intense radiation created by Jupiter’s magnetic field, the Clipper spacecraft can’t orbit the moon itself. Instead, it will slide into an eccentric orbit around Jupiter and gather data by repeatedly swinging by Europa—53 times in total—before retreating from the worst of the radiation. Every time the spacecraft rounds Jupiter, its path will be slightly different, ensuring that it can take pictures and gather data from Europa’s poles to its equator.

    To plan convoluted tours like this one, trajectory planners use computer models that meticulously calculate the trajectory one step at a time. The planning takes hundreds of mission requirements into account, and it’s bolstered by decades of mathematical research into orbits and how to join them into complicated tours. Mathematicians are now developing tools which they hope can be used to create a more systematic understanding of how orbits relate to one another.

    “What we have is the previous computations that we’ve done, that guide us as we do the current computations. But it’s not a complete picture of all the options that we have,” said Daniel Scheeres, an aerospace engineer at the University of Colorado, Boulder.

    “I think that was my biggest frustration when I was a student,” said Dayung Koh, an engineer at NASA’s Jet Propulsion Laboratory. “I know these orbits are there, but I don’t know why.” Given the expense and complexity of missions to the moons of Jupiter and Saturn, not knowing why orbits are where they are is a problem. What if there is a completely different orbit that could get the job done with fewer resources? As Koh said: “Did I find them all? Are there more? I can’t tell that.”

    After getting her doctorate from the University of Southern California in 2016, Koh grew interested in how orbits can be cataloged into families. Jovian orbits that are far from Europa form such a family; so do orbits close to Europa. But other families are less obvious. For instance, for any two bodies, like Jupiter and Europa, there is an intermediate point where the two bodies’ gravitational effects balance to create stable points. Spacecraft can orbit this point, even though there is nothing at the center of the orbit. These orbits form a family called Lyapunov orbits. Add a little energy to such an orbit by firing a spacecraft engine, and at first you’ll stay in the same family. But add enough, and you’ll cross over into another family—say, one that includes Jupiter inside its orbits. Some orbit families might require less fuel than others, remain in sunlight at all times, or have other useful features.
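
    The balance point described above can be located, to a first approximation, with the classical Hill-radius formula: for a small body of mass m₂ orbiting a large one of mass m₁ at distance a, the nearby balance points sit roughly a·(m₂/3m₁)^(1/3) from the small body. The sketch below plugs in rough published values for Jupiter and Europa; the numbers are illustrative assumptions, not mission data.

```python
# Back-of-the-envelope estimate of where the Jupiter-Europa balance
# point sits, using the Hill-radius approximation:
#     r_Hill ~ a * (m2 / (3 * m1)) ** (1/3)
# Masses and orbital distance are rough published values.

M_JUPITER = 1.898e27    # kg
M_EUROPA = 4.80e22      # kg
A_EUROPA = 670_900e3    # m, Europa's orbital radius around Jupiter

r_hill = A_EUROPA * (M_EUROPA / (3 * M_JUPITER)) ** (1 / 3)
print(f"Balance point ~ {r_hill / 1e3:.0f} km from Europa")  # ~13,700 km with these inputs
```

    Lyapunov orbits of the kind the passage describes circle points like this one, even though there is no body at the center.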

    Dayung Koh, an engineer at NASA’s Jet Propulsion Laboratory, is trying to come to a systematic understanding of how orbits in a planetary system relate to one another.

    PHOTO: Courtesy of Dayung Koh


  • Never-Repeating Patterns of Tiles Can Safeguard Quantum Information

    This extreme fragility might make quantum computing sound hopeless. But in 1995, the applied mathematician Peter Shor discovered a clever way to store quantum information. His encoding had two key properties. First, it could tolerate errors that only affected individual qubits. Second, it came with a procedure for correcting errors as they occurred, preventing them from piling up and derailing a computation. Shor’s discovery was the first example of a quantum error-correcting code, and its two key properties are the defining features of all such codes.

    The first property stems from a simple principle: Secret information is less vulnerable when it’s divided up. Spy networks employ a similar strategy. Each spy knows very little about the network as a whole, so the organization remains safe even if any individual is captured. But quantum error-correcting codes take this logic to the extreme. In a quantum spy network, no single spy would know anything at all, yet together they’d know a lot.

    Each quantum error-correcting code is a specific recipe for distributing quantum information across many qubits in a collective superposition state. This procedure effectively transforms a cluster of physical qubits into a single virtual qubit. Repeat the process many times with a large array of qubits, and you’ll get many virtual qubits that you can use to perform computations.

    The physical qubits that make up each virtual qubit are like those oblivious quantum spies. Measure any one of them and you’ll learn nothing about the state of the virtual qubit it’s a part of—a property called local indistinguishability. Since each physical qubit encodes no information, errors in single qubits won’t ruin a computation. The information that matters is somehow everywhere, yet nowhere in particular.

    “You can’t pin it down to any individual qubit,” Cubitt said.

    All quantum error-correcting codes can absorb at least one error without any effect on the encoded information, but they will all eventually succumb as errors accumulate. That’s where the second property of quantum error-correcting codes kicks in—the actual error correction. This is closely related to local indistinguishability: Because errors in individual qubits don’t destroy any information, it’s always possible to reverse any error using established procedures specific to each code.
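
    As a concrete (and much simplified) illustration of these two properties, here is a statevector sketch of the three-qubit bit-flip code, a simpler cousin of Shor's nine-qubit code that guards only against bit-flip errors. Everything here is a toy for intuition, not a real quantum program: the parity-check "syndrome" is read off the basis labels directly rather than measured.

```python
import numpy as np

def encode(alpha, beta):
    """Spread one logical qubit a|0> + b|1> across three physical qubits."""
    state = np.zeros(8, dtype=complex)
    state[0b000] = alpha  # logical |0> -> |000>
    state[0b111] = beta   # logical |1> -> |111>
    return state

def flip(state, qubit):
    """Apply a bit-flip (X) error to one physical qubit (0, 1, or 2)."""
    out = np.zeros_like(state)
    for i in range(8):
        out[i ^ (1 << qubit)] = state[i]
    return out

def syndrome(state):
    """Parities of qubit pairs (0,1) and (1,2). They flag *where* an
    error sits without revealing the encoded amplitudes."""
    i = int(np.flatnonzero(np.abs(state) > 1e-12)[0])
    bits = [(i >> q) & 1 for q in range(3)]
    return (bits[0] ^ bits[1], bits[1] ^ bits[2])

def correct(state):
    """Undo a single bit-flip error, if the syndrome flags one."""
    culprit = {(1, 0): 0, (1, 1): 1, (0, 1): 2}.get(syndrome(state))
    return flip(state, culprit) if culprit is not None else state

logical = encode(0.6, 0.8)
damaged = flip(logical, 1)                     # error on the middle qubit
assert np.allclose(correct(damaged), logical)  # information fully recovered
```

    The two defining features show up in miniature: a single flipped qubit leaves the encoded information intact, and the syndrome pinpoints which qubit to flip back, reversing the error completely.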

    Taken for a Ride

    Zhi Li, a postdoc at the Perimeter Institute for Theoretical Physics in Waterloo, Canada, was well versed in the theory of quantum error correction. But the subject was far from his mind when he struck up a conversation with his colleague Latham Boyle. It was the fall of 2022, and the two physicists were on an evening shuttle from Waterloo to Toronto. Boyle, an expert in aperiodic tilings who lived in Toronto at the time and is now at the University of Edinburgh, was a familiar face on those shuttle rides, which often got stuck in heavy traffic.

    “Normally they could be very miserable,” Boyle said. “This was like the greatest one of all time.”

    Before that fateful evening, Li and Boyle knew of each other’s work, but their research areas didn’t directly overlap, and they’d never had a one-on-one conversation. But like countless researchers in unrelated fields, Li was curious about aperiodic tilings. “It’s very hard to be not interested,” he said.
