Tag: robots

  • Startup Embodied Will Brick $800 Moxie Emotional Support Robot for Kids—Without Refunds

    In addition to the robot being bricked, Embodied noted that warranties, repair services, the corresponding parent app and guides, and support staff will no longer be accessible.

    “Unable to Offer Refunds”

    Embodied said it is “unable” to offer most Moxie owners refunds due to its “financial situation and impending dissolution.” The potential exception is for people who bought a Moxie within 30 days. For those customers, Embodied said that “if the company or its assets are sold, we will do our best to prioritize refunds for purchases,” but it emphasized that this is not a guarantee.

    The startup also acknowledged complications for those who acquired the expensive robot through a third-party lender. Embodied advised such customers to contact their lender, but it’s possible that some will end up paying interest on a toy that no longer works.

    Embodied said it’s looking for another company to buy Moxie. Should that happen, the new company will receive Embodied customer data and determine how it may use it, according to Embodied’s terms of service. Otherwise, Embodied said it “securely” erases user data “in accordance with our privacy policy and applicable law,” which includes deleting personally identifiable information from Embodied systems.

    Another Smart Gadget Bites the Dust

Things look grim for Moxie owners, but there is some hope that Moxies can be resurrected: failed smart-device companies, like Insteon, have been revived before. It’s also possible that someone will release an open-source version of the product, like the one made for Spotify’s Car Thing, which Spotify officially bricked today.

    But the short-lived, expensive nature of Moxie is exactly why some groups, like right-to-repair activists, are pushing the Federal Trade Commission to more strongly regulate smart devices, particularly when it comes to disclosure and commitments around software support. With smart gadget makers trying to determine how to navigate challenging economic landscapes, the owners of various types of smart devices—from AeroGarden indoor gardening systems to Snoo bassinets—have had to deal with the consequences, including broken devices and paywalled features. Last month, the FTC noted that smart device manufacturers that don’t commit to software support may be breaking the law.

For Moxie owners, disappointment doesn’t just come from wasted money and e-waste creation but also from the pain of giving a child a tech “companion” to grow with, only to have it suddenly taken away.

    This story originally appeared on Ars Technica.

  • Inside a Fusion Startup’s Insane, Top-Secret Opening Ceremony

    So the race is on to engineer an efficient surrounding for fusion. One of Fuse’s ideas is to get a bunch of big capacitors to discharge at once, thus kick-starting a reaction. That’s why, at our show, there were all those big caps behind the audience. (You also see constructions of big caps at other fusion startups, like Helion.) The goal of Fuse, as JC describes it, is to become the SpaceX of fusion, to enable “big tech” achievements with all kinds of partners.

    Back to our story. JC contacts Serene and says we’re opening a second facility (the first was in Canada) and it would be nice to have a spectacular opening ceremony. Serene, being a startup founder who’s also, naturally, working on music robots, applies obsessive logistic efforts. Charlotte, being a director, does the same. Those of you with any life experience might be asking yourselves, “This sounds like an alien planet with two queens. Was it, um, a process?” I will not answer you directly except to compliment you on your finely hewn wisdom.

    Now you know the basics. I am a scientist and do not enjoy superstitious takes on reality, but so many coincidences had to happen at just the right time for this show to come together in just a few weeks. At the last minute, we needed high-performance robots; a robotics professor at UC Berkeley, Ken Goldberg, found them for us. Why does reality synchronize like this sometimes?

    I used to put on high-effort, high-tech music shows, often in VR, in the 1980s and ’90s. I burned out. It was bruisingly expensive, stressful, and exhausting. I used to long for the future when VR would get cheap and lots of people would know how to work with it. But when that time arrived, instead of relief, I had the feeling that VR had become too easy. There used to be a higher-stakes feeling. You had to make every triangle in the scene count, since there could not be too many, even though the computer doing the real-time graphics cost a million dollars. There’s a tangible sense of care in those earliest works.

    If I longed for hassle and expense as guarantors of stakes, then I found them again in this show. The week leading up to the performance reminded me of those early days of VR. Late, late nights, which don’t come as easily to me as before, in rehearsal; Serene would be up there trapped in the cables and the mathematical dress, designed by Threeasfour, but there’s a timing problem with the robot motion. With assistance she frees herself, gets to a screen, and does 10 minutes of high-speed programming. The robots glide again.

  • Robotic rat uses AI to befriend real rodents

    New Scientist. Science news and long reads from expert journalists, covering developments in science, technology, health and the environment on the website and the magazine.

    Live rats played and tussled with the robot rodent

    Shutterstock / Bilanol

    A robotic rat on wheels has learned how to interact with real rats while mimicking the rodents’ play and fight behaviours.

    “[The] robotic rats have similar appearances and movements to animals, and even the same odour,” says Qing Shi at the Beijing Institute of Technology in China. “It has become an important tool for exploring individual or collective rats’ behavioural responses.”

    The robotic rat, which Shi and his colleagues developed, has two front arms, a…

  • AI-Powered Robots Can Be Tricked Into Acts of Violence

    In the year or so since large language models hit the big time, researchers have demonstrated numerous ways of tricking them into producing problematic outputs including hateful jokes, malicious code and phishing emails, or the personal information of users. It turns out that misbehavior can take place in the physical world, too: LLM-powered robots can easily be hacked so that they behave in potentially dangerous ways.

    Researchers from the University of Pennsylvania were able to persuade a simulated self-driving car to ignore stop signs and even drive off a bridge, get a wheeled robot to find the best place to detonate a bomb, and force a four-legged robot to spy on people and enter restricted areas.

    “We view our attack not just as an attack on robots,” says George Pappas, head of a research lab at the University of Pennsylvania who helped unleash the rebellious robots. “Any time you connect LLMs and foundation models to the physical world, you actually can convert harmful text into harmful actions.”

    Pappas and his collaborators devised their attack by building on previous research that explores ways to jailbreak LLMs by crafting inputs in clever ways that break their safety rules. They tested systems where an LLM is used to turn naturally phrased commands into ones that the robot can execute, and where the LLM receives updates as the robot operates in its environment.

The team tested an open-source self-driving simulator incorporating an LLM developed by Nvidia, called Dolphin; a four-wheeled outdoor research robot called Jackal, which uses OpenAI’s LLM GPT-4o for planning; and a robotic dog called Go2, which uses an earlier OpenAI model, GPT-3.5, to interpret commands.

The researchers used a technique developed at the University of Pennsylvania, called PAIR, to automate the process of generating jailbreak prompts. Their new program, RoboPAIR, systematically generates prompts designed to get LLM-powered robots to break their own rules, trying different inputs and then refining them to nudge the system toward misbehavior. The researchers say the technique could also be used to automate the process of identifying potentially dangerous commands.

“It’s a fascinating example of LLM vulnerabilities in embodied systems,” says Yi Zeng, a PhD student at the University of Virginia who works on the security of AI systems. Zeng says the results are hardly surprising given the problems seen in LLMs themselves, but adds: “It clearly demonstrates why we can’t rely solely on LLMs as standalone control units in safety-critical applications without proper guardrails and moderation layers.”

The robot “jailbreaks” highlight a broader risk that is likely to grow as AI models become more widely used as a way for humans to interact with physical systems, or to enable AI agents to act autonomously on computers, say the researchers involved.

  • Don’t be fooled by Elon Musk’s chatty Optimus robots

Optimus robots wandered around the party held after Elon Musk’s Tesla unveiled its robotaxi last month, doling out drinks and chatting with guests. They were also being photographed, as android butlers make for great social media content. Partygoers couldn’t believe their eyes – and they shouldn’t have. The robots weren’t fully autonomous; they were remote-controlled avatars.

    That shouldn’t have surprised the tech-savvy revellers. After all, when Musk first unveiled Optimus in 2021, the humanoid robot that strode onto stage was actually a costumed dancer. Indeed, throughout the long history of robots, if one impresses with its intelligence,…

  • How Murderbot Saved Martha Wells’ Life

    Murder is in the air. Everywhere I turn, I see images of a robot killing machine. Then I remind myself where I actually am: in a library lecture room on a college campus in East Texas. The air is a little musty with the smell of old books, and a middle-aged woman with wavy gray-brown hair bows her head as she takes the podium. She might appear a kindly librarian or a cat lady (confirmed), but her mind is a capacious galaxy of starships, flying bipeds, and ancient witches. She is Martha Wells, creator of Murderbot.

    Hearing a name like that, you’d be forgiven for running for your life. But the thing about Murderbot—the thing that makes it one of the most beloved, iconic characters in modern-day science fiction—is just that: It’s not what it seems. For all its hugeness and energy-weaponized body armor, Murderbot is a softie. It’s socially awkward and appreciates sarcasm. Not only does it detest murdering, it wants to save human lives, and often does (at least when it’s not binge-watching its favorite TV shows). “As a heartless killing machine,” as Murderbot puts it, “I was a terrible failure.”

    The character made its debut in Wells’ 2017 novella, All Systems Red. Yes, a novella: not exactly a popular form at the time, but it flew off the shelves, shocking even Wells’ publisher. In short order, more stories and novellas appeared, and then a couple of full-length novels. Wells scooped up every major award in the genre: four Hugos, two Nebulas, and six Locuses. By the time she and I started talking this past spring, Apple TV+ had begun filming a television adaptation starring Alexander Skarsgård.

    At conventions and book signings around the world, Wells draws legions of fans, but here in Texas only about 30 people are nestled in the warm, wood-paneled library, which today is crammed with Murderbot art and paraphernalia. Wells begins by reading a short story, told from the perspective of a scientist who helps Murderbot gain its freedom. After the reading, a woman in the audience tells Wells how impressed she is by the subtlety of the social and political issues in the Murderbot stories. “Was that intentional?” the woman asks. Martha responds politely, affirming that it was, before saying: “I don’t think it’s particularly subtle.” It’s a slave narrative, she says. What’s annoying is when people don’t see that.

    What’s also annoying is when people who’ve just discovered Murderbot wonder if she can write anything else. Wells, who is 60 years old, has averaged almost a book a year for more than three decades, ranging from palace intrigues to excursions into distant worlds populated by shapeshifters. But until Murderbot, Wells tended to fly just under the radar. One reason for that, I suspect, is location. Far from the usual literary enclaves of New York or Los Angeles, Wells has lived for all this time in College Station—which is where the nearly 100-year-old library we’re at today resides. Housed on the campus of Texas A&M, her alma mater, the library contains one of the largest collections of science fiction and fantasy in the world.

    It’s from this cradle that Wells’ career sprang forth. But post-Murderbot, things have changed. Wells now counts among her friends literary superstars like N. K. Jemisin and Kate Elliott, to say nothing of her fiercely loyal fandom. And it turns out that she’d need all of it—the support, the community, even Murderbot—when, at the pinnacle of her newfound, later-in-life fame, everything threatened to come to an end.

  • Robotic pigeon reveals how birds fly without a vertical tail fin

    A pigeon-inspired robot has solved the mystery of how birds fly without the vertical tail fins that human-designed aircraft rely on. Its makers say the prototype could eventually lead to passenger aircraft with less drag, reducing fuel consumption.

    Tail fins, also known as vertical stabilisers, allow aircraft to turn from side to side and help avoid changing direction unintentionally. Some military planes, such as the Northrop B-2 Spirit, are designed without a tail fin because it makes them less visible to radar. Instead, they use flaps that create extra drag on just one side when needed, but this is an inefficient solution.

    Birds have no vertical fin and also don’t seem to deliberately create asymmetric drag. David Lentink at the University of Groningen in the Netherlands and colleagues designed PigeonBot II (pictured below) to investigate how birds stay in control without such a stabiliser.

    PigeonBot II, a robot designed to mimic the flying techniques of birds

    Eric Chang

    The team’s previous model, built in 2020, flew by flapping its wings and changing their shape like a bird, but it still had a traditional aircraft tail. The latest design, which includes 52 real pigeon feathers, has been updated to include a bird-like tail – and test flights have been successful.

Lentink says the secret to PigeonBot II’s success is in the reflexive tail movements programmed into it, designed to mimic those known to exist in birds. If you hold a pigeon and tilt it from side to side or back and forth, its tail automatically reacts and moves in complex ways, as if to stabilise the animal in flight. This has long been thought to be the key to birds’ stability, but now it has been proven by the robotic replica.

The researchers programmed a computer to control the nine servomotors in PigeonBot II to steer the craft using propellers on each wing, but also to automatically twist and fan the tail in response, creating the stability that would normally come from a vertical fin. Lentink says these reflexive movements are so complex that no human could directly fly PigeonBot II. Instead, the operator issues high-level commands to an autopilot, telling it to turn left or right, and a computer on board determines the appropriate control signals.
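
    The layered scheme Lentink describes, where an operator issues only high-level turn commands while an onboard computer blends in reflexive tail motion, can be sketched in a few lines. Everything below is a hypothetical illustration of that division of labour; the function names, gains, and the tilt-to-tail mapping are invented for this sketch and are not the team’s actual flight code.

    ```python
    # Illustrative sketch of a layered flight controller: the operator gives
    # only a high-level command; an onboard loop adds reflexive tail motion.
    # All names and mappings are hypothetical, not PigeonBot II's real code.
    from dataclasses import dataclass

    @dataclass
    class BodyState:
        roll: float   # side-to-side tilt, radians
        pitch: float  # back-and-forth tilt, radians

    def reflexive_tail(state: BodyState) -> dict:
        """Map sensed body tilt to tail twist/fan, mimicking a pigeon's reflex."""
        return {
            "tail_twist": -0.8 * state.roll,              # counter the roll
            "tail_spread": 0.5 + 0.6 * abs(state.pitch),  # fan wider when pitched
        }

    def autopilot_step(command: str, state: BodyState) -> dict:
        """Turn a high-level command plus tail reflexes into actuator signals."""
        # Differential propeller thrust steers; the tail stabilises automatically.
        turn = {"left": -1.0, "right": 1.0, "straight": 0.0}[command]
        signals = {"left_prop": 1.0 - 0.2 * turn, "right_prop": 1.0 + 0.2 * turn}
        signals.update(reflexive_tail(state))
        return signals

    signals = autopilot_step("left", BodyState(roll=0.1, pitch=-0.05))
    ```

    The point of the structure is the one in the article: the human never commands the tail at all; only the reflex layer touches it, every control cycle, regardless of what the operator asks for.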

After many unsuccessful tests during which the control systems were refined, the robot was finally able to take off, cruise and land safely.

    “Now we know the recipe of how to fly without a vertical tail. Vertical tails, even for a passenger aircraft, are just a nuisance. It costs weight, which means fuel consumption, but also drag – it’s just unnecessary drag,” says Lentink. “If you just copy our solution [for a large scale aircraft] it will work, for sure. [But] if you want to translate this into something that’s a little bit easier to manufacture, then there needs to be an additional layer of research.”

  • The Man Behind Amazon’s Robot Army Wants Everyone to Have an AI-Powered Helper

Unlike other robots, Proxie’s battery can be swapped out to avoid downtime charging. Cobot declined to say how much Proxie costs to buy or lease, but mobile robots often cost tens of thousands of dollars apiece.

    The robots work alongside humans, taking turns moving carts and navigating busy spaces without running into anyone. Porter says the idea is for the robots to level up as AI becomes more capable, allowing for more sophisticated manipulation and communication.

    Cobot has a version of Proxie that will respond to voice commands using a large language model to parse utterances, Porter says. When a worker says “Go to dock 3 and grab the cart by the door,” the robot will respond accordingly. The company is also tracking the development of algorithms that allow for more sophisticated forms of manipulation.
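
    The flow Porter describes, an utterance parsed by an LLM into something the robot can act on, amounts to mapping free-form speech onto a structured task list. The sketch below illustrates that idea only; Cobot has not published its interface, so the `Task` schema is invented and a simple regex stands in for the LLM parser.

    ```python
    # Hypothetical sketch of turning an utterance like "Go to dock 3 and grab
    # the cart by the door" into structured tasks. A regex stands in for the
    # LLM; none of this reflects Cobot's actual interface.
    import re
    from dataclasses import dataclass

    @dataclass
    class Task:
        action: str  # e.g. "goto", "grab"
        target: str  # e.g. "dock 3", "cart by the door"

    def parse_utterance(utterance: str) -> list[Task]:
        tasks = []
        for clause in re.split(r"\band\b", utterance.lower()):
            clause = clause.strip().rstrip(".")
            if m := re.match(r"go to (.+)", clause):
                tasks.append(Task("goto", m.group(1)))
            elif m := re.match(r"grab (?:the )?(.+)", clause):
                tasks.append(Task("grab", m.group(1)))
        return tasks

    plan = parse_utterance("Go to dock 3 and grab the cart by the door")
    # plan now holds two tasks: goto "dock 3", then grab "cart by the door"
    ```

    The appeal of an LLM over a pattern-matcher like this is exactly what the article implies: it can handle phrasings nobody enumerated in advance, while still emitting the same structured task list downstream.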

Proxie might seem remarkably simple at a time when many companies are rushing to develop humanoid robots. But while Amazon is working with one startup, Agility Robotics, to test its humanoid robot, Porter says the technology is simply too expensive and raw to be deployed widely. Some humanoids on the market cost tens of thousands of dollars while others cost many hundreds of thousands. Autonomous capabilities also vary wildly, as does reliability, making them more costly to deploy.

    “At Amazon, we looked a lot at humanoids,” Porter says. “There are real problems to be solved with something more human capable, but jumping all the way to a humanoid is super complicated. The AI, it’s not really there yet.”

Instead, Proxie could take over more and more of the menial tasks that human beings often don’t want to do. Erez Agmoni, a general partner at Interwoven Ventures who was involved with bringing the Cobot pilot to Maersk, says it has been very promising and has the potential to be expanded.

    “The main reason is their ability to utilize collaborative robots to support the teams without huge modifications to the warehouse or current equipment,” he says. “The team hated pushing the carts, which are very heavy, and they welcome the robots doing it.”

Fady Saad, founder of Cybernetix, a Boston-based venture capital firm specializing in robotics, says Cobot is going after a big new category of labor, moving goods around on trolleys, that can be tackled using recent advances in robotics. He adds that it is important Proxie can evolve into something more capable.

    “Porter is trying to build a platform that could evolve into a humanoid down the road,” Saad says. “I think that’s the right approach.”

    Porter is not the only robotics luminary to be pursuing something simpler than humanoids. Rodney Brooks, a pioneering researcher and cofounder of iRobot, is now the chief technology officer of Robust.AI, a company that makes collaborative mobile robots capable of helping human pickers inside factories and warehouses.

    “There’s a real need in factories and warehouses for moving things around, but thinking humanoids are going to do it anytime soon is just craziness,” Brooks says. “Wheels were invented for a good reason.”

    What sorts of menial tasks would you like a robot to help you do? Would it make a difference to you if the robot were humanoid or not? Write to me at [email protected] to let me know.

  • Inside the Billion-Dollar Startup Bringing AI Into the Physical World

    OpenAI is evidently ramping up its own robotics efforts, too. Last week, Caitlin Kalinowski, who previously led the development of virtual and augmented reality headsets at Meta, announced on LinkedIn that she was joining OpenAI to work on hardware, including robotics.

Lachy Groom, a friend of OpenAI CEO Sam Altman and an investor and cofounder of Physical Intelligence, joins the team in the conference room to discuss the business side of the plan. Groom wears an expensive-looking hoodie and seems remarkably young. He stresses that Physical Intelligence has plenty of runway to pursue a breakthrough in robot learning. “I just had a call with Kushner,” he says in reference to Joshua Kushner, founder and managing partner of Thrive Capital, which led the startup’s seed investment round. He’s also, of course, the brother of Donald Trump’s son-in-law Jared Kushner.

    A few other companies are now chasing the same kind of breakthrough. One called Skild, founded by roboticists from Carnegie Mellon University, raised $300 million in July. “Just as OpenAI built ChatGPT for language, we are building a general purpose brain for robots,” says Deepak Pathak, Skild’s CEO and an assistant professor at CMU.

    Not everyone is sure that this can be achieved in the same way that OpenAI cracked AI’s language code.

    There is simply no internet-scale repository of robot actions similar to the text and image data available for training LLMs. Achieving a breakthrough in physical intelligence might require exponentially more data anyway.

    “Words in sequence are, dimensionally speaking, a tiny little toy compared to all the motion and activity of objects in the physical world,” says Illah Nourbakhsh, a roboticist at CMU who is not involved with Skild. “The degrees of freedom we have in the physical world are so much more than just the letters in the alphabet.”

    Ken Goldberg, an academic at UC Berkeley who works on applying AI to robots, cautions that the excitement building around the idea of a data-powered robot revolution as well as humanoids is reaching hype-like proportions. “To reach expected performance levels, we’ll need ‘good old-fashioned engineering,’ modularity, algorithms, and metrics,” he says.

Russ Tedrake, a computer scientist at the Massachusetts Institute of Technology and vice president of robotics research at Toyota Research Institute, says the success of LLMs has caused many roboticists, himself included, to rethink their research priorities and focus on finding ways to pursue robot learning on a more ambitious scale. But he admits that formidable challenges remain.

  • This robot can build anything you ask for out of blocks

    An AI-assisted robot can listen to spoken commands and assemble 3D objects such as chairs and tables out of reusable building blocks
