Tag: robotics

  • Fast forward to the fluffy revolution, when robot pets win our hearts

    [Image: Portrait of robot dog sitting on wooden floor, 3D rendering (Westend61 / Anna Huber / Getty Images)]

    “The bodies of the first robot pets were based on cats and dogs, but dragons and Ewoks later became popular”

    There is no doubting the value of companion animals, either during our evolutionary history, when dogs especially helped with hunting and guarding, or in recent times, when eroding social connections meant people leaned on animals for the emotional bond they previously got from humans. But the carbon pawprint of pets was unduly heavy.

    By the 2020s, there were more than a billion dogs in the world, causing untold ecological damage. Cats and…

  • The Godmother of AI Wants Everyone to Be a World Builder

    According to market-fixated tech pundits and professional skeptics, the artificial intelligence bubble has popped, and winter’s back. Fei-Fei Li isn’t buying that. In fact, Li—who earned the sobriquet the “godmother of AI”—is betting on the contrary. She’s on a part-time leave from Stanford University to cofound a company called World Labs. While current generative AI is language-based, she sees a frontier where systems construct complete worlds with the physics, logic, and rich detail of our physical reality. It’s an ambitious goal, and despite the dreary nabobs who say progress in AI has hit a grim plateau, World Labs is on the funding fast track. The startup is perhaps a year away from having a product—and it’s not clear at all how well it will work when and if it does arrive—but investors have pitched in $230 million and are reportedly valuing the nascent startup at a billion dollars.

    Roughly a decade ago, Li helped AI turn a corner by creating ImageNet, a bespoke database of digital images that allowed neural nets to get significantly smarter. She feels that today’s deep-learning models need a similar boost if AI is to create actual worlds, whether they’re realistic simulations or totally imagined universes. Future George R.R. Martins might compose their dreamed-up worlds as prompts instead of prose, which you might then render and wander around in. “The physical world for computers is seen through cameras, and the computer brain behind the cameras,” Li says. “Turning that vision into reasoning, generation, and eventual interaction involves understanding the physical structure, the physical dynamics of the physical world. And that technology is called spatial intelligence.” World Labs calls itself a spatial intelligence company, and its fate will help determine whether that term becomes a revolution or a punch line.

    Li has been obsessing over spatial intelligence for years. While everyone was going gaga over ChatGPT, she and a former student, Justin Johnson, were excitedly gabbing in phone calls about AI’s next iteration. “The next decade will be about generating new content that takes computer vision, deep learning, and AI out of the internet world, and gets them embedded in space and time,” says Johnson, who is now an assistant professor at the University of Michigan.

    Li decided to start a company early in 2023, after a dinner with Martin Casado, a pioneer in virtual networking who is now a partner at Andreessen Horowitz. That’s the VC firm notorious for its near-messianic embrace of AI. Casado sees AI as being on a similar path to computer games, which started with text, moved to 2D graphics, and now have dazzling 3D imagery. Spatial intelligence will drive the change. Eventually, he says, “You could take your favorite book, throw it into a model, and then you literally step into it and watch it play out in real time, in an immersive way.” The first step to making that happen, Casado and Li agreed, is moving from large language models to large world models.

    Li began assembling a team, with Johnson as a cofounder. Casado suggested two more people—one was Christoph Lassner, who had worked at Amazon, Meta’s Reality Labs, and Epic Games. He is the inventor of Pulsar, a rendering scheme that led to a celebrated technique called 3D Gaussian Splatting. That sounds like an indie band at an MIT toga party, but it’s actually a way to synthesize scenes, as opposed to one-off objects. Casado’s other suggestion was Ben Mildenhall, who had created a powerful technique called NeRF—neural radiance fields—that transmogrifies 2D pixel images into 3D graphics. “We took real-world objects into VR and made them look perfectly real,” he says. He left his post as a senior research scientist at Google to join Li’s team.
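
    For context, NeRF’s core is a differentiable volume-rendering rule: a network predicts a density and a color at sample points along each camera ray, and those samples are alpha-composited into one pixel. Below is a minimal numpy sketch of just that compositing step, with the network omitted; the names and shapes are our illustration, not Mildenhall’s or World Labs’ code.

      import numpy as np

      def render_ray(densities, colors, deltas):
          """Alpha-composite per-sample (density, color) pairs into a pixel.

          densities: (N,) non-negative sigma at each sample along the ray
          colors:    (N, 3) RGB predicted at each sample
          deltas:    (N,) spacing between adjacent samples
          """
          alphas = 1.0 - np.exp(-densities * deltas)    # per-segment opacity
          # light surviving to reach each sample
          trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
          weights = trans * alphas
          return (weights[:, None] * colors).sum(axis=0)  # final (3,) pixel color

      # Three samples along one ray: empty space, faint haze, then a dense red surface.
      sigma = np.array([0.0, 0.1, 50.0])
      rgb = np.array([[0.0, 0.0, 0.0], [0.2, 0.2, 0.2], [1.0, 0.0, 0.0]])
      print(render_ray(sigma, rgb, np.full(3, 0.1)))    # ~[0.99, 0.00, 0.00]

    Training pushes rendered pixels toward the photographed ones, and because every step above is differentiable, a 3D scene emerges by gradient descent.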

    One obvious goal of a large world model would be imbuing, well, world-sense into robots. That indeed is in World Labs’ plan, but not for a while. The first phase is building a model with a deep understanding of three-dimensionality, physicality, and notions of space and time. Next will come a phase where the models support augmented reality. After that the company can take on robotics. If this vision is fulfilled, large world models will improve autonomous cars, automated factories, and maybe even humanoid robots.

  • Inside Google’s 7-Year Mission to Give AI a Robot Body

    Often during evenings and sometimes weekends, when the robots weren’t busy doing their daily chores, Catie and her impromptu team would gather a dozen or so robots in a large atrium in the middle of X. Flocks of robots began moving together, at times haltingly, yet always in interesting patterns, with what often felt like curiosity and sometimes even grace and beauty. Tom Engbersen is a roboticist from the Netherlands who paints replicas of classic masterpieces in his spare time. He began a side project collaborating with Catie on an exploration of how dancing robots might respond to music or even play an instrument. At one point he had a novel idea: What if the robots became instruments themselves? This kicked off an exploration where each joint on the robot played a sound when it moved. When the base moved it played a bass sound; when a gripper opened and closed it made a bell sound. When we turned on music mode, the robots created unique orchestral scores every time they moved. Whether they were traveling down a hallway, sorting trash, cleaning tables, or “dancing” as a flock, the robots moved and sounded like a new type of approachable creature, unlike anything I had ever experienced.

    This Is Only the Beginning

    In late 2022, the end-to-end versus hybrid conversations were still going strong. Peter and his teammates, with our colleagues in Google Brain, had been working on applying reinforcement learning, imitation learning, and transformers—the architecture behind LLMs—to several robot tasks. They were making good progress on showing that robots could learn tasks in ways that made them general, robust, and resilient. Meanwhile, the applications team led by Benjie was working on taking AI models and using them with traditional programming to prototype and build robot services that could be deployed among people in real-world settings.

    Meanwhile, Project Starling, as Catie’s multi-robot installation ended up being called, was changing how I felt about these machines. I noticed how people were drawn to the robots with wonder, joy, and curiosity. It helped me understand that how robots move among us, and what they sound like, will trigger deep human emotion; it will be a big factor in how, even if, we welcome them into our everyday lives.

    We were, in other words, on the cusp of truly capitalizing on the biggest bet we had made: robots powered by AI. AI was giving them the ability to understand what they heard (spoken and written language) and translate it into actions, or understand what they saw (camera images) and translate that into scenes and objects that they could act on. And as Peter’s team had demonstrated, robots had learned to pick up objects. After more than seven years we were deploying fleets of robots across multiple Google buildings. A single type of robot was performing a range of services: autonomously wiping tables in cafeterias, inspecting conference rooms, sorting trash, and more.

    Which was when, in January 2023, two months after OpenAI introduced ChatGPT, Google shut down Everyday Robots, citing overall cost concerns. The robots and a small number of people eventually landed at Google DeepMind to conduct research. In spite of the high cost and the long timeline, everyone involved was shocked.

    A National Imperative

    In 1970, for every person over 64 in the world, there were 10 people of working age. By 2050, there will likely be fewer than four. We’re running out of workers. Who will care for the elderly? Who will work in factories, hospitals, restaurants? Who will drive trucks and taxis? Countries like Japan, China, and South Korea understand the immediacy of this problem. There, robots are not optional. Those nations have made it a national imperative to invest in robotics technologies.

    Giving AI a body in the real world is both an issue of national security and an enormous economic opportunity. If a technology company like Google decides it cannot invest in “moonshot” efforts like the AI-powered robots that will complement and supplement the workers of the future, then who will? Will Silicon Valley or other startup ecosystems step up, and if so, will there be access to patient, long-term capital? I have doubts. The reason we called Everyday Robots a moonshot is that building highly complex systems at this scale went way beyond what venture-capital-funded startups have historically had the patience for. While the US is ahead in AI, building the physical manifestation of it—robots—requires skills and infrastructure where other nations, most notably China, are already leading.

    The robots did not show up in time to help my mother. She passed away in early 2021. Our frequent conversations toward the end of her life convinced me more than ever that a future version of what we started at Everyday Robots will be coming. In fact, it can’t come soon enough. So the question we are left to ponder becomes: How does this kind of change and future happen? I remain curious, and concerned.


  • Could This Be the Start of Amazon’s Next Robot Revolution?

    In 2012, Amazon quietly acquired a robotics startup called Kiva Systems, a move that dramatically improved the efficiency of its ecommerce operations and kickstarted a wider revolution in warehouse automation.

    Last week, the ecommerce giant announced another deal that could prove similarly profound, agreeing to hire the founders of Covariant, a startup that has been testing ways for AI to automate more of the picking and handling of a wide range of physical objects.

    Covariant may have found it challenging to commercialize AI-infused industrial robots, given the high costs and sharp competition involved. The deal, which will also see Amazon license Covariant’s models and data, could bring about another revolution in ecommerce—one that might prove hard for any competitor to match given Amazon’s vast operational scale and data trove.

    The deal is also an example of a Big Tech company acquiring core talent and expertise from an AI startup without actually buying the company outright. Amazon came to a similar agreement with the startup Adept in June. In March, Microsoft struck a deal with Inflection, and in August, Google hired the founders of Character AI.

    Back in the aughts, Kiva developed a way to move products through warehouses by having squat robots lift and carry stocked shelves over to human pickers—a trick that meant workers no longer needed to walk miles every day to find different items. Kiva’s mobile bots were similar to those employed in manufacturing, and the company used clever algorithms to coordinate the movement of thousands of bots in the same physical space.
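
    Kiva’s production algorithms are proprietary, but the problem it solved is classic multi-agent path planning. One standard textbook idea, sketched below as a toy (ours, not Kiva’s): plan robots one at a time on a grid, reserving each (cell, time) slot a robot will occupy so that later robots route around it.

      from collections import deque

      def plan(start, goal, grid, reserved, max_t=50):
          """BFS a timed path, avoiding (cell, time) slots other robots hold.
          Handles vertex conflicts only; a real planner would also forbid
          head-on swaps and optimize travel time."""
          queue = deque([(start, 0, [start])])
          seen = {(start, 0)}
          while queue:
              (x, y), t, path = queue.popleft()
              if (x, y) == goal:
                  return path
              if t >= max_t:
                  continue
              for dx, dy in [(0, 1), (0, -1), (1, 0), (-1, 0), (0, 0)]:  # moves + wait
                  nxt = (x + dx, y + dy)
                  if (0 <= nxt[0] < len(grid) and 0 <= nxt[1] < len(grid[0])
                          and grid[nxt[0]][nxt[1]] == 0        # 0 = free floor, 1 = shelf
                          and (nxt, t + 1) not in reserved
                          and (nxt, t + 1) not in seen):
                      seen.add((nxt, t + 1))
                      queue.append((nxt, t + 1, path + [nxt]))
          return None

      grid = [[0] * 5 for _ in range(5)]                # open 5x5 floor
      reserved = set()
      for start, goal in [((0, 0), (4, 4)), ((4, 0), (0, 4))]:
          path = plan(start, goal, grid, reserved)
          reserved.update((cell, t) for t, cell in enumerate(path))
          print(path)

    Planning sequentially like this scales to large fleets because each robot only needs to check a shared reservation table, not negotiate with every other robot.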

    Amazon’s mobile robot army grew from around 10,000 in 2013 to 750,000 by 2023, and the sheer scale of the company’s operations meant that it could deliver millions of items faster and cheaper than anyone else.

    As WIRED revealed last year, Amazon has in recent years developed new robotic systems that rely on machine learning to do things like perceive, grab, and sort packed boxes. Again, Amazon is leveraging scale to its advantage: training data gathered as items flow through its facilities helps improve the performance of its different algorithms. The effort has already led to further automation of work previously done by humans at some fulfillment centers.

    The one chore that remains stubbornly difficult to mechanize, however, is the physical grasping of products. It requires adaptability to account for things like friction and slippage, and robots will inevitably be confronted with unfamiliar and awkward items among Amazon’s vast inventory.

    Covariant has spent the past few years developing AI algorithms with a more general ability to handle a range of items more reliably. The company was founded in 2017 by Pieter Abbeel, a professor at UC Berkeley who has done pioneering work on applying machine learning to robotics, along with several of his students, including Peter Chen, who became Covariant’s CEO, and Rocky Duan, the company’s CTO. The deal will see all three of them, along with several research scientists at the startup, join Amazon.

    “Covariant’s models will be used to power some of the robotic manipulation systems across our fulfillment network,” Alexandra Miller, an Amazon spokesperson, tells WIRED. The tech giant declined to reveal financial details of the deal.

    Abbeel was an early employee at OpenAI, and his company has taken inspiration from the story of ChatGPT’s success. In March, Covariant demonstrated a chat interface for its robot and said it had developed a foundation model for robotic grasping, meaning an algorithm designed to become

  • The Japanese Robot Controversy Lurking in Israel’s Military Supply Chain

    Japan, for example, makes it relatively easy to export dual-use technologies to the United States and Europe, and vice versa. Because they are recognized as trusted countries under Japanese export law, companies in those states are generally free to use Japanese dual-use technology to produce arms—and to, in turn, export those arms to other states (subject to their own export controls).

    This, itself, has drawn the BDS activists’ ire: They want FANUC to end its relationship with American defense contractors like General Dynamics and Lockheed Martin, which sell considerable amounts of advanced weaponry to Israel. “We demand that such business relationships be immediately terminated and that the two companies never do business with each other again,” Imano said in June. But the activists go further, arguing that FANUC is, despite what it says publicly, actually doing business with Israeli defense firms.

    “FANUC sells its robots and provides maintenance and inspection services to Israeli military companies such as Elbit Systems,” Imano claimed.

    FANUC has denied this charge. “When we sell products to Israel, we carry out the necessary transaction screening in accordance with Japan’s Foreign Exchange and Foreign Trade Act, confirm the user’s business activities and intended use, and do not sell to Israel if the products are for military use,” the company wrote to HuffPost.

    The company added that, after reviewing their records of the past five years, “we have not sold any products for military use to the Israeli companies Elbit Systems, IAI, BSEL, Rosenshine Plast, or AMI from our company or our European subsidiary. We have also not sold any products for military use to other Israeli companies from our company or our European subsidiary.” The company identified one instance where one of their robotic arms had been sold to an Israeli company that produces military hardware “after confirming that the machine was to be used for civilian medical purposes.”

    At the same time, the company admitted that when they sell through intermediaries, of which Israel has several, they are not always able to guarantee “who the final customer is.”

    There is, however, ample evidence suggesting that FANUC robotic arms have made their way into the Israeli defense manufacturing sector. Multiple job listings posted by Elbit Systems, the primary domestic supplier of the Israel Defense Forces, list “knowledge of FANUC … controls” as either an advantage for job applicants or a requirement. One such listing, from June, comes from Elbit Cyclone, the division that won a contract to produce fuselage components for the F-35 fighter jet. In January, Israel’s Ministry of Defense published a video showing a FANUC robotic arm handling munitions at an Elbit factory.

    Another Israeli company, Bet Shemesh Engines (BSEL), more than a decade ago created marketing videos and uploaded photos to its company website featuring FANUC robotic arms. The CV of a former employee suggests the company used FANUC robots to assemble aircraft engines, which may be used for civilian rather than military purposes. Bet Shemesh counts the Israeli Air Force as a major client.

  • A glob of jelly can play Pong thanks to a basic kind of memory

    [Image: Pong is a simple video game (INTERFOTO/Alamy)]

    An inanimate glob of ion-laced jelly can play the computer game Pong and even improve over time. Researchers plan further experiments to explore whether it can handle more complex computations and hope it could eventually be used to control robots.

    Inspired by previous research that used brain cells in a dish to play Pong, Vincent Strong and his colleagues at the University of Reading, UK, decided to try playing the tennis-like game with an even simpler material. They took a polymer material containing water and laced it with ions to make it responsive to electrical stimuli. When electricity is passed through the material, those ions move to the source of the current, dragging water with them and causing the gel to swell.

    In an experiment, the researchers used a standard computer to run a game of Pong and passed current into different points on the hydrogel via a three-by-three grid of electrodes, representing the ball’s position as it moved. A second grid of electrodes measured the concentration of ions in the hydrogel, which the computer interpreted as instructions on where to move the paddle.
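
    In code terms, the interface the researchers describe is just two mappings: ball position in, paddle position out. The sketch below is a hypothetical illustration of such an encoding and decoding; the paper describes the two electrode grids, but the exact mappings here are our assumption.

      import numpy as np

      def ball_to_currents(ball_xy):
          """Encode a ball position in [0,1)^2 as drive currents on a 3x3 grid."""
          currents = np.zeros((3, 3))
          col, row = (np.array(ball_xy) * 3).astype(int).clip(0, 2)
          currents[row, col] = 1.0          # stimulate the electrode nearest the ball
          return currents

      def readings_to_paddle(ion_readings):
          """Decode a vertical paddle position in [0,1] from 3x3 ion readings."""
          row_weights = ion_readings.sum(axis=1)        # ion concentration per row
          return float((np.arange(3) * row_weights).sum()
                       / (2 * row_weights.sum() + 1e-9))

      print(ball_to_currents((0.9, 0.1)))               # ball near the top-right corner
      print(readings_to_paddle(np.array([[0.8, 0.1, 0.0],
                                         [0.1, 0.0, 0.0],
                                         [0.0, 0.0, 0.0]])))  # ~0.05, paddle near top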


    The team found that the hydrogel could not only play the game but also improve with practice: its accuracy increased by up to 10 per cent and rallies grew longer.

    The hydrogel swells faster than it shrinks, while its rate of swelling slows even as the electrical current remains constant. The researchers say that these properties create a rudimentary sort of memory, as signs of the swelling remain recorded in the gel.
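
    Those two asymmetries are enough to produce a toy form of memory: a system that swells toward its input quickly but relaxes slowly keeps a trace of past stimulation in its state. A deliberately simple first-order model (ours, not the paper’s) makes the point.

      import numpy as np

      def gel_state(stimulus, k_swell=0.5, k_relax=0.05):
          """Asymmetric first-order response: fast swelling, slow relaxation."""
          s, trace = 0.0, []
          for u in stimulus:
              rate = k_swell if u > s else k_relax   # swell fast, shrink slowly
              s += rate * (u - s)
              trace.append(s)
          return np.array(trace)

      # A 5-step pulse leaves a tail that lingers for dozens of steps:
      # the gel's present shape still encodes what happened earlier.
      print(np.round(gel_state([1.0] * 5 + [0.0] * 20), 2))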

    “Instead of it just knowing what’s immediately happened, it has a memory of the ball’s motion over the entirety of the game,” says Strong. “So it sort of gains an experience of the ball’s general motion, not just its current position. It sort of becomes a black-box neural network that has a memory of the ball’s behaviour, how it behaves and how it moves.”

    [Image: A polymer gel sandwiched between electrodes that supply current and measure ion levels (Vincent Strong et al., 2024)]

    Strong says the hydrogel is vastly less complex than neurons in a brain, but the experiment proves it is capable of similar tasks. He believes it could be used to develop new algorithms for normal computers that achieve tasks using the bare minimum of resources, allowing more efficient problem-solving. But it could also be an analogue computer in its own right.

    “I won’t rule out having a hydrogel thing inside the brain of robots,” says Strong. “That sounds cool, and I’d like to see it. Although, the practicality… we don’t know yet.”

  • Advancing piezoelectric sensors to monitor robotic movement

    Flexible piezoelectric sensors are essential to monitor the motions of both humans and humanoid robots.

    However, existing piezoelectric designs are either costly or limited in sensitivity.

    In a recent study, researchers from Japan tackled these issues by developing a novel piezoelectric composite material made from electrospun polyvinylidene fluoride nanofibers combined with dopamine.

    Sensors made from this material showed significant performance and stability improvements at a low cost, promising advancements in medicine, healthcare, and robotics.

    Flexible sensors are crucial for advancing modern robotics

    The world is accelerating rapidly towards the intelligent era—a stage in history marked by increased automation and interconnectivity, driven by technologies such as artificial intelligence and robotics.

    As a sometimes-overlooked foundational requirement in this transformation, sensors represent an essential interface between humans, machines, and their environment.

    However, now that robots are becoming more agile and wearable electronics are no longer confined to science fiction, traditional silicon-based sensors won’t be suitable for many applications.

    Therefore, flexible sensors, which provide better comfort and higher versatility, have become a very active area of study.

    Piezoelectric sensors are particularly important in this regard, as they can convert mechanical stress and stretch into an electrical signal. Despite numerous promising approaches, there remains a lack of environmentally sustainable methods for mass-producing flexible, high-performance piezoelectric sensors at a low cost.
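
    For a rough sense of scale: in the direct piezoelectric effect, the charge generated is Q = d33 * F, and PVDF’s d33 coefficient is on the order of tens of pC/N. The figures below are assumed, representative values, not numbers from the study.

      # Back-of-the-envelope for a PVDF touch sensor (all values assumed).
      d33 = 25e-12         # C/N, representative magnitude for PVDF
      force = 2.0          # N, a light fingertip tap
      capacitance = 1e-9   # F, assumed sensor capacitance

      charge = d33 * force              # 5e-11 C
      voltage = charge / capacitance    # V = Q / C -> 0.05 V
      print(f"{charge:.1e} C, {voltage * 1000:.0f} mV")

    Signal levels this small are one reason improvements in piezoelectric performance matter so much for practical sensors.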

    Could electrospinning address flexibility issues?

    The proposed flexible sensor design involves the stepwise electrospinning of a composite 2D nanofiber membrane.

    First, polyvinylidene fluoride (PVDF) nanofibers with diameters on the order of 200 nm are spun, forming a strong, uniform network that acts as the base of the piezoelectric sensor. Then, ultrafine PVDF nanofibers with diameters smaller than 35 nm are spun onto the preexisting base.

    These ultrafine fibers automatically interweave through the gaps of the base network, creating a distinctive 2D topology.

    After characterisation via experiments, simulations, and theoretical analyses, the researchers found that the resulting composite PVDF network had enhanced beta crystal orientation.

    By enhancing this polar phase, which is responsible for the piezoelectric effect observed in PVDF materials, the piezoelectric performance of the sensors was significantly improved.

    Testing the sensors in wearable devices

    These exceptional qualities were demonstrated practically using wearable sensors to measure a wide variety of movements and actions.

    Given the potential low-cost mass production of these piezoelectric sensors, combined with their use of environmentally friendly organic materials instead of harmful inorganics, this study could have important technological implications not only for health monitoring and diagnostics but also for robotics.

    Professor Ick Soo Kim, who led the study, commented: “Considering high-tech sensors are currently being used to monitor robot motions, our proposed nanofiber-based superior piezoelectric sensors hold much potential not only for monitoring human movements but also in the field of humanoid robotics.”

    To make the adoption of these sensors easier, the research team will be focusing on improving the material’s electrical output properties so that flexible electronic components can be driven without the need for an external power source.

  • Dutch police trial AI-powered robot dog to safely inspect drug labs

    [Image: Spot robotic dogs have a range of applications (CTK/Alamy)]

    Dutch police are planning to use an autonomous robotic dog in drug lab raids to avoid placing officers at risk from criminals, dangerous chemicals and explosions. If tests in mocked-up scenarios go well, the artificial intelligence-powered robot will be deployed in real raids, say police.

    Simon Prins at Politie Nederland, the Dutch police force, has been testing and using robots in criminal investigations for more than two decades, but says they are only now growing capable enough to be practical for more…

  • Using AI and robotics to accelerate wearable technology

    Engineers at the University of Maryland (UMD) have developed a model that combines machine learning and collaborative robotics to overcome challenges in the design of materials used in wearable technology.

    This accelerated method for creating the aerogel materials used in wearable technology could automate design processes for new materials.

    Despite the materials’ seemingly simple nature, aerogel assembly is complex: researchers rely on time-intensive experiments and experience-based approaches to explore a vast design space.

    How robotics and machine learning help overcome the barriers

    To overcome these challenges, the research team combined robotics, machine learning algorithms, and materials science expertise to enable the accelerated design of aerogels with programmable mechanical and electrical properties.

    Their prediction model generates designs for sustainable products with a 95% accuracy rate.

    “Materials science engineers often struggle to adopt machine learning design due to the scarcity of high-quality experimental data,” explained Po-Yen Chen, who led the study.

    “Our workflow, which combines robotics and machine learning, not only enhances data quality and collection rates but also assists researchers in navigating the complex design space of wearable technology.”
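
    The article doesn’t specify the team’s model, but the loop it describes (robot-collected measurements train a property predictor, which then screens a huge design space) can be sketched generically. Everything below, from the regressor choice to the synthetic data and the three composition variables, is illustrative rather than UMD’s code.

      import numpy as np
      from sklearn.ensemble import RandomForestRegressor

      rng = np.random.default_rng(0)

      # Stand-in for robot-collected data: (nanosheet, cellulose, gelatin)
      # fractions mapped to a measured property, plus lab noise.
      X = rng.uniform(0, 1, size=(200, 3))
      y = 3 * X[:, 0] + 1.5 * X[:, 1] * X[:, 2] + rng.normal(0, 0.1, 200)

      model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

      # Screen a large candidate space and propose the best predicted recipe.
      candidates = rng.uniform(0, 1, size=(10_000, 3))
      best = candidates[model.predict(candidates).argmax()]
      print("suggested composition:", np.round(best, 2))

    The practical point is the division of labor: robots make data collection cheap and consistent enough that a model like this becomes trainable at all.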

    Accelerating aerogel design in wearable technology

    The team’s strong and flexible aerogels were made using conductive titanium nanosheets, as well as naturally occurring components such as cellulose (an organic compound found in plant cells) and gelatine (a collagen-derived protein found in animal tissue and bones).

    The team said their tool could also be expanded to meet other applications in aerogel design – such as green technologies used in oil-spill cleanup, sustainable energy storage, and thermal energy products like insulating windows.

    Eleonora Tubaldi, a collaborator on the study, said: “The blending of these approaches is putting us at the frontier of materials design with tailorable complex properties.

    “We foresee leveraging this new scaleup production platform to design aerogels with unique mechanical, thermal, and electrical properties for harsh working environments.”

  • The Unitree G1 Is a Short Humanoid Robot That Costs Just $16,000

    Does anyone want to buy a humanoid robot for $16,000? Unitree hopes you will. Meet the Unitree G1, a “Humanoid agent AI avatar,” aka a robot. If you haven’t heard of Unitree, it’s sort of the go-to “budget Chinese option” in the robot space. You’re going to have to deal with company promotional materials that are just barely written in English, but you get some impressive bang-for-your-buck robots. You may have seen the Spot knockoff Unitree Go2, a $1,600 robot dog that various resellers have equipped with a flamethrower or just straight-up military rifles.

    Unitree’s promo video shows some impressive capabilities for such a cheap robot. It can stand up on its own from a flat-on-the-floor position. Just as in the recent Boston Dynamics Atlas video, the G1 stands up in probably the strangest way possible. While lying face-up on the floor, the G1 brings its knees up, puts its feet flat on the floor, and then pushes up on its feet to form a tripod with the head still on the ground. From there, it uses a limbo-like move to lean its knees forward, raising its head and torso with core strength alone.

    [Image: Person holding the Unitree G1 robot (Photograph: Unitree)]

    The G1 is a budget robot, so the walk cycle is kind of primitive. It walks, stands, and “runs” in a permanent half-squat with its legs forward and knees bent all the time. The balance looks great though—at one point a person shows up and roughs up the robot a bit, kicking it in the back and punching it in the chest. In both cases, it absorbs the abuse with just a step back or two and keeps on trucking.

    So, is this humanoid robot … useful? Is it a toy? A big limitation in the real world is its height, a diminutive 4’2″ tall, which will make many tasks difficult. If you ask the usual “Can it do the dishes?” question (assuming the water won’t be an issue), you’re going to first have to hope it can reach the bottom of the sink. It’s going to struggle to reach the bottom shelf of a kitchen cabinet. Maybe you can teach it to use a stool. The small size is key to getting the price down, though. Unitree’s other humanoid robot, the H1, is adult-sized, but it’s also $90,000.

    As for other specs in the confusing and poorly put-together spec sheet, it has a 9,000-mAh battery that lasts two hours. The weight is listed as both 35 kg and 47 kg depending on where you look, so it’s somewhere in the 77- to 104-pound range. We do get real component model numbers for the vision system: an Intel RealSense D435 depth camera and a Livox-MID360 lidar puck. The lidar puck location is interesting. The face of the robot is clear glass, and the head is hollow aside from a, uh, “brain” part at the top of the head. The lidar puck is mounted to the underside of the brain and peers through the front of the face glass to see forward. Robot design is weird.

    The robot can run at 2 meters per second, or about 4.5 miles per hour. That’s around a slow jog. If “Arm Maximum Load” on the spec sheet is how much it can lift, it can lift 2 kg, or a paltry 4.4 pounds. The joints all have ranges of motion between 160 and 310 degrees. You’re going to have to do a lot of programming to make this do anything useful, but Unitree is not very forthcoming about how you’re supposed to do that. Presumably you’ll be using the same Unitree SDK the robot dogs use. You can also poke around the developer documentation for the Unitree H1 to get an idea of what you’ll be in for.
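
    For readers juggling the spec sheet’s mixed units, the conversions check out:

      KG_TO_LB = 2.20462
      MS_TO_MPH = 2.23694

      print(f"weight: {35 * KG_TO_LB:.0f} to {47 * KG_TO_LB:.0f} lb")   # 77 to 104 lb
      print(f"top speed: {2 * MS_TO_MPH:.1f} mph")                      # 4.5 mph
      print(f"arm payload: {2 * KG_TO_LB:.1f} lb")                      # 4.4 lb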
