Tag: augmented reality

  • Hands On With Google’s Gemini-Powered Smart Glasses, Android XR, and Project Moohan Headset


    Naturally, you can work in a mixed-reality environment with a connected Bluetooth keyboard and mouse, and you can put yourself in an immersive environment if you want to focus, or leave see-through mode turned on to make sure your coworkers aren’t taking photos and giggling while you wear a ridiculous headset to get stuff done. It wasn’t clear if you’d be able to connect the headset to a laptop to bring your work into mixed reality, a feature available on the Apple Vision Pro.

    Gemini in XR

    A tap on the side of the headset brings up an app launcher, and this is where you can toggle Gemini on if you want it to stay persistently active. Once it’s on, an icon at the top of the virtual space reminds you that everything you say and look at is being registered by Gemini.

    In see-through mode, you can walk up to an object and ask Gemini about it—a Googler demoing the headset (before I tried it) walked up to someone else wearing an FC Barcelona shirt and asked Gemini to find the “standings of this team.” Gemini quickly registered the team name and pulled up search results with league standings and scores from recent matches.

    You can ask Gemini anything like this and it will answer with visual results displayed in the headset. I asked it to “take me to Peru,” and it opened up a 3D version of Google Maps. I was able to move around and center on Lima, and in cities where Maps already has a lot of 3D models, you can explore areas in greater detail. You can keep talking to Gemini in these experiences, so I asked questions such as when would be the best time to visit and got a prompt answer.

    Notifications and Google Maps navigation are two of the app interactions that currently work.

    Courtesy of Google

    In another example, I peeked inside a restaurant in New York City to take a virtual tour of the space. Google says it can use AI to stitch together images of a venue’s interior and display it so that it feels like you’re there. It did a pretty good job, and I asked Gemini if the place takes reservations, without having to specifically say the name, because I was staring at the name of the restaurant. It does take reservations, but Gemini couldn’t actually make one for me. (That integration might come later.)

    Next, I watched a few videos on YouTube, where 2D content looks sharp and colorful. Stereoscopic content was even better; my senses felt surrounded. I watched some hikers walking along a trail and asked Gemini where this all was, and it said, “New Zealand.” I wasn’t able to verify that, but it looked like the right answer. I watched some more spatialized playback of 2D videos as the virtual player added depth and layering to make them feel 3D. I hopped over to the Google TV app and enabled a “Cinema mode” to launch a virtual theater for watching movies and shows, just like on other VR headsets.

    Stereoscopic content on YouTube looks great.

    Courtesy of Google

    Circle to Search, the feature Google debuted earlier this year on Android phones, is also available in Android XR. Just walk up to a physical object near you, press the top button on the headset, and then pinch and draw a circle around the thing you want to know more about. You’ll get a Google Search page with results.

    Smart Glasses

    Project Moohan very much feels like Google and Samsung catching up to the rest of the VR market, though the Gemini integration gives their efforts a unique layer. However, I will admit I was far more excited to try the smart glasses, where Gemini feels like it could be even more helpful. They didn’t disappoint. I walked over to another room and there were several pairs of glasses in front of me. Some were sunglasses, others had clear lenses. Like the headset, you can get them loaded up with your prescription. Google did not provide a name for the prototype glasses.


    The glasses, which are currently unnamed, will come with clear and tinted lens options.

    Courtesy of Google


  • An Augmented Reality Program Can Help Patients Overcome Parkinson’s Symptoms


    In 2018, Tom Finn took his father, Nigel, to a physiotherapy appointment. Nigel was living with vascular dementia, which can present with symptoms similar to Parkinson’s disease, a progressive neurological disorder characterized by motor symptoms such as tremors, stiffness, and trouble balancing. He was losing the ability to walk.

    The physiotherapist told Finn about cue markers—colored lines laid on the floor that can help Parkinson’s patients overcome difficulty walking. Finn was unconvinced. He couldn’t see how some lines on the floor would help his father. But when they got home, he laid some colored exercise bands down in the kitchen and watched in amazement as his dad easily marched back and forth across them.

    The technique, called external cueing, works by using visual, auditory, or tactile prompts—colored tape on the ground, playing a metronome, or physical vibrations—to engage neural pathways not affected by the disease. “It can help people focus their attention and help them take that first step and overcome the freeze,” says Claire Bale, associate director of research at Parkinson’s UK, a research and support charity in the UK.

    While Finn—who worked in marketing and video production in London—was struck by the effectiveness of this simple intervention, he suspected something so basic could be taken much further. Augmented reality glasses from the likes of Magic Leap had just started coming to market, and he wondered whether they might be able to project virtual lines onto the ground to act as cues. He founded a startup, Strolll, to try to make that vision a reality.

    Two years later, Strolll had no staff and about £50 in the bank, according to Jorgen Ellis. Ellis, a New Zealander with a background in furniture startups, had come to the UK looking for his next venture and wanted to get involved with something he felt passionate about. His grandfather had lived with Parkinson’s for over a decade, and when he met Finn through a mutual contact, he immediately saw the promise of the technology. He came onboard as CEO and started by trying to demonstrate that AR-based cueing was scientifically valid.

    Ellis and Finn soon found a group of academics at VU University in Amsterdam, led by Melvyn Roerdink, who were working on something similar. Strolll acquired their intellectual property, and with Roerdink on board as chief innovation officer they began to develop and test the technology, now called Reality DTx.

    Instead of physical bands like Finn used, Strolll’s AR software simulates colored lines on the floor in front of the wearer, with each line disappearing as they clear it. A clinical trial (supported by Strolll) confirmed the cueing technology was feasible and found promising outcomes.

    It could also help with rehabilitation exercises amid a shortage of physiotherapists: The software includes AR games like whack-a-mole and basketball, but designed around functional movements that help people with Parkinson’s. Mark Ross—who was diagnosed with Parkinson’s eight years ago at the age of 36 and is now Strolll’s head of brand and creative strategy—says these games can help overcome the apathy and depression that are also symptoms of the disease. “You might know that you’ve got to exercise … but that’s not going to help you get off your chair,” he says. The fact that it’s gamified makes doing the exercises much more alluring.

    The Magic Leap headset the software runs on costs around £3,000 ($3,800), and Strolll charges upwards of £300 a month for its services—but Ellis argues this is more cost-effective than 30 half-hour sessions of in-person physical therapy. Ultimately, the company’s goal is to be the “most used rehabilitation software in the world,” says Ellis. They even have a specific timeline in mind: 7 million minutes of rehab with the Strolll device in a week by New Year’s Eve 2029. By then, Ellis hopes Strolll could be in use for all kinds of neurological conditions, from stroke to multiple sclerosis. There is, he says, an “almost unlimited opportunity.”

    This article appears in the January/February 2025 issue of WIRED UK magazine.


  • Snap’s AR Spectacles Aren’t as Fancy as Meta’s Orion—but at Least You Can Get Them


    At a demo, one game developer showed me a game his company built for the Spectacles. It tracks how far you walk and overlays a gamified grid over the top of your surroundings. As you walk, you collect coins that add up over your route. RPG-style enemies will pop up occasionally too, which you can then fight off with an AR sword that you wield by waving your hand around in real life. You have to hold the sword out directly in front of you in order to keep it within the confines of that narrow field of view, though, so that means walking with a stiff, outstretched arm. The pitch is that you can play this game while walking, which seems to me like a good way to accidentally whack somebody else walking on the sidewalk or get hurt when you chase a coin into traffic.

    Snap encourages wearers to avoid using AR that blocks their vision at times when they shouldn’t be distracted, and to pay attention to their surroundings. But the Spectacles currently have no safeguards that pop up a warning when something is in the way, or that prevent people from using the glasses while driving or operating heavy machinery.

    People have been grievously injured while distractedly playing Pokémon Go, but Snap says this is a different use case. Holding your phone directly in front of you to catch a rare Snorlax is a problem because then you’re blocking your vision with a device. The Spectacles let you see the real world at all times, even through the augmented images in front of you. That said, I found that having a hologram in the middle of my vision can definitely be a distraction. When I tried out the walking game, my eyes focused more on the little cartoon collectibles floating around than the actual ground ahead of me.

    This might not be a problem while the Specs are solely in the hands of a few developers. But Snap is moving quickly, and it also wants to appeal to a wider array of buyers, likely in an effort to build up its tech before its rivals can run away with the AR prize.

    After all, Meta’s AR efforts seem to be further along than Snap’s: lighter frames, more robust AI on the back end, and an ever-so-slightly less off-putting look. But there are some key differences in how the companies are trying to push their burgeoning tech forward. Meta’s Orion glasses are actually controlled by three devices—the glasses on your face, a gesture-sensing wristband, and a large puck, about the size of a portable charger, that does the bulk of the processing for all the software features. Snap’s Spectacles, by contrast, are packed into a single device. That means they are bigger and heavier than the Meta glasses, but also that users won’t have to carry around extra pieces of equipment when the glasses finally make their way into the real world.

    “We think it’s interesting that one of the biggest players in virtual reality agrees with us that the future is wearable, see-through, immersive AR,” Myers says. “Spectacles are quite different from the Orion prototype. They’re unique in that they are real immersive AR glasses that are available now, and Lens Studio developers are already building amazing experiences. Spectacles are completely standalone, with no extra puck or other devices required, and are built on a foundation of proven, commercialized technology that can be produced at scale.”

    Snap’s goal is to make its Spectacles intuitive, easy to use, and easy to wear. It’s going to take a while to get there, but they’re well on their way. All Snap has to do is shave off some weight. Maybe add some color. And keep people from wandering into traffic.


  • Roundtables: What’s Next for Mixed Reality: Glasses, Goggles, and More


    Recorded on November 19, 2024


    Speakers: Mat Honan, Editor in Chief, and James O’Donnell, AI hardware reporter.

    We are barreling toward the next big consumer device category: smart glasses. After years of trying, augmented-reality specs are at last a thing. Meta recently showed off its Orion smart glasses, and Snap has introduced its second-generation pair. The Pentagon is also working on mixed-reality headsets that can be used on the battlefield. Join MIT Technology Review editor in chief Mat Honan and AI hardware reporter James O’Donnell for a conversation about where our AR experiences are heading.

    Related Coverage

    • Palmer Luckey on the Pentagon’s future of mixed reality
    • Here’s what I made of Snap’s new augmented-reality Spectacles.
    • The coolest thing about smart glasses is not the AR. It’s the AI.


  • Meta Missed Out on Smartphones. Can Smart Glasses Make Up for It?


    Meta has dominated online social connections for the past 20 years, but it missed out on making the smartphones that primarily delivered those connections. Now, in a multiyear, multibillion-dollar effort to position itself at the forefront of connected hardware, Meta is going all in on computers for your face.

    At its annual Connect developer event today in Menlo Park, California, Meta showed off its new, more affordable Meta Quest 3S virtual reality headset and its improved, AI-powered Ray-Ban Meta smart glasses. But the headliner was Orion, a prototype pair of holographic display glasses that chief executive Mark Zuckerberg said have been in the works for 10 years.

    Zuckerberg emphasized that the Orion glasses—which are available only to developers for now—aren’t your typical smart display. And he made the case that these kinds of glasses will be so interactive that they’ll usurp the smartphone for many needs.

    “Building this display is different from every other screen you’ve ever used,” Zuckerberg said on stage at Meta Connect. Meta chief technology officer Andrew Bosworth had previously described this tech as “the most advanced thing that we’ve ever produced as a species.”

    The Orion glasses, like a lot of heads-up displays, look like the fever dream of techno-utopians who have been toiling away in a highly secretive place called “Reality Labs” for the past several years. One WIRED reporter noted that the thick black glasses looked “chunky” on Zuckerberg.

    As part of the on-stage demo, Zuckerberg showed how Orion glasses can be used to project multiple virtual displays in front of someone, respond quickly to messages, video chat with someone, and play games. In the messages example, Zuckerberg noted that users won’t even have to take out their phones. They’ll navigate these interfaces by talking, tapping their fingers together, or by simply looking at virtual objects.

    There will also be a “neural interface” built in that can interpret brain signals, using a wrist-worn device that Meta first teased three years ago. Zuckerberg didn’t elaborate on how any of this will actually work or when a consumer version might materialize. (He also didn’t get into the various privacy complications of connecting this rig and its visual AI to one of the world’s biggest repositories of personal data.)

    He did say that the imagery that appears through the Orion glasses isn’t pass-through technology—where external cameras show wearers the real world—nor is it a conventional display or screen showing a virtual world. It’s a “new kind of display architecture,” he said, that uses projectors in the arms of the glasses to beam light into waveguides in the lenses, which then reflect that light into the wearer’s eyes to create volumetric imagery. Meta designed this technology itself, he said.

    The idea is that the images don’t appear as flat, 2D graphics in front of your eyes but that the virtual images now have shape and depth. “The big innovation with Orion is the field of view,” says Anshel Sag, principal analyst at Moor Insights & Strategy, who was in attendance at Meta Connect. “The field of view is 72 degrees, which makes it much more engaging and useful for most applications, whether gaming, social media, or just content consumption. Most headsets are in the 30- to 50-degree range.”


  • Meta Teaches Its Ray-Ban Smart Glasses Some New AI Tricks


    The Ray-Ban Meta glasses are the first real artificial intelligence wearable success story. In fact, they are actually quite good. They’ve got that chic Ray-Ban styling, meaning they don’t look as goofy as some of the bulkier, heavier attempts at mixed reality face computers. The on-board AI agent can answer questions, and even identify what you’re looking at using the embedded cameras. People also love using voice commands to capture photos and videos of whatever is right in front of them without whipping out their phone.

    Soon, Meta’s smart glasses are getting some more of these AI-powered voice features. Meta CEO Mark Zuckerberg announced the newest updates to the smart glasses’ software at his company’s Meta Connect event today.

    “The reality is that most of the time you’re not using smart functionality, so people want to have something on their face that they’re proud of and that looks good and that’s, you know, designed in a really nice way,” Zuckerberg said at Connect. “So they’re great glasses. We keep updating the software and building out the ecosystem and they keep on getting smarter and capable of more things.”

    The company also used Connect to announce its new Meta Quest 3S, a more budget-friendly version of its mixed reality headsets. It also unveiled a host of other AI capabilities across its various platforms, with new features being added to its Meta AI and Llama large language models.

    The new Ray-Ban Meta Headliner glasses in Caramel.

    Courtesy of Meta

    The new Ray-Ban Meta Wayfarer glasses in Shiny Black.

    Courtesy of Meta

    As far as the Ray-Bans go, Meta isn’t doing too much to mess with a good thing. The smart spectacles got an infusion of AI tech earlier this year, and now Meta is adding more capabilities to the pile, though the enhancements here are pretty minimal. You can already ask Meta AI a question and hear its responses directly from the speakers embedded in the frames’ temple pieces. Now there are a few new things you can ask or command it to do.

    Probably the most impressive is the ability to set reminders. You can look at something while wearing the glasses and say, “Hey, remind me to buy this book next week,” and the glasses will understand what the book is, then set a reminder. In a week, Meta AI will tell you it’s time to buy that book.


    Meta says live transcription services are coming to the glasses soon, meaning people speaking in different languages could see transcribed speech in the moment—or at least in a somewhat timely fashion. It’s not clear exactly how well that will work, given that the Meta glasses’ previous written translation abilities have proven to be hit or miss.


    There are new frame colors and lens colors being added, and customers now have the option to add transition lenses that increase or decrease their shading depending on the current level of sunlight.

    Meta hasn’t said exactly when these additional AI features will be coming to its Ray-Bans, except that they will arrive sometime this year. With only three months of 2024 left, that means very soon.


  • Meta Connect 2024: How to Watch and What to Expect


    Meta Connect, the big developer event and hardware showcase from the company that runs Facebook and Instagram, is kicking off next week. Meta is likely to show off its new VR and mixed-reality technology, put a shiny polish on its meandering metaverse ambitions, and delve into all the fresh ways it plans to squeeze artificial intelligence into every crevice of its devices and services.

    The event takes place on Wednesday September 25, starting at 10 am Pacific time. The keynote address, where most of the new stuff will be announced, will be livestreamed. The host for the event will be Meta CEO and newly minted cool guy Mark Zuckerberg. Zuck’s hour-long presentation will be followed by a developer-focused address at 11 am led by Meta CTO and Reality Labs chief Andrew Bosworth. You can watch the events on the Meta Connect website or on Meta’s YouTube channel. And yes, you can also watch it in VR in Meta Horizon.

    The focus of the event will likely be a fusion of Meta’s mixed-reality efforts and its AI ambitions across its product line. Like any tech event, there are bound to be surprises. Here are the big things to look out for.

    Blurry MetaVision

    The one thing Meta likely won’t be announcing is a very expensive VR headset. It’s a move informed by where the mixed-reality-device market is right now—and whether people actually want to spend big to buy in. Instead, rumors abound about a so-called Meta Quest 3S, a headset that could be a cheaper version of the Meta Quest 3 with pared-back features.

    Meta was briefly the bigwig in the AR/VR space 10 years ago when the company (then Facebook) bought the VR firm Oculus. Facebook later changed its name to Meta and sank $45 billion into its vision of a digital universe that most people just don’t seem to give much of a damn about. Workplaces aren’t using Meta’s Horizon Workrooms that much—we’re all still on Zoom—and despite the initial bouts of expensive corporate land grabs for digital real estate, users aren’t exactly eager to move into the metaverse.

    Other companies have struggled to find their virtual footing. Apple released its first mixed-reality headset, the $3,500 Apple Vision Pro, in February. Since then, the product has been regarded as a rare misstep for the company, or at least very clearly a first-generation product not intended for the masses. The device didn’t sell very well and was widely criticized as being an expensive, heavy, and ultimately lonely experience. (Apple mentioned the Vision Pro only once, in passing, at its optimistic iPhone announcement event on September 9.)

    Had the Vision Pro’s, well, vision panned out, Meta may have been more inclined to pursue the pricy premium category of VR headset. In August, The Information reported that Meta seems to have abandoned—or at least delayed—plans to reveal an update to its Meta Quest Pro that would have gone up against Apple’s Vision Pro. Bosworth, Meta’s CTO, responded to that news on Meta’s Threads platform and insisted the move is not that big of a deal, but rather a natural part of the company’s device iterations. Still, it is a move that makes sense in the aftermath of the Apple Vision Pro fizzling out.


  • Palmer Luckey Is Bringing Anduril Smarts to Microsoft’s Military Headset


    When Palmer Luckey was hacking together virtual reality headsets at his startup Oculus VR in the mid-2010s, he would sometimes imagine a future in which US soldiers used the technology to sharpen their battlefield senses.

    That vision is now virtually a reality after a deal that will bring software from his defense startup, Anduril, to a US Army head-mounted display developed by Microsoft.

    “The idea is to enhance soldiers,” Luckey tells WIRED over Zoom from his home in Newport Beach, California. “Their visual perception, audible perception—basically to give them all the vision that Superman has, and then some, and make them more lethal.”

    Luckey cofounded Anduril in 2017, after selling Oculus VR to Facebook for a reported $2 billion. His new company set out to challenge incumbent defense contractors by moving swiftly and efficiently, focusing more on software, and adapting technologies from the tech industry for military use.

    While known primarily for drones and air defenses, Anduril’s core offering is Lattice, a suite of software that powers those tools and a platform that can integrate with third-party systems. With today’s announcement, Lattice will be implemented in the Integrated Visual Augmentation System headset. Developed by Microsoft for the US military in 2021 and based on the company’s HoloLens system, IVAS is an augmented-reality display that blends virtual information with a user’s view of the real world.

    Lattice will surface a lot more live information—pulled from drones, ground vehicles, or aerial defense systems—for soldiers wearing IVAS. This would include data showing the movement of drones and loitering munitions, electronic warfare attacks, and the activities of autonomous systems, Anduril says. It could alert them to incoming drones beyond their visual range that have been detected by an air defense system, for instance.

    Luckey notes that he was far from the first person to envision such futuristic combat scenarios. As is often the case, he drifts between science fiction and reality without much pause. “This is a classic sci-fi concept,” Luckey says. “Robert Heinlein was the one who pioneered the application of heads-up displays as applied to infantry in the 1950s novel Starship Troopers.”

    The Anduril cofounder certainly looks like a new kind of defense tech executive, wearing his customary Hawaiian shirt and sporting a bold hairstyle combo of both a mullet and a goatee. He is, however, quite confident in his ability to shake things up. “I am one of the smartest people in the VR industry, I think,” he says. “And if that sounds arrogant, remember that it takes arrogance to start a company like Anduril.”

    At the time of Anduril’s founding, some people scoffed at the idea of Silicon Valley engineers mastering military technology. But with the Pentagon increasingly keen on low-cost, autonomous, and software-defined systems, Anduril has made a name for itself. The startup recently beat several major companies, including Boeing, Lockheed Martin, and Northrop Grumman, to win a contract to develop an experimental “collaborative” robotic fighter jet for the US Air Force and Navy.


  • I Wore Meta Ray-Bans in Montreal to Test Their AI Translation Skills. It Did Not Go Well


    Imagine you’ve just arrived in another country, you don’t speak the language, and you stumble upon a construction zone. The air is thick with dust. You’re tired. You still stink like airplane. You try to ignore the jackhammers to decipher what the signs say: Do you need to cross the street, or walk up another block, or turn around?

    I was in exactly such a situation this week, but I came prepared. I’d flown to Montreal to spend two days testing the new AI translation feature on Meta’s Ray-Ban smart sunglasses. Within 10 minutes of setting out on my first walk, I ran into a barrage of confusing orange detour signs.

    The AI translation feature is meant to give wearers a quick, hands-free way to understand text written in foreign languages, so I couldn’t have devised a better pop quiz on how it works in real time.

    As an excavator rumbled, I looked at a sign and started asking my sunglasses to tell me what it said. Before I could finish, a harried Quebecois construction worker started shouting at me and pointing northwards, and I scurried across the street.

    Right at the start of my AI adventure, I’d run into the biggest limitation of this translation software—it doesn’t, at the moment, tell you what people say. It can only parse the written word.

    I already knew that the feature was writing-only at the moment, so that was no surprise. But soon, I’d run into its other less-obvious constraints. Over the next 48 hours, I tested the AI translation on a variety of street signs, business signs, advertisements, historical plaques, religious literature, children’s books, tourism pamphlets, and menus—with wildly varied results.

    Sometimes it was competent, like when it told me that the book I picked up for my son, Trois Beaux Bébés, was about three beautiful babies. (Correct.) It told me repeatedly that ouvert meant “open,” which, to be frank, I already knew, but I wanted to give it some layups.

    Other times, my robot translator was not up to the task. It told me that the sign for the notorious adult movie theater Cinéma L’Amour translated to … “Cinéma L’Amour.” (F for effort—Google Translate at least changed it to “Cinema Love.”)

    At restaurants, I struggled to get it to read me every item on a menu. For example, instead of telling me all of the different burger options at a brew pub, it simply told me that there were “burgers and sandwiches,” and refused to get more specific despite my wheedling.

    When I went to an Italian spot the next night, it similarly gave me a broad summary of the offerings rather than breaking them down in detail—I was told there were “grilled meat skewers,” but not, for example, that there were duck confit, lamb, and beef options, or how much they cost.

    All in all, right now, the AI translation is more of a temperamental party trick than a genuinely useful travel tool for foreign climes.

    How It Works (or Doesn’t)

    To use the AI translation, a glasses-wearer needs to say the following magic words: “Hey Meta, look at …” and then ask it to translate what it’s looking at.

    The skyline of Montreal, Canada.

    Courtesy of Kate Knibbs


  • Sightful Spacetop G1: Specs, Features, Release Date, Price


    The Spacetop G1 has a similar travel mode that will let you keep using the machine when you’re moving in some type of transportation. I couldn’t leave the building with the demo unit, so Berliner hilariously steered the office chair I was sitting on around the room, and I watched as my virtual screens indeed stayed right above my keyboard. There was a good deal of jitter, but Berliner says it was due to the, you know, fact that I was being rolled around in an office chair. I’ll try it out properly on a plane or car in the fall.

    It wouldn’t be a new hardware launch in 2024 without mentioning AI. Berliner says there will be an AI button on the keyboard in the final version, and it’ll be able to offer context based on what’s on your virtual screens or what’s happening in the physical space around you. He didn’t say much more about it, but expect a few announcements around AI over the coming months.

    The AR Laptop

    When I think of augmented reality computing, I think of sleek glasses and … nothing else. With the Spacetop G1, you still have to carry a laptop-sized machine—one that weighs a little over 3 pounds (1.4 kg), with a wire coming out the center of the keyboard and running up behind your ear. It’s not quite the lightweight computing future I was thinking of.

    That said, when I used the Apple Vision Pro, one of my favorite features was using it for work, especially while traveling. Being able to sit in a cramped airplane seat and recreate the effect of multiple screens around me made me super productive, and I wrote a 2,000-word story in the air. There’s no screen forced to bend to the will of the reclining chair in front of you.

    I prefer having the custom keyboard and trackpad solution here over the afterthought that is Apple’s input system (a Magic Keyboard and a Magic Trackpad), though I do wish it were all wireless. The G1’s glasses are also much lighter than the Vision Pro, which was tiring after several hours of wear.

    However, I’m not sure if most people are ready or even interested in wearing a computer on their faces. Apple’s Vision Pro had a lackluster launch and it’s barely a point of conversation anymore, mere months after its release. It doesn’t help that the Vision Pro was $3,499, but the Spacetop G1 isn’t cheap. It starts at $1,900—you can buy a more powerful laptop for that kind of money.

    Sightful is venture-capital-funded, having raised around $61 million to date, and it was founded by ex-Magic Leap executives. Magic Leap itself was far from a consumer success; it pivoted to serving the enterprise sector, a path I wouldn’t rule out for Sightful either.

    While the Spacetop G1 very much seems like the kind of product you’d see me using at a tradeshow like CES, the reality here is that it might take a few more generations before it starts to appeal to most people.


