Tag: i/o

  • Google Search Is Growing Up


Google held its annual I/O developer event this week. The company gathered software developers, business partners, and folks from the technology press at Shoreline Amphitheatre in Mountain View, California, just down the road from Google corporate headquarters, for a two-hour presentation. There were Android announcements, there were chatbot announcements. Somebody even blasted rainbow-colored robes into the crowd using a T-shirt cannon. But most of the talk at I/O centered on artificial intelligence. Nearly everything Google showed off at the event was enhanced in some way by the company’s Gemini AI model. And some of the most striking announcements came in the realm of AI-powered search, an area where Google is poised to upend everyone’s expectations about how to find things on the internet—for better or for worse.

    This week, WIRED senior writer Paresh Dave joins us to unpack everything Google announced at I/O, and to help us understand how search engines will evolve for the AI era.

    Show Notes

    Read our roundup of everything Google announced at I/O 2024. Lauren wrote about the end of search as we know it. Will Knight got a demo of Project Astra, Google’s visual chatbot. Julian Chokkattu tells us about all the new features coming to Android phones, Wear OS watches, and Google TVs.

    Recommendations

    Michael Calore is @snackfight. Lauren is @LaurenGoode. Bling the main hotline at @GadgetLab. The show is produced by Boone Ashworth (@booneashworth). Our theme music is by Solar Keys.

    How to Listen

    You can always listen to this week’s podcast through the audio player on this page, but if you want to subscribe for free to get every episode, here’s how:

    If you’re on an iPhone or iPad, open the app called Podcasts, or just tap this link. You can also download an app like Overcast or Pocket Casts, and search for Gadget Lab. If you use Android, you can find us in the Google Podcasts app just by tapping here. We’re on Spotify too. And in case you really need it, here’s the RSS feed.




  • Everything Google Announced at I/O 2024: Gemini, Search, Project Astra, Scam Detection


Google also showed off its new DJ Mode in MusicFX, an AI music generator that lets musicians generate song loops and samples based on prompts. (DJ Mode was shown off during the eccentric and delightful performance by musician Marc Rebillet that led into the I/O keynote.)

    An Evolution in Search

Google began humbly as a search-focused company, and it remains the most prominent player in the search industry (despite some very good, slightly more private alternatives). The company’s newest AI updates are a seismic shift for its core product.

    New contextual awareness abilities help Google search deliver more relevant results.

    Courtesy of Google

    Some new capabilities include AI-organized search, which allows for more tightly presented and readable search results, as well as the ability to get better responses from longer queries and searches with photos.

    We also saw AI overviews, which are short summaries that pool information from multiple sources to answer the question you entered in the search box. These summaries appear at the top of the results so you don’t even need to go to a website to get the answers you’re seeking. These overviews are already controversial, with publishers and websites fearing that a Google search that answers questions without the user needing to click any links may spell doom for sites that already have to go to extreme lengths to show up in Google’s search results in the first place. Nonetheless, these newly enhanced AI overviews are rolling out to everyone in the US starting today.

A new feature called multi-step reasoning lets you find several layers of information about a topic when you’re searching for things with some contextual depth. Google used planning a trip as an example, showing how searching in Maps can help find hotels and set transit itineraries. It then went on to suggest restaurants and help with meal planning for the trip. You can deepen the search by looking for specific types of cuisine, or vegetarian options. All of this info is presented to you in an organized way.

    Advanced visual search in Lens.

    Courtesy of Google

    Lastly, we saw a quick demo of how users can rely on Google Lens to answer questions about whatever they’re pointing their camera at. (Yes, this sounds similar to what Project Astra does, but these capabilities are being built into Lens in a slightly different way.) The demo showed a woman trying to get a “broken” turntable to work, but Google identified that the record player’s tonearm simply needed adjusting, and it presented her with a few options for video and text-based instructions on how to do just that. It even properly identified the make and model of the turntable through the camera.

    WIRED’s Lauren Goode talked with Google head of search Liz Reid about all the AI updates coming to Google Search, and what it means for the internet as a whole.

    Security and Safety

    Scam Detection in action.

    Photograph: Julian Chokkattu

    One of the last noteworthy things we saw in the keynote was a new scam detection feature for Android, which can listen in on your phone calls and detect any language that sounds like something a scammer would use, like asking you to move money into a different account. If it hears you getting duped, it’ll interrupt the call and give you an on-screen prompt suggesting that you hang up. Google says the feature works on the device, so your phone calls don’t go into the cloud for analysis, making the feature more private. (Also check out WIRED’s guide to protecting yourself and your loved ones from AI scam calls.)
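Google hasn’t published how the feature’s detection works under the hood. Purely as an illustration of the general idea described above—matching call language against patterns typical of scam scripts, entirely on the device so nothing is sent to the cloud—here is a toy Python sketch. The patterns and function names are invented for this example:

```python
import re

# Hypothetical phrase patterns a scam-call classifier might flag.
# A real system would use a speech model, not a keyword list.
SCAM_PATTERNS = [
    r"move (your )?money to (a |another )?(different |safe )?account",
    r"pay (with |in )?gift cards?",
    r"your account (has been|is) compromised",
]

def looks_like_scam(transcript: str) -> bool:
    """Return True if the call transcript matches a known scam pattern.

    Runs entirely locally on the transcript string; no network calls.
    """
    text = transcript.lower()
    return any(re.search(pattern, text) for pattern in SCAM_PATTERNS)

# Example: this line would trigger an on-screen "hang up" prompt.
print(looks_like_scam("Please move your money to a safe account now"))
```

A production version would do this on streaming speech with an on-device model rather than regexes, but the privacy property is the same: the transcript never leaves the phone.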

Google has also expanded SynthID, its watermarking tool meant to distinguish media made with AI, which can help you detect misinformation, deepfakes, or phishing spam. The tool embeds a watermark that can’t be seen with the naked eye but can be detected by software that analyzes the pixel-level data in an image. The new updates expand the feature to scan content in the Gemini app, on the web, and in Veo-generated videos. Google says it plans to release SynthID as an open source tool later this summer.
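SynthID’s actual algorithm is not public. To make the “invisible to the eye, visible to software” idea concrete, here is a deliberately simple least-significant-bit scheme in Python—a classic textbook technique, not Google’s method—where watermark bits hide in the lowest bit of each pixel value, shifting brightness by at most 1:

```python
def embed(pixels, bits):
    """Write watermark bits into the least significant bit of each pixel.

    Clearing the low bit (p & ~1) then OR-ing in the watermark bit
    changes each pixel value by at most 1, invisible to the eye.
    """
    return [(p & ~1) | b for p, b in zip(pixels, bits)]

def extract(pixels, n):
    """Read the first n watermark bits back out of the pixel data."""
    return [p & 1 for p in pixels[:n]]

# Toy 8-pixel grayscale "image" and an 8-bit watermark.
image = [200, 201, 199, 198, 202, 200, 197, 203]
mark = [1, 0, 1, 1, 0, 0, 1, 0]

stamped = embed(image, mark)
assert extract(stamped, 8) == mark          # software recovers the mark
assert all(abs(a - b) <= 1 for a, b in zip(image, stamped))  # eye can't
```

Real watermarks like SynthID are far more robust—they survive cropping, compression, and filters, which a bare LSB scheme does not—but the principle of encoding a signal in pixel-level data that only analysis software can read is the same.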


  • With Gemini on Android, Google Points to Mobile Computing’s Future—and Past


    Nearly a decade ago, Google showed off a feature called Now on Tap in Android Marshmallow—tap and hold the home button and Google will surface helpful contextual information related to what’s on the screen. Talking about a movie with a friend over text? Now on Tap could get you details about the title without having to leave the messaging app. Looking at a restaurant in Yelp? The phone could surface OpenTable recommendations with just a tap.

I was fresh out of college then, and these improvements felt exciting and magical. The feature’s ability to understand what was on the screen and predict the actions you might want to take felt future-facing, and it was one of my favorite Android features. It slowly morphed into Google Assistant, which was great in its own right, but not quite the same.

    Today, at Google’s I/O developer conference in Mountain View, California, the new features Google is touting in its Android operating system feel like the Now on Tap of old—allowing you to harness contextual information around you to make using your phone a bit easier. Except this time, these features are powered by a decade’s worth of advancements in large language models.

    “I think what’s exciting is we now have the technology to build really exciting assistants,” Dave Burke, vice president of engineering on Android, tells me over a Google Meet video call. “We need to be able to have a computer system that understands what it sees and I don’t think we had the technology back then to do it well. Now we do.”

    I got a chance to speak with Burke and Sameer Samat, president of the Android ecosystem at Google, about what’s new in the world of Android, the company’s new AI assistant Gemini, and what it all holds for the future of the OS. Samat referred to these updates as a “once-in-a-generational opportunity to reimagine what the phone can do, and to rethink all of Android.”

    Circle to Search … Your Homework

    The upgraded Circle to Search in action.

    Courtesy of Google

    It starts with Circle to Search, which is Google’s new way of approaching Search on mobile. Much like the experience of Now on Tap, Circle to Search—which the company debuted a few months ago—is more interactive than just typing into a search box. (You literally circle what you want to search on the screen.) Burke says, “It’s a very visceral, fun, and modern way to search … It skews younger as well because it’s so fun to use.”

    Samat claims Google has received positive feedback from consumers, but Circle to Search’s latest feature hails specifically from student feedback. Circle to Search can now be used on physics and math problems when a user circles them—Google will spit out step-by-step instructions on completing the problems without the user leaving the syllabus app.

    Samat made it clear Gemini wasn’t just providing answers but was showing students how to solve the problems. Later this year, Circle to Search will be able to solve more complex problems like diagrams and graphs. This is all powered by Google’s LearnLM models, which are fine-tuned for education.

    Gemini Gets More Contextual on Android

    Gemini is Google’s AI assistant that is in many ways eclipsing Google Assistant. Really—when you fire up Google Assistant on most Android phones these days, there’s an option to replace it with Gemini instead. So naturally, I asked Burke and Samat whether this meant Assistant was heading to the Google Graveyard.

    “The way to look at it is that Gemini is an opt-in experience on the phone,” Samat says. “I think obviously over time Gemini is becoming more advanced and is evolving. We don’t have anything to announce today, but there is a choice for consumers if they want to opt into this new AI-powered assistant. They can try it out and we are seeing that people are doing that and we’re getting a lot of great feedback.”
