Tag: chatgpt

  • What Scarlett Johansson v. OpenAI Could Look Like in Court

    It doesn’t matter whether a person’s actual voice is used in an imitation or not, Rothman says, only whether that audio confuses listeners. In the legal system, there is a big difference between imitation and simply recording something “in the style” of someone else. “No one owns a style,” she says.

    Other legal experts don’t see what OpenAI did as a clear-cut impersonation. “I think that any potential ‘right of publicity’ claim from Scarlett Johansson against OpenAI would be fairly weak given the only superficial similarity between the ‘Sky’ actress’ voice and Johansson, under the relevant case law,” Colorado law professor Harry Surden wrote on X on Tuesday. Frye, too, has doubts. “OpenAI didn’t say or even imply it was offering the real Scarlett Johansson, only a simulation. If it used her name or image to advertise its product, that would be a right-of-publicity problem. But merely cloning the sound of her voice probably isn’t,” he says.

    But that doesn’t mean OpenAI is necessarily in the clear. “Juries are unpredictable,” Surden added.

    Frye is also uncertain how any case might play out, because he says right of publicity is a fairly “esoteric” area of law. There are no federal right-of-publicity laws in the United States, only a patchwork of state statutes. “It’s a mess,” he says, although Johansson could bring a suit in California, which has fairly robust right-of-publicity laws.

    OpenAI’s chances of defending a right-of-publicity suit could be weakened by a one-word post on X—“her”—from Sam Altman on the day of last week’s demo. It was widely interpreted as a reference to Her and Johansson’s performance. “It feels like AI from the movies,” Altman wrote in a blog post that day.

    To Grimmelmann at Cornell, those references weaken any potential defense OpenAI might mount claiming the situation is all a big coincidence. “They intentionally invited the public to make the identification between Sky and Samantha. That’s not a good look,” Grimmelmann says. “I wonder whether a lawyer reviewed Altman’s ‘her’ tweet.” Combined with Johansson’s revelations that the company had indeed attempted to get her to provide a voice for its chatbots—twice over—OpenAI’s insistence that Sky is not meant to resemble Samantha is difficult for some to believe.

    “It was a boneheaded move,” says David Herlihy, a copyright lawyer and music industry professor at Northeastern University. “A miscalculation.”

    Other lawyers see OpenAI’s behavior as so manifestly goofy they suspect the whole scandal might be a deliberate stunt—that OpenAI judged it could trigger controversy by going forward with a sound-alike after Johansson declined to participate, but that the attention it would receive seemed to outweigh any consequences. “What’s the point? I say it’s publicity,” says Purvi Patel Albers, a partner at the law firm Haynes Boone who often takes intellectual property cases. “The only compelling reason—maybe I’m giving them too much credit—is that everyone’s talking about them now, aren’t they?”

  • OpenAI’s latest blunder shows the challenges facing Chinese AI models

    In fact, among the few long Chinese tokens in GPT-4o that aren’t either pornography or gambling nonsense, two are “socialism with Chinese characteristics” and “People’s Republic of China.” The presence of these phrases suggests that a significant part of the training data actually is from Chinese state media writings, where formal, long expressions are extremely common.
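
    The GPT-4o tokenizer ships in OpenAI’s open-source tiktoken library, so readers can poke at its vocabulary themselves. Below is a minimal sketch of such an audit; the five-character cutoff and the CJK-only test are my own illustrative choices, not the exact methodology behind the findings above.

    ```python
    # Scan GPT-4o's vocabulary (the o200k_base encoding) for long tokens
    # made up entirely of Chinese characters.
    import tiktoken

    enc = tiktoken.encoding_for_model("gpt-4o")  # requires a recent tiktoken release

    long_chinese_tokens = []
    for token_id in range(enc.n_vocab):
        try:
            text = enc.decode_single_token_bytes(token_id).decode("utf-8")
        except (KeyError, UnicodeDecodeError):
            continue  # skip unassigned IDs and partial UTF-8 byte sequences
        # Keep tokens of five or more characters, all CJK Unified Ideographs.
        if len(text) >= 5 and all("\u4e00" <= ch <= "\u9fff" for ch in text):
            long_chinese_tokens.append((token_id, text))

    print(f"{len(long_chinese_tokens)} long all-Chinese tokens found")
    for token_id, text in long_chinese_tokens[:20]:
        print(token_id, text)
    ```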

    OpenAI has historically been very tight-lipped about the data it uses to train its models, and it probably will never tell us how much of its Chinese training database is state media and how much is spam. (OpenAI didn’t respond to MIT Technology Review’s detailed questions sent on Friday.)

    But it is not the only company struggling with this problem. People inside China who work in its AI industry agree there’s a lack of quality Chinese text data sets for training LLMs. One reason is that the Chinese internet used to be, and largely remains, divided up by big companies like Tencent and ByteDance. They own most of the social platforms and aren’t going to share their data with competitors or third parties to train LLMs. 

    In fact, this is also why search engines, including Google, kinda suck when it comes to searching in Chinese. Since WeChat content can only be searched on WeChat, and content on Douyin (the Chinese TikTok) can only be searched on Douyin, this data is not accessible to a third-party search engine, let alone an LLM. But these are the platforms where actual human conversations are happening, instead of some spam website that keeps trying to draw you into online gambling.

    The lack of quality training data is a much bigger problem than the failure to filter out the porn and general nonsense in GPT-4o’s token-training data. If there isn’t an existing data set, AI companies have to put in significant work to identify, source, and curate their own data sets and filter out inappropriate or biased content. 
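
    To make that curation step concrete, here is a hypothetical, minimal sketch of the kind of rule-based filter such a pipeline might start with. The keywords and threshold are invented for illustration; real pipelines layer deduplication and trained quality classifiers on top of crude rules like these.

    ```python
    # Toy keyword-density filter for gambling spam in a Chinese text corpus.
    GAMBLING_KEYWORDS = ["彩票", "赌场", "博彩"]  # "lottery", "casino", "betting"

    def looks_like_gambling_spam(document: str) -> bool:
        """Flag documents with a high density of gambling vocabulary."""
        hits = sum(document.count(kw) for kw in GAMBLING_KEYWORDS)
        return hits / max(len(document), 1) > 0.01  # crude density threshold

    corpus = [
        "欢迎来到全网最大彩票平台，彩票 赌场 博彩一站式服务",  # spam
        "今天的新闻讨论了人工智能训练数据的质量问题。",  # legitimate text
    ]
    cleaned = [doc for doc in corpus if not looks_like_gambling_spam(doc)]
    print(len(cleaned))  # 1: only the legitimate document survives
    ```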

    It doesn’t seem that OpenAI did that, which in fairness makes some sense, given that people in China can’t use its AI models anyway. 

    Still, there are many people living outside China who want to use AI services in Chinese. And they deserve a product that works properly as much as speakers of any other language do. 

    How can we solve the problem of the lack of good Chinese LLM training data? Tell me your idea at [email protected].

  • Scarlett Johansson Says OpenAI Ripped Off Her Voice for ChatGPT

    Last week OpenAI revealed a new conversational interface for ChatGPT with an expressive, synthetic voice strikingly similar to that of the AI assistant played by Scarlett Johansson in the sci-fi movie Her—only to suddenly disable the new voice over the weekend.

    On Monday, Johansson issued a statement claiming to have forced that reversal, after her lawyers demanded OpenAI clarify how the new voice was created.

    Johansson’s statement, relayed to WIRED by her publicist, claims that OpenAI CEO Sam Altman asked her last September to provide ChatGPT’s new voice but that she declined. She describes being astounded to see the company demo a new voice for ChatGPT last week that sounded like her anyway.

    “When I heard the release demo I was shocked, angered, and in disbelief that Mr. Altman would pursue a voice that sounded so eerily similar to mine that my closest friends and news outlets could not tell the difference,” the statement reads. It notes that Altman appeared to encourage the world to connect the demo with Johansson’s performance by tweeting out “her,” in reference to the movie, on May 13.

    Johansson’s statement says her agent was contacted by Altman two days before last week’s demo asking that she reconsider her decision not to work with OpenAI. After seeing the demo, she says she hired legal counsel to write to OpenAI asking for details of how it made the new voice.

    The statement claims that this led to OpenAI’s announcement Sunday in a post on X that it had decided to “pause the use of Sky,” the company’s name for the synthetic voice.

    Sky is one of several synthetic voices that OpenAI gave ChatGPT last September, but at last week’s event it displayed a much more lifelike intonation with emotional cues. The demo saw a version of ChatGPT powered by a new AI model called GPT-4o appear to flirt with an OpenAI engineer in a way that many viewers found reminiscent of Johansson’s performance in Her.

    When asked why OpenAI had decided to disable Sky, Niko Felix, an OpenAI spokesperson, referred WIRED to a blog post also from Sunday outlining the process the company went through to choose its voice. “Sky’s voice is not an imitation of Scarlett Johansson but belongs to a different professional actress using her own natural speaking voice,” the post says.

    “For now, we are pausing the use of Sky’s voice while we address some questions, but we hope to bring it back soon,” Felix said.

    The conflict with Johansson adds to OpenAI’s existing battles with artists, writers, and other creatives. The company is already defending a number of lawsuits alleging it inappropriately used copyrighted content to train its algorithms, including suits from The New York Times and authors including George R.R. Martin.

    Generative AI has made it much easier to create realistic synthetic voices, opening up new opportunities and threats. In January, voters in New Hampshire were bombarded with robocalls featuring a deepfaked voice message from Joe Biden. In March, OpenAI said that it had developed a technology that could clone someone’s voice from a 15-second clip, but the company said it would not release the technology because of how it might be misused.

  • OpenAI’s Long-Term AI Risk Team Has Disbanded

    In July last year, OpenAI announced the formation of a new research team that would prepare for the advent of supersmart artificial intelligence capable of outwitting and overpowering its creators. Ilya Sutskever, OpenAI’s chief scientist and one of the company’s cofounders, was named as the colead of this new team. OpenAI said the team would receive 20 percent of its computing power.

    Now OpenAI’s “superalignment team” is no more, the company confirms. That comes after the departures of several researchers involved, Tuesday’s news that Sutskever was leaving the company, and the resignation of the team’s other colead. The group’s work will be absorbed into OpenAI’s other research efforts.

    Sutskever’s departure made headlines because although he’d helped CEO Sam Altman start OpenAI in 2015 and set the direction of the research that led to ChatGPT, he was also one of the four board members who fired Altman in November. Altman was restored as CEO five chaotic days later after a mass revolt by OpenAI staff and the brokering of a deal in which Sutskever and two other company directors left the board.

    Hours after Sutskever’s departure was announced on Tuesday, Jan Leike, the former DeepMind researcher who was the superalignment team’s other colead, posted on X that he had resigned.

    Neither Sutskever nor Leike responded to requests for comment, and they have not publicly commented on why they left OpenAI. Sutskever did offer support for OpenAI’s current path in a post on X. “The company’s trajectory has been nothing short of miraculous, and I’m confident that OpenAI will build AGI that is both safe and beneficial” under its current leadership, he wrote.

    The dissolution of OpenAI’s superalignment team adds to recent evidence of a shakeout inside the company in the wake of last November’s governance crisis. Two researchers on the team, Leopold Aschenbrenner and Pavel Izmailov, were dismissed for leaking company secrets, The Information reported last month. Another member of the team, William Saunders, left OpenAI in February, according to an internet forum post in his name.

    Two more OpenAI researchers working on AI policy and governance also appear to have left the company recently. Cullen O’Keefe left his role as research lead on policy frontiers in April, according to LinkedIn. Daniel Kokotajlo, an OpenAI researcher who has coauthored several papers on the dangers of more capable AI models, “quit OpenAI due to losing confidence that it would behave responsibly around the time of AGI,” according to a posting on an internet forum in his name. None of the researchers who have apparently left responded to requests for comment.

    OpenAI declined to comment on the departures of Sutskever or other members of the superalignment team, or the future of its work on long-term AI risks. Research on the risks associated with more powerful models will now be led by John Schulman, who coleads the team responsible for fine-tuning AI models after training.

  • It’s Time to Believe the AI Hype

    Folks, when dogs talk, we’re talking Biblical disruption. Do you think that future models will do worse on the law exams?

    If nothing else, this week proves that the rate of AI progress isn’t slowing at all. Just ask the people building these models. “A lot of things have happened—internet, mobile,” says Demis Hassabis, cofounder of DeepMind and now Google’s AI czar, in a post-keynote chat at I/O. “AI is going maybe three or four times faster than those other revolutions. We’re in a period of 25 or 30 years of massive change.” When I asked Google search VP Liz Reid to name a big challenge, she didn’t say it was to keep the innovation going—instead, she cited the difficulty of absorbing the pace of change. “As the technology is early, the biggest challenge is about even what’s possible,” she says. “It’s understanding what the models are great at today, and what they are not great at but will be great at in three months or six months. The technology is changing so fast that you can get two researchers in the room who are working on the same project, and they’ll have totally different views when something is possible.”

    There’s universal agreement in the tech world that AI is the biggest thing since the internet, and maybe bigger. And when non-techies see the products for themselves, they most often become believers too. (Including Joe Biden, after a March 2023 demo of ChatGPT.) That’s why Microsoft is well along on a total AI reinvention, why Mark Zuckerberg is now refocusing Meta to create artificial general intelligence, why Amazon and Apple are desperately trying to keep up, and why countless startups are focusing on AI. And because all of these companies are trying to get an edge, the competitive fervor is ramping up new innovations at a frantic pace. Do you think it was a coincidence that OpenAI made its announcement a day before Google I/O?

    Skeptics might try to claim that this is an industry-wide delusion, fueled by the prospect of massive profits. But the demos aren’t lying. We will eventually become acclimated to the AI marvels unveiled this week. The smartphone once seemed exotic; now it’s an appendage no less critical to our daily life than an arm or a leg. At a certain point AI’s feats, too, may not seem magical any more. But the AI revolution will change our lives, and change us, for better or worse. And we haven’t even seen GPT-5 yet.

    Time Travel

    Sure, I could be wrong about AI. But consider the last time I made such a call. In 1995, I joined Newsweek—the same organ where Clifford Stoll had just dismissed the internet as a hoax—and at the end of the year argued of this new digital medium, “This Changes Everything.” Some of my colleagues thought I’d bought into overblown hype. Actually, reality exceeded my hyperbole.

    In 1995, the Internet ruled. You talk about a revolution? For once, the shoe fits. “In the long run it’s hard to exaggerate the importance of the Internet,” says Paul Maritz, a Microsoft VP. “It really is about opening communications to the masses.” And 1995 was the year that the masses started coming. “If you look at the numbers they’re quoting, with the Web doubling every 53 days, that’s biological growth, like a red tide or population of lemmings,” says Kevin Kelly, executive editor of WIRED. “I don’t know if we’ve ever seen technology exhibit that sort of growth.” In fact, there’s a raging controversy over exactly how many people regularly use the Net. A recent Nielsen survey pegged the number at an impressive 24 million North Americans. During the course of the year the discussion of the Internet ranged from sex to stock prices to software standards. But the most significant aspect of the Internet has nothing to do with money or technology, really. It’s us.

  • Prepare to Get Manipulated by Emotionally Expressive Chatbots

    The emotional mimicry of OpenAI’s new version of ChatGPT could lead AI assistants in some strange—even dangerous—directions.

  • OpenAI overtakes Google in race to build the future of AI, but who wants it?

    Google and OpenAI are vying to develop artificial intelligence models

    SOPA Images/LightRocket via Getty Images

    When OpenAI released its ChatGPT tool in November 2022, it was a shot across the bows of Google, with generative artificial intelligence promising a new way to access the world’s information beyond search engines. Since then, the rivalry between these firms has only grown, with both announcing new services this week. While there are signs that OpenAI is winning this race, is either company aiming for a future anyone actually wants?

    On 13 May, at a live demonstration event, …

  • OpenAI’s Chief AI Wizard, Ilya Sutskever, Is Leaving the Company

    Ilya Sutskever, cofounder and chief scientist at OpenAI, has left the company. The former Google AI researcher was one of the four board members who voted in November to fire OpenAI CEO Sam Altman, triggering days of chaos that saw staff threaten to quit en masse and Altman ultimately restored.

    Altman confirmed Sutskever’s departure Tuesday in a post on the social platform X. In the months after Altman’s return to OpenAI, Sutskever had rarely made public appearances for the company. On Monday, OpenAI showed off a new version of ChatGPT capable of rapid-fire, emotionally tinged conversation. Sutskever was conspicuously absent from the event, streamed from the company’s San Francisco offices.

    “OpenAI would not be what it is without him,” Altman wrote in his post on Sutskever’s departure. “I am happy that for so long I got to be close to such [a] genuinely remarkable genius, and someone so focused on getting to the best future for humanity.”

    Altman’s post announced that Jakub Pachocki, OpenAI’s research director, would be the company’s new chief scientist. Pachocki has been with OpenAI since 2017.

    In his own post on X, Sutskever acknowledged his departure and hinted at future plans. “After almost a decade, I have made the decision to leave OpenAI. The company’s trajectory has been nothing short of miraculous, and I’m confident that OpenAI will build AGI that is both safe and beneficial” under its current leadership team, he wrote. “I am excited for what comes next—a project that is very personally meaningful to me about which I will share details in due time.”

    Sutskever has not spoken publicly in detail about his role in the ejection of Altman last year, but after the CEO was restored he expressed regrets. “I deeply regret my participation in the board’s actions. I never intended to harm OpenAI,” he posted on X in November.

    Sutskever blazed a trail in machine learning from an early age, becoming a protégé of deep-learning pioneer Geoffrey Hinton at the University of Toronto. With Hinton and fellow grad student Alex Krizhevsky he cocreated an image-recognition system called AlexNet that stunned the world of AI with its accuracy and helped set off a flurry of investment in the then unfashionable technique of artificial neural networks.

    Sutskever later worked on AI research at Google, where he helped establish the modern era of neural-network-based AI. In 2015 Altman invited him to dinner with Elon Musk and Greg Brockman to talk about the idea of starting a new AI lab to challenge corporate dominance of the technology. Sutskever, Musk, Brockman, and Altman became key founders of OpenAI, which was announced in December 2015. It later pivoted its model, creating a for-profit arm and taking huge investment from Microsoft and other backers. Musk left OpenAI in 2018 after disagreeing with the company’s strategy, and he filed a lawsuit against the company in March this year claiming it had abandoned its founding mission.

    Sutskever’s departure leaves just one of the four OpenAI board members who voted for Altman’s ouster with a role at the company. Adam D’Angelo, an early Facebook employee and CEO of Q&A site Quora, was the only existing member of the board to remain as a director when Altman returned as CEO.

  • Generative AI Is Totally Shameless. I Want to Be It

    AI has a lot of problems. It helps itself to the work of others, regurgitating what it absorbs in a game of multidimensional Mad Libs and omitting all attribution, resulting in widespread outrage and litigation. When it draws pictures, it makes the CEOs white, puts people in awkward ethnic outfits, and has a tendency to imagine women as elfish, with light-colored eyes. Its architects sometimes seem to be part of a death cult that semi-worships a Cthulhu-like future AI god, and they focus great energies on supplicating to this immense imaginary demon (thrilling! terrifying!) instead of integrating with the culture at hand (boring, and you get yelled at). Even the more thoughtful AI geniuses seem OK with the idea that an artificial general intelligence is right around the corner, despite 75 years of failed precedent—the purest form of getting high on your own supply.

    So I should reject this whole crop of image-generating, chatting, large-language-model-based code-writing infinite typing monkeys. But, dammit, I can’t. I love them too much. I am drawn back over and over, for hours, to learn and interact with them. I have them make me lists, draw me pictures, summarize things, read for me. Where I work, we’ve built them into our code. I’m in the bag. Not my first hypocrisy rodeo.

    There’s a truism that helps me whenever the new big tech thing has every brain melting: I repeat to myself, “It’s just software.” Word processing was going to make it too easy to write novels, Photoshop looked like it would let us erase history, Bitcoin was going to replace money, and now AI is going to ruin society, but … it’s just software. And not even that much software: Lots of AI models could fit on a thumb drive with enough room left over for the entire run of Game of Thrones (or Microsoft Office). They’re interdimensional ZIP files, glitchy JPEGs, but for all of human knowledge. And yet they serve such large portions! (Not always. Sometimes I ask the AI to make a list and it gives up. “You can do it,” I type. “You can make the list longer.” And it does! What a terrible interface!)

    What I love, more than anything, is the quality that makes AI such a disaster: If it sees a space, it will fill it—with nonsense, with imagined fact, with links to fake websites. It possesses an absolute willingness to spout foolishness, balanced only by its carefree attitude toward plagiarism. AI is, very simply, a totally shameless technology.

    As with most people on Earth, shame is a part of my life, installed at a young age and frequently updated with shame service packs. I read a theory once that shame is born when a child expects a reaction from their parents—a laugh, applause—and doesn’t get it. That’s an oversimplification, but given all the jokes I’ve told that have landed flat, it sure rings true. Social media could be understood, in this vein, as a vast shame-creating machine. We all go out there with our funny one-liners and cool pictures, and when no one likes or faves them we feel lousy about it. A healthy person goes, “Ah well, didn’t land. Felt weird. Time to move on.”

    But when you meet shameless people they can sometimes seem like miracles. They have a superpower: the ability to be loathed, to be wrong, and yet to keep going. We obsess over them—our divas, our pop stars, our former presidents, our political grifters, and of course our tech industry CEOs. We know them by their first names and nicknames, not because they are our friends but because the weight of their personalities and influence has allowed them to claim their own domain names in the collective cognitive register.

    Are these shameless people evil, or wrong, or bad? Sure. Whatever you want. Mostly, though, they’re just big, by their own, shameless design. They contain multitudes, and we debate those multitudes. Do they deserve their fame, their billions, their Electoral College victory? We want them to go away but they don’t care. Not one bit. They plan to stay forever. They will be dead before they feel remorse.

    AI is like having my very own shameless monster as a pet. ChatGPT, my favorite, is the most shameless of the lot. It will do whatever you tell it to, regardless of the skills involved. It’ll tell you how to become a nuclear engineer, how to keep a husband, how to invade a country. I love to ask it questions that I’m ashamed to ask anyone else: “What is private equity?” “How can I convince my family to let me get a dog?” It helps me understand what’s happening with my semaglutide injections. It helps me write code—has in fact renewed my relationship with writing code. It creates meaningless, disposable images. It teaches me music theory and helps me write crappy little melodies. It does everything badly and confidently. And I want to be it. I want to be that confident, that unembarrassed, that ridiculously sure of myself.

  • With OpenAI’s Release of GPT-4o, Is ChatGPT Plus Still Worth It?

    Barret Zoph, a research lead at OpenAI, was recently demonstrating the new GPT-4o model and its ability to detect human emotions through a smartphone camera, when ChatGPT misidentified his face as a wooden table. After a quick laugh, Zoph assured GPT-4o that he’s not a table and asked the AI tool to take a fresh look at the app’s live video, rather than a photo he shared earlier. “Ah, that makes more sense,” said ChatGPT’s AI voice, before describing his facial expression and potential emotions.

    On Monday, OpenAI launched a new model for ChatGPT that can process text, audio, and images. In a surprising turn, the company announced that this model, GPT-4o, would be available for free, no subscription required. It’s a departure from the company’s previous rollout of GPT-4, which was released in March of last year for those who pay OpenAI’s $20-per-month subscription to ChatGPT Plus. In this current release, many of the features that were previously gated off to paying subscribers, like memory and web browsing, are now rolling out to free users as well.

    Last year, when I tested a nascent version of ChatGPT’s web browsing capability, it had flaws but was powerful enough to make the subscription seem worthwhile for early adopters looking to experiment with the latest technology. Now that OpenAI’s freshest AI model and previously gated features are available without a subscription, you may be wondering whether that $20 a month is still worthwhile. Here’s a quick breakdown to help you understand what’s available with OpenAI’s free version versus what you get with ChatGPT Plus.

    What’s Available With Free ChatGPT?

    To reiterate, you don’t need any kind of special subscription to start using the OpenAI GPT-4o model today. Just know that you’re rate limited to fewer prompts per hour than paid users, so make sure to be thoughtful about the questions you pose to the chatbot or you’ll quickly burn through your allotment of prompts.

    In addition to limited GPT-4o access, non-paying users received a major upgrade to their overall user experience, with multiple features that were previously just for paying customers. The GPT Store, where anyone can release a version of ChatGPT with custom instructions, is now widely available. Free users can also use ChatGPT’s web browsing tool and memory features as well as upload photos and files for the chatbot to analyze.

    What’s Still Gated to ChatGPT Plus?

    While GPT-4o is available without a subscription, you may want to keep ChatGPT Plus for two reasons: access to more prompts and newer features. “You can use the model significantly more on Plus,” Zoph tells WIRED. “There’s a lot of other exciting, future things to come as well.” Compared to non-subscribers, ChatGPT Plus subscribers are allowed to send GPT-4o five times as many prompts before having to wait or switch to a less powerful model. So, if you want to spend a decent amount of time messaging back and forth with OpenAI’s most powerful option, a subscription is necessary.

    Although some of the previously exclusive features for ChatGPT Plus are rolling out to non-paying users, the splashiest of updates are still offered first behind OpenAI’s paywall. The impressive voice mode that Zoph demonstrated on stage is arriving sometime over the next couple of weeks for ChatGPT Plus subscribers.

    In OpenAI’s demo videos, the bubbly AI voice sounds more playful than previous iterations and is able to answer questions in response to a live video feed. “I honestly think the ways people are going to discover use cases around this is gonna be incredibly creative,” says Zoph. During the presentation, he also showed how the voice mode could be used to translate between English and Italian. After the presentation, the company released another video showing speech translation working in real time.
