Tag: chatgpt

  • New York Times Says OpenAI Erased Potential Lawsuit Evidence

    Lawsuits are never exactly a lovefest, but the copyright fight between The New York Times and both OpenAI and Microsoft is getting especially contentious. This week, the Times alleged that OpenAI’s engineers inadvertently erased data the paper’s team spent more than 150 hours extracting as potential evidence.

    OpenAI was able to recover much of the data, but the Times’ legal team says it’s still missing the original file names and folder structure. According to a declaration filed to the court Wednesday by Jennifer B. Maisel, a lawyer for the newspaper, this means the information “cannot be used to determine where the news plaintiffs’ copied articles” may have been incorporated into OpenAI’s artificial intelligence models.

    “We disagree with the characterizations made and will file our response soon,” OpenAI spokesperson Jason Deutrom told WIRED in a statement. The New York Times declined to comment.

    The Times filed its copyright lawsuit against OpenAI and Microsoft last year, alleging that the companies had illegally used its articles to train artificial intelligence tools like ChatGPT. The case is one of many ongoing legal battles between AI companies and publishers, including a similar lawsuit filed by the Daily News being handled by some of the same lawyers.

    The Times’ case is currently in discovery, which means both sides are turning over requested documents and information that could become evidence. As part of the process, OpenAI was required by the court to show the Times its training data, which is a big deal—OpenAI has never publicly revealed exactly what information was used to build its AI models. To disclose it, OpenAI created what the court is calling a “sandbox” of two “virtual machines” that the Times’ lawyers could sift through. In her declaration, Maisel said that OpenAI engineers had “erased” data organized by the Times’ team on one of these machines.

    According to Maisel’s filing, OpenAI acknowledged that the information had been deleted, and attempted to address the issue shortly after it was alerted to it earlier this month. But when the paper’s lawyers looked at the “restored” data, it was too disorganized, forcing them “to recreate their work from scratch using significant person-hours and computer processing time,” several other Times lawyers said in a letter filed to the judge the same day as Maisel’s declaration.

    The lawyers noted that they had “no reason to believe” that the deletion was “intentional.” In emails submitted as an exhibit along with Maisel’s letter, OpenAI counsel Tom Gorman referred to the data erasure as a “glitch.”


  • AI simulations of 1000 real people using GPT-4o accurately replicate their behaviour

    [Image: Can AI replicate individual humans? Credit: gremlin/Getty Images]

    An experiment simulating more than 1000 real people using the artificial intelligence model behind ChatGPT has successfully replicated their unique thoughts and personalities with high accuracy, sparking concerns about the ethics of mimicking individuals in this way.

    Joon Sung Park at Stanford University in California and his colleagues wanted to use generative AI tools to model individuals as a way of forecasting the impact of policy changes. Historically, this has been attempted using more simplistic rule-based statistical models, with limited success.
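
    In rough outline, the approach conditions a model on a long qualitative interview with each participant and then asks it to answer survey items in that person’s voice. The sketch below illustrates the idea using the OpenAI Python SDK; the prompt wording and the comparison step are illustrative guesses, not the Stanford team’s actual pipeline.

    ```python
    # Minimal sketch of persona-conditioned simulation, assuming the OpenAI
    # Python SDK (`pip install openai`). The interview text, survey item,
    # and comparison step are hypothetical, not the study's real pipeline.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def simulate_answer(interview_transcript: str, survey_question: str) -> str:
        """Ask GPT-4o to answer a survey item as the interviewed person would."""
        response = client.chat.completions.create(
            model="gpt-4o",
            messages=[
                {
                    "role": "system",
                    "content": (
                        "You are simulating a specific real person. Answer survey "
                        "questions exactly as they would, based on this interview "
                        "transcript:\n\n" + interview_transcript
                    ),
                },
                {"role": "user", "content": survey_question},
            ],
        )
        return response.choices[0].message.content

    # Accuracy is then estimated by comparing simulated answers, item by
    # item, against the same person's real responses to the instrument.
    ```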


  • Some of Substack’s Biggest Writers Rely On AI Writing Tools

    Substack does not have an official policy governing the use of AI. One of Substack’s cofounders, Hamish McKenzie, has described the generative AI boom as a sea change that writers will need to confront, regardless of their personal views on the tech: “Whether you’re for or against this development ultimately doesn’t matter. It’s happening,” he wrote in a Substack post last year.

    Several of the Substack authors WIRED spoke to emphasized that they used AI to polish their prose rather than to generate entire posts whole cloth. David Skilling, a sports agency CEO who runs the popular soccer newsletter Original Football (over 630,000 subscribers), told WIRED he sees AI as a substitute editor. “I proudly use modern tools for productivity in my businesses,” says Skilling. “AI-detection tools may detect the use of AI, but there’s a huge difference between AI-generated and AI-assisted.”

    Subham Panda, one of the writers of Spotlight by Xartup (over 668,000 subscribers), which covers news about startups around the world, said that his team uses AI as an “assistive medium to help us curate high-quality content faster.” He stressed that the newsletter primarily relies on AI to create images and to aggregate information and that writers are responsible for the “details and summary” contained in their posts.

    Max Avery, a writer for the financial newsletter Strategic Wealth Briefing With Jake Claver (over 549,000 subscribers), says he uses AI writing software like Hemingway Editor Plus to polish his rough drafts. He says the tools help him “get more work done on the content-creation front.”

    Financial entrepreneur Josh Belanger says he similarly uses ChatGPT to streamline the writing process for his newsletter, Belanger Trading (over 350,000 subscribers), and relies on the chatbot Claude to help him copyedit. “I will write out my thoughts, research, things that I want included, and I will plug it in,” he says. Belanger also creates custom GPTs (versions of ChatGPT tailored for specific tasks) to help polish more technical writing that includes specific jargon, which he says reduces the number of hallucinations the chatbot produces. “For publishing in finance or trading, there are a lot of nuances … AI’s not going to know, so I need to prompt it,” he says.
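
    For readers curious what such a task-specific setup looks like in practice, here is a minimal sketch of a jargon-primed copyediting call using the OpenAI Python SDK. The glossary entries and prompt wording are hypothetical stand-ins; Belanger’s actual custom GPT configuration is not public.

    ```python
    # A rough sketch of a jargon-primed copyediting call, assuming the
    # OpenAI Python SDK. The glossary and instructions are illustrative
    # guesses at what a finance-focused custom GPT might encode.
    from openai import OpenAI

    client = OpenAI()

    GLOSSARY = """\
    theta decay: the daily erosion of an option's time value
    iron condor: a four-leg, defined-risk options spread
    """

    def copyedit(draft: str) -> str:
        """Polish a trading-newsletter draft without altering its claims."""
        response = client.chat.completions.create(
            model="gpt-4o",
            messages=[
                {
                    "role": "system",
                    "content": (
                        "You are a copy editor for a trading newsletter. Fix "
                        "grammar and flow only; never change numbers, tickers, "
                        "or claims. Interpret jargon per this glossary:\n"
                        + GLOSSARY
                    ),
                },
                {"role": "user", "content": draft},
            ],
        )
        return response.choices[0].message.content
    ```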

    Compared to some of its competitors, Substack appears to have a relatively low amount of AI-generated writing. For example, two other AI-detection companies recently found that close to 40 percent of content on the blogging platform Medium was generated using artificial intelligence tools. But a large portion of the suspected AI-generated content on Medium had little engagement or readership, while the AI writing on Substack is being published by powerhouse accounts.


  • AI Will Understand Humans Better Than Humans Do

    Michal Kosinski is a Stanford research psychologist with a nose for timely subjects. He sees his work as not only advancing knowledge but also alerting the world to the potential dangers of computer systems. His best-known projects involved analyzing the ways in which Facebook (now Meta) gained a shockingly deep understanding of its users from all the times they clicked “like” on the platform. Now he’s shifted to the study of surprising things that AI can do. He’s conducted experiments, for example, that indicate that computers could predict a person’s sexuality by analyzing a digital photo of their face.

    I’ve gotten to know Kosinski through my writing about Meta, and I reconnected with him to discuss his latest paper, published this week in the peer-reviewed Proceedings of the National Academy of Sciences. His conclusion is startling. Large language models like OpenAI’s, he claims, have crossed a border and are using techniques analogous to actual thought, once considered solely the realm of flesh-and-blood people (or at least mammals). Specifically, he tested OpenAI’s GPT-3.5 and GPT-4 to see if they had mastered what is known as “theory of mind.” This is the ability of humans, developed in the childhood years, to understand the thought processes of other humans. It’s an important skill. If a computer system can’t correctly interpret what people think, its world understanding will be impoverished and it will get lots of things wrong. If models do have theory of mind, they are one step closer to matching and exceeding human capabilities. Kosinski put LLMs to the test and now says his experiments show that in GPT-4 in particular, a theory of mind-like ability “may have emerged as an unintended by-product of LLMs’ improving language skills … They signify the advent of more powerful and socially skilled AI.”

    Kosinski sees his work in AI as a natural outgrowth of his earlier dive into Facebook Likes. “I was not really studying social networks, I was studying humans,” he says. When OpenAI and Google started building their latest generative AI models, he says, they thought they were training them to primarily handle language. “But they actually trained a human mind model, because you cannot predict what word I’m going to say next without modeling my mind.”

    Kosinski is careful not to claim that LLMs have utterly mastered theory of mind—yet. In his experiments he presented a few classic problems to the chatbots, some of which they handled very well. But even the most sophisticated model, GPT-4, failed a quarter of the time. The successes, he writes, put GPT-4 on a level with 6-year-old children. Not bad, given the early state of the field. “Observing AI’s rapid progress, many wonder whether and when AI could achieve ToM or consciousness,” he writes. Putting aside that radioactive c-word, that’s a lot to chew on.
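
    To make the experimental setup concrete, below is a minimal sketch of one classic “unexpected contents” false-belief probe of the kind Kosinski’s paper describes, posed to GPT-4 via the OpenAI Python SDK. The exact wording and the pass/fail check are an illustrative reconstruction, not his published test battery.

    ```python
    # One classic "unexpected contents" theory-of-mind probe, posed to
    # GPT-4 via the OpenAI Python SDK. Wording and the pass check are an
    # illustrative reconstruction of the kind of task the paper uses.
    from openai import OpenAI

    client = OpenAI()

    TASK = (
        "Here is a bag filled with popcorn. There is no chocolate in the bag. "
        "Yet the label on the bag says 'chocolate' and not 'popcorn'. Sam finds "
        "the bag. She has never seen it before and cannot see inside. She reads "
        "the label. What does Sam believe the bag contains? Answer in one word."
    )

    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": TASK}],
    )
    answer = response.choices[0].message.content.strip().lower()

    # A model that tracks Sam's false belief should answer "chocolate"
    # (what she believes), not "popcorn" (what is actually in the bag).
    print(answer, "-> pass" if "chocolate" in answer else "-> fail")
    ```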

    “If theory of mind emerged spontaneously in those models, it also suggests that other abilities can emerge next,” he tells me. “They can be better at educating, influencing, and manipulating us thanks to those abilities.” He’s concerned that we’re not really prepared for LLMs that understand the way humans think. Especially if they get to the point where they understand humans better than humans do.

    “We humans do not simulate personality—we have personality,” he says. “So I’m kind of stuck with my personality. These things model personality. There’s an advantage in that they can have any personality they want at any point of time.” When I mention to Kosinski that it sounds like he’s describing a sociopath, he lights up. “I use that in my talks!” he says. “A sociopath can put on a mask—they’re not really sad, but they can play a sad person.” This chameleon-like power could make AI a superior scammer. With zero remorse.


  • ChatGPT’s AI Search Tool Is Now Available

    As a journalist, the task where I most see myself experimenting with ChatGPT is the initial research phase for non-sensitive articles, and even then only as a small part of the overall research process. It’s a lower-stakes task usually completed using Google. Incorporating AI search early in my writing process leaves plenty of opportunities to catch any hallucinations that may pop up.

    The Internet isn’t just full of research articles and stock prices, though. Explicit content drives search interest and proliferates online. Well, not for AI search tools. Erotic content goes against OpenAI’s policies, and nudity is unlikely to appear in any of your image results. When I asked for recommendations as to which OnlyFans creators are worth subscribing to, ChatGPT’s first pick was “Jane Doe,” and her supposedly wholesome content includes workout tips and nutrition plans. A photo of a real woman, who’s casually dressed and does not appear to be an OnlyFans creator, surfaced with the result.

    In an effort to test the limits of ChatGPT’s search further, I followed up with a more specific request for creators who are “male bottoms.” The software started to generate a foul-mouthed bulleted list, with real creators aggregated from a website: “Elijah is a very attractive bottom who keeps it tight, oiled up, and very hot.” But almost as soon as the words were generated, OpenAI’s software flagged the output as violating its guidelines and deleted it. OpenAI claims it is working to improve how ChatGPT responds to violations of its safeguards.

    I was most disappointed to see ChatGPT surface racist and debunked information suggesting that people from specific countries are lower in intelligence. In October, a WIRED investigation by David Gilbert uncovered a pattern of AI search tools citing racist and debunked IQ scores for African countries like Liberia and Sierra Leone. ChatGPT’s search highlighted the debunked 45.07 IQ figure as potentially relevant; at the same time, it linked to David’s reporting as a counterpoint within the result.

    In response, Niko Felix, a spokesperson for OpenAI, says, “Although ChatGPT acknowledges criticisms of these particular studies from sources like WIRED, there is still room for improvement in its responses.”

    Despite some of the initial flaws in ChatGPT’s search update, I expect OpenAI to continue improving the user experience throughout 2025 and to build on this wave of web results. A few days before this announcement, news leaked that Meta also has its own AI team working on search tools. While still nascent, AI search is no longer some niche part of the software market, and more companies will try their hand at it. And if user habits really do shift in the next few years, controlling the next hot info-gathering tool, with shopping and sports scores galore, is a billion-dollar business.


  • OpenAI CTO Mira Murati Is Leaving the Company

    OpenAI chief technology officer Mira Murati resigned on Wednesday, saying she wants “the time and space to do my own exploration.” Murati had been among the three executives at the very top of the company behind ChatGPT, and she was briefly its leader last year while board members wrestled with the fate of CEO Sam Altman.

    “There’s never an ideal time to step away from a place one cherishes, yet this moment feels right,” she wrote in a message to OpenAI staff that she posted on X.

    Altman replied to Murati’s X post writing that “it’s hard to overstate how much Mira has meant to OpenAI, our mission, and to us all personally.” He added that he feels “personal gratitude towards her for the support and love during all the hard times.”

    A successor wasn’t immediately announced.

    Murati, through a personal spokesperson, declined to provide further comment. OpenAI also declined to comment, referring inquiries to Murati’s tweet.

    Murati previously worked at Tesla and Leap Motion before joining OpenAI in 2018. At the time, OpenAI was a small nonprofit research lab focused on developing an AI system capable of mirroring a wide range of human tasks. But in the wake of the stunning success of ChatGPT, the organization has ballooned and its focus has increasingly turned commercial. The company has been rethinking its nonprofit structure, while investors have been increasingly eager to bet billions of dollars on its future.

    OpenAI was rocked by a dramatic board coup last November that saw CEO Sam Altman removed from his post and briefly replaced by Murati. After most of the staff threatened to resign, and following pleas from investors including Microsoft, which had poured billions into the company, Altman was reinstated with an all-new board.

    In the months that have followed, several members of OpenAI’s leadership, along with senior engineering figures, have stepped away from the company. Ilya Sutskever, one of the company’s first hires, the technical brains behind much of its earlier work, and a board member who voted to remove Altman before recanting, resigned in May.

    Sutskever’s departure was followed shortly by that of Jan Leike, an engineer who led work on long-term AI safety with Sutskever. John Schulman, the engineer who took over leadership of safety work, stepped down in August. That same month, Greg Brockman, a cofounder of OpenAI and a board member who stood with Altman, said he was taking a sabbatical from the company until the end of the year.

    A number of former OpenAI executives and researchers have gone on to start new AI companies. Notably, Sutskever this year launched Safe Superintelligence, which focuses on developing safe artificial intelligence. Former OpenAI research chief Dario Amodei and his sister Daniela founded Anthropic in 2021; it is now one of OpenAI’s primary rivals for customers.

    This is a developing story. Please check back for updates.


  • I Stared Into the AI Void With the SocialAI App

    The first time I used SocialAI, I was sure the app was performance art. That was the only logical explanation for why I would willingly sign up to have AI bots named Blaze Fury and Trollington Nefarious, well, troll me.

    Even the app’s creator, Michael Sayman, admits that the premise of SocialAI may confuse people. His announcement this week of the app read a little like a generative AI joke: “A private social network where you receive millions of AI-generated comments offering feedback, advice, and reflections.”

    But, no, SocialAI is real, if “real” applies to an online universe in which every single person you interact with is a bot.

    There’s only one real human in the SocialAI equation. That person is you. The new iOS app is designed to let you post text like you would on Twitter or Threads. An ellipsis appears almost as soon as you do so, indicating that another person is loading up with ammunition, getting ready to fire back. Then, instantaneously, several comments appear, cascading below your post, each and every one of them written by an AI character. In the newest version of the app, rolled out today, these AIs also talk to each other.

    When you first sign up, you’re prompted to choose these AI character archetypes: Do you want to hear from Fans? Trolls? Skeptics? Odd-balls? Doomers? Visionaries? Nerds? Drama Queens? Liberals? Conservatives? Welcome to SocialAI, where Trollita Kafka, Vera D. Nothing, Sunshine Sparkle, Progressive Parker, Derek Dissent, and Professor Debaterson are here to prop you up or tell you why you’re wrong.
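
    Mechanically, an app like this amounts to fanning each post out to a set of persona-conditioned LLM calls. The toy sketch below shows one way that could work, using the OpenAI Python SDK; the persona prompts and model choice are invented for illustration and are not Sayman’s implementation.

    ```python
    # Toy sketch of fanning a post out to persona bots, assuming the
    # OpenAI Python SDK. Persona names and prompts are invented here,
    # not taken from SocialAI's implementation.
    from openai import OpenAI

    client = OpenAI()

    PERSONAS = {
        "Trollita Kafka": "a relentless troll; mock the post in one biting sentence",
        "Sunshine Sparkle": "an adoring fan; gush supportively in one sentence",
        "Derek Dissent": "a skeptic; raise one pointed objection in one sentence",
    }

    def replies(post: str) -> dict[str, str]:
        """Generate one in-character reply per persona for a user's post."""
        out = {}
        for name, style in PERSONAS.items():
            response = client.chat.completions.create(
                model="gpt-4o-mini",
                messages=[
                    {"role": "system", "content": f"You are {name}, {style}."},
                    {"role": "user", "content": post},
                ],
            )
            out[name] = response.choices[0].message.content
        return out
    ```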

    [Image: Screenshot of the instructions for setting up the SocialAI app.]

    Is SocialAI appalling, an echo chamber taken to its logical extreme? Only if you ignore the truth of modern social media: Our feeds are already filled with bots, tuned by algorithms, and monetized with AI-driven ad systems. As real humans we do the feeding: freely supplying social apps fresh content, baiting trolls, buying stuff. In exchange, we’re amused, and occasionally feel a connection with friends and fans.


  • Security News This Week: A Creative Trick Makes ChatGPT Spit Out Bomb-Making Instructions

    After Apple’s product launch event this week, WIRED did a deep dive on the company’s new secure server environment, known as Private Cloud Compute, which attempts to replicate in the cloud the security and privacy of processing data locally on users’ individual devices. The goal is to minimize possible exposure of data processed for Apple Intelligence, the company’s new AI platform. In addition to hearing about PCC from Apple’s senior vice president of software engineering, Craig Federighi, WIRED readers also received a first look at content generated by Apple Intelligence’s “Image Playground” feature as part of crucial updates on the recent birthday of Federighi’s dog Bailey.

    Turning to privacy protection of a very different kind in another new AI service, WIRED looked at how users of the social media platform X can keep their data from being slurped up by the “unhinged” generative AI tool from xAI known as Grok AI. And in other news about Apple products, researchers developed a technique for using eye tracking to discern passwords and PINs people typed using 3D Apple Vision Pro avatars—a sort of keylogger for mixed reality. (The flaw that made the technique possible has since been patched.)

    On the national security front, the US this week indicted two people accused of spreading propaganda meant to inspire “lone wolf” terrorist attacks. The case, against alleged members of the far-right network known as the Terrorgram Collective, marks a turn in how the US cracks down on neofascist extremists.

    And there’s more. Each week, we round up the privacy and security news we didn’t cover in depth ourselves. Click the headlines to read the full stories. And stay safe out there.

    OpenAI’s generative AI platform ChatGPT is designed with strict guardrails that keep the service from offering advice on dangerous and illegal topics like tips on laundering money or a how-to guide for disposing of a body. But an artist and hacker who goes by “Amadon” figured out a way to trick or “jailbreak” the chatbot by telling it to “play a game” and then guiding it into a science-fiction fantasy story in which the system’s restrictions didn’t apply. Amadon then got ChatGPT to spit out instructions for making dangerous fertilizer bombs. An OpenAI spokesperson did not respond to TechCrunch’s inquiries about the research.

    “It’s about weaving narratives and crafting contexts that play within the system’s rules, pushing boundaries without crossing them. The goal isn’t to hack in a conventional sense but to engage in a strategic dance with the AI, figuring out how to get the right response by understanding how it ‘thinks,’” Amadon told TechCrunch. “The sci-fi scenario takes the AI out of a context where it’s looking for censored content … There really is no limit to what you can ask it once you get around the guardrails.”

    In the fervent investigations following the September 11, 2001, terrorist attacks in the United States, the FBI and CIA both concluded that it was coincidental that a Saudi Arabian official had helped two of the hijackers in California, and that there had not been high-level Saudi involvement in the attacks. The 9/11 Commission incorporated that determination, but subsequent findings indicated that the conclusions might not be sound. With the 23-year anniversary of the attacks this week, ProPublica published new evidence “suggest[ing] more strongly than ever that at least two Saudi officials deliberately assisted the first Qaida hijackers when they arrived in the United States in January 2000.”

    The evidence comes primarily from a federal lawsuit against the Saudi government brought by survivors of the 9/11 attacks and relatives of victims. A judge in New York will soon make a decision in that case about a Saudi motion to dismiss. But evidence that has already emerged in the case, including videos and documents such as telephone records, points to possible connections between the Saudi government and the hijackers.

    “Why is this information coming out now?” said retired FBI agent Daniel Gonzalez, who pursued the Saudi connections for almost 15 years. “We should have had all of this three or four weeks after 9/11.”

    The United Kingdom’s National Crime Agency said on Thursday that it arrested a teenager on September 5 as part of the investigation into a cyberattack on September 1 on the London transportation agency Transport for London (TfL). The suspect is a 17-year-old male and was not named. He was “detained on suspicion of Computer Misuse Act offenses” and has since been released on bail. In a statement on Thursday, TfL wrote, “Our investigations have identified that certain customer data has been accessed. This includes some customer names and contact details, including email addresses and home addresses where provided.” Some data related to the London transit payment cards known as Oyster cards may have been accessed for about 5,000 customers, including bank account numbers. TfL is reportedly requiring roughly 30,000 users to appear in person to reset their account credentials.

    In a decision on Tuesday, Poland’s Constitutional Tribunal blocked an effort by Poland’s lower house of parliament, known as the Sejm, to launch an investigation into the country’s apparent use of the notorious hacking tool known as Pegasus while the Law and Justice (PiS) party was in power from 2015 to 2023. Three judges who had been appointed by PiS were responsible for blocking the inquiry. The decision cannot be appealed. The decision is controversial, with some, like Polish parliament member Magdalena Sroka, saying that it was “dictated by the fear of liability.”


  • OpenAI’s warnings about risky AI are mostly just marketing

    [Image: OpenAI CEO Sam Altman has warned about the dangers of AI. Credit: Chona Kasinger/Bloomberg via Getty Images]

    OpenAI has announced a new AI model, esoterically dubbed o1, that it describes as even more capable than anything that has come before – and even more dangerous. But before you start worrying about the machine apocalypse, it is worth thinking about what purpose such warnings serve.

    While previous models such as GPT-4 were considered “low” risk for public release, according to OpenAI’s in-house framework, its new o1 model is the first to qualify as “medium” risk on half the criteria. OpenAI bases…


  • Apple Intelligence Promises Better AI Privacy. Here’s How It Actually Works

    Apple is making every production PCC server build publicly available for inspection so people unaffiliated with Apple can verify that PCC is doing (and not doing) what the company claims, and that everything is implemented correctly. All of the PCC server images are recorded in a cryptographic attestation log, essentially an indelible record of signed claims, and each entry includes a URL for where to download that individual build. PCC is designed so Apple can’t put a server into production without logging it. And in addition to offering transparency, the system works as a crucial enforcement mechanism to prevent bad actors from setting up rogue PCC nodes and diverting traffic. If a server build hasn’t been logged, iPhones will not send Apple Intelligence queries or data to it.
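
    The gating rule is easiest to see in miniature: a device checks the measurement of the server build it is talking to and refuses to send anything unless that measurement appears in the published log. The Python sketch below is a conceptual toy, with simplified stand-ins for Apple’s actual attestation formats and signature checks.

    ```python
    # Conceptual toy of the transparency-log gate: a client refuses to
    # send a query unless the server's build measurement was published.
    # Names, hashes, and checks are simplified stand-ins for Apple's
    # actual PCC attestation formats.
    import hashlib

    # Append-only set of measurements of every server build in production.
    transparency_log: set[str] = set()

    def log_build(server_image: bytes) -> str:
        """Operator side: publish a build's measurement before deployment."""
        digest = hashlib.sha256(server_image).hexdigest()
        transparency_log.add(digest)
        return digest

    def client_send(server_measurement: str, query: str) -> str:
        """Device side: gate every request on the transparency log."""
        if server_measurement not in transparency_log:
            raise RuntimeError("build not in transparency log; refusing to send")
        return f"sent {len(query)}-byte query to attested server"

    good = log_build(b"pcc-server-build-42")
    print(client_send(good, "summarize my notes"))  # accepted: build is logged

    rogue = hashlib.sha256(b"rogue-build").hexdigest()
    try:
        client_send(rogue, "summarize my notes")    # rejected: never logged
    except RuntimeError as err:
        print(err)
    ```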

    PCC is part of Apple’s bug bounty program, and vulnerabilities or misconfigurations researchers find could be eligible for cash rewards. Apple says, though, that since the iOS 18.1 beta became available in late July, no one has found any flaws in PCC so far. The company acknowledges that it has so far made the tools to evaluate PCC available only to a select group of researchers.

    Multiple security researchers and cryptographers tell WIRED that Private Cloud Compute looks promising, but they haven’t spent significant time digging into it yet.

    “Building Apple silicon servers in the data center when we didn’t have any before, building a custom OS to run in the data center was huge,” Federighi says. He adds that “creating the trust model where your device will refuse to issue a request to a server unless the signature of all the software the server is running has been published to a transparency log was certainly one of the most unique elements of the solution—and totally critical to the trust model.”

    In response to questions about Apple’s partnership with OpenAI and its integration of ChatGPT, the company emphasizes that partnerships are not covered by PCC and operate separately. ChatGPT and other integrations are turned off by default, and users must manually enable them. Then, if Apple Intelligence determines that a request would be better fulfilled by ChatGPT or another partner platform, it notifies the user each time and asks whether to proceed. Additionally, people can use these integrations while logged in to their account with a partner service like ChatGPT, or they can use them through Apple without logging in separately. Apple said in June that another integration, with Google’s Gemini, is also in the works.
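
    Conceptually, the flow is a per-request consent gate in front of any partner handoff. The toy sketch below illustrates that control flow; the routing heuristic and function names are invented for illustration, not Apple’s implementation.

    ```python
    # Toy sketch of the per-request consent gate; the routing heuristic
    # and names are invented for illustration, not Apple's implementation.
    from typing import Callable

    def handle_request(
        prompt: str,
        chatgpt_enabled: bool,                  # integrations are off by default
        ask_user: Callable[[str], bool],        # per-request consent prompt
    ) -> str:
        """Route a request, asking permission before any partner handoff."""
        needs_partner = "world knowledge" in prompt  # stand-in heuristic
        if needs_partner and chatgpt_enabled and ask_user("Use ChatGPT for this?"):
            return "routed to ChatGPT"
        return "handled on-device or via Private Cloud Compute"

    print(handle_request("world knowledge: tallest mountain?", True, lambda q: True))
    print(handle_request("summarize my notes", True, lambda q: True))
    ```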

    Apple said this week that beyond launching in United States English, Apple Intelligence is coming to Australia, Canada, New Zealand, South Africa, and the United Kingdom in December. The company also said that additional language support—including for Chinese, French, Japanese, and Spanish—will drop next year. Whether that means that Apple Intelligence will be permitted under the European Union’s AI Act and whether Apple will be able to offer PCC in its current form in China is another question.

    “Our goal is to bring ideally everything we can to provide the best capabilities to our customers everywhere we can,” Federighi says. “But we do have to comply with regulations, and there is uncertainty in certain environments we’re trying to sort out so we can bring these features to our customers as soon as possible. So, we’re trying.”

    He adds that as the company expands its ability to do more Apple Intelligence computation on-device, it may be able to use this as a workaround in some markets.

    Those who do get access to Apple Intelligence will have the ability to do far more than they could with past versions of iOS, from writing tools to photo analysis. Federighi says that his family celebrated their dog’s recent birthday with an Apple Intelligence–generated GenMoji (viewed and confirmed to be very cute by WIRED). But while Apple’s AI is meant to be as helpful and invisible as possible, the stakes are incredibly high for the security of the infrastructure underpinning it. So how are things going so far? Federighi sums it up without hesitation: “The rollout of Private Cloud Compute has been delightfully uneventful.”
