Tag: chatgpt

  • Supremacy review: Riveting exploration of how AI models like ChatGPT changed the world

    Tel Aviv University before a talk from OpenAI CEO Sam Altman in June 2023

    REUTERS/Amir Cohen

    Supremacy
    Parmy Olson (Macmillan Business (UK); St Martin’s Press (US))

    For most people, ChatGPT appeared to materialise out of thin air. Within weeks of OpenAI’s quiet launch of the AI chatbot, it had become the fastest-growing app of all time and, almost two years later, it is nearly as well known as Google or Facebook. In the meantime, companies worldwide have gone gaga for the technology, with little time to pause to consider the wider societal consequences. So how did we get here and who was responsible?…


  • Using an AI chatbot or voice assistant makes it harder to spot errors

    Voice assistants provide information in a casual way

    Edwin Tan/Getty Images

    The conversational tone of an AI chatbot or voice-based assistant seems like a good way to learn about and understand new concepts, but these tools may actually make us more willing to believe inaccuracies, compared with information presented in a static format, like a Wikipedia article.

    To investigate how the way we receive information can change how we perceive it, Sonja Utz at the University of Tübingen, Germany, and her colleagues asked about 1200 participants to engage with one of three formats.


  • Can ChatGPT-4o Be Trusted With Your Private Data?

    OpenAI says this data is used to train the AI model and improve its responses, but the terms allow the firm to share your personal information with affiliates, vendors, service providers, and law enforcement. “So it’s hard to know where your data will end up,” says Love.

    OpenAI’s privacy policy states that ChatGPT does collect the information needed to create an account or to communicate with the business, says Bharath Thota, a data scientist and chief solutions officer of the analytics practice at management consulting firm Kearney, which advises firms on managing and using AI data to power new revenue streams.

    Part of this data collection includes full names, account credentials, payment card information, and transaction history, he says. “Personal information can also be stored, particularly if images are uploaded as part of prompts. Likewise, if a user decides to connect with any of the company’s social media pages like Facebook, LinkedIn, or Instagram, personal information may be collected if they’ve shared their contact details.”

    OpenAI uses consumer data like other big tech and social media companies, but it does not sell advertising. Instead, it provides tools—an important difference, says Jeff Schwartzentruber, senior machine learning scientist at security firm eSentire. “The user input data is not used directly as a commodity. Instead, it is used to improve the services that benefit the user—but it also increases the value of OpenAI’s intellectual property.”

    Privacy Controls

    Since its launch in 2020 and amid criticism and privacy scandals, OpenAI has introduced tools and controls you can use to lock down your data. OpenAI says it is “committed to protecting people’s privacy.”

    For ChatGPT specifically, OpenAI says it understands users may not want their information used to improve its models and therefore provides ways for them to manage their data. “ChatGPT Free and Plus users can easily control whether they contribute to future model improvements in their settings,” the firm writes on its website, adding that it does not train on API, ChatGPT Enterprise, and ChatGPT Team customer data by default.

    “We provide ChatGPT users with a number of privacy controls, including giving them an easy way to opt out of training our AI models and a temporary chat mode that automatically deletes chats on a regular basis,” OpenAI spokesperson Taya Christianson tells WIRED.

    The firm says it does not seek out personal information to train its models, and it does not use public information on the internet to build profiles about people, advertise to them, or target them—or to sell user data.

    OpenAI does not train its models on audio clips from voice chats—unless you choose to share your audio “to improve voice chats for everyone,” the Voice Chat FAQ on OpenAI’s website notes.

    “If you share your audio with us, then we may use audio from your voice chats to train our models,” OpenAI says in its Voice Chats FAQ. Meanwhile, transcribed chats may be used to train models depending on your choices and plan.


  • Don’t Let Mistrust of Tech Companies Blind You to the Power of AI

    Meanwhile, in less visible ways, AI is already changing education, commerce, and the workplace. One friend recently told me about a big IT firm he works with. The company had a lengthy and long-established protocol for launching major initiatives that involved designing solutions, coding up the product, and engineering the rollout. Moving from concept to execution took months. But he recently saw a demo that applied state-of-the-art AI to a typical software project. “All of those things that took months happened in the space of a few hours,” he says. “That made me agree with your column. Tons of the companies that surround us are now animated corpses.” No wonder people are freaked.

    What fuels a lot of the rage against AI is mistrust of the companies building and promoting it. By coincidence I had a breakfast scheduled this week with Ali Farhadi, the CEO of the Allen Institute for AI, a nonprofit research effort. He’s 100 percent convinced that the hype is justified but also empathizes with those who don’t accept it—because, he says, the companies that are trying to dominate the field are viewed with suspicion by the public. “AI has been treated as this black box thing that no one knows about, and it’s so expensive only four companies can do it,” Farhadi says. The fact that AI developers are moving so quickly fuels the distrust even more. “We collectively don’t understand this, yet we’re deploying it,” he says. “I’m not against that, but we should expect these systems will behave in unpredictable ways, and people will react to that.” Farhadi, who is a proponent of open source AI, says that at the least the big companies should publicly disclose what materials they use to train their models.

    Compounding the issue is that many people involved in building AI also pledge their devotion to producing AGI. While many key researchers believe this will be a boon to humanity—it’s the founding principle of OpenAI—they have not made the case to the public. “People are frustrated with the notion that this AGI thing is going to come tomorrow or one year or in six months,” says Farhadi, who is not a fan of the concept. He says AGI is not a scientific term but a fuzzy notion that’s mucking up the adoption of AI. “In my lab when a student uses those three letters, it just delays their graduation by six months,” he says.

    Personally I’m agnostic on the AGI issue—I don’t think we’re on the cusp of it but simply don’t know what will happen in the long run. When you talk to people on the front lines of AI, it turns out that they don’t know, either.

    Some things do seem clear to me, and I think that these will eventually become apparent to all—even those pitching spitballs at me on X. AI will get more powerful. People will find ways to use it to make their jobs and personal lives easier. Also, many folks are going to lose their jobs, and entire companies will be disrupted. It will be small consolation that new jobs and firms might emerge from an AI boom, because some of the displaced people will still be stuck in unemployment lines or cashiering at Walmart. In the meantime, everyone in the AI world—including columnists like me—would do well to understand why people are so enraged, and respect their justifiable discontent.


  • US National Security Experts Warn AI Giants Aren’t Doing Enough to Protect Their Secrets

    Google in public comments to the NTIA ahead of its report said it expects “to see increased attempts to disrupt, degrade, deceive, and steal” models. But it added that its secrets are guarded by a “security, safety, and reliability organization consisting of engineers and researchers with world-class expertise” and that it was working on “a framework” that would involve an expert committee to help govern access to models and their weights.

    Like Google, OpenAI said in comments to the NTIA that there was a need for both open and closed models, depending on the circumstances. OpenAI, which develops models such as GPT-4 and services and apps that build on them, like ChatGPT, last week formed its own security committee on its board and this week published details on its blog about the security of the technology it uses to train models. The blog post expressed hope that the transparency would inspire other labs to adopt protective measures. It didn’t specify from whom the secrets needed protecting.

    Speaking alongside Rice at Stanford, RAND CEO Jason Matheny echoed her concerns about security gaps. By using export controls to limit China’s access to powerful computer chips, the US has hampered Chinese developers’ ability to develop their own models, Matheny said. He claimed that’s increased their need to steal AI software outright.

    By Matheny’s estimate, spending a few million dollars on a cyberattack that steals AI model weights that cost an American company hundreds of millions of dollars to create is well worth it for China. “It’s really hard, and it’s really important, and we’re not investing enough nationally to get that right,” Matheny said.

    China’s embassy in Washington, DC, did not immediately respond to WIRED’s request for comment on theft accusations, but in the past has described such claims as baseless smears by Western officials.

    Google has said that it tipped off law enforcement about the incident that became the US case alleging theft of AI chip secrets for China. While the company has described maintaining strict safeguards to prevent the theft of its proprietary data, court papers show it took considerable time for Google to catch the defendant, Linwei Ding, a Chinese national who has pleaded not guilty to the federal charges.

    The engineer, who also goes by Leon, was hired in 2019 to work on software for Google’s supercomputing data centers, according to prosecutors. Over about a year starting in 2022, he allegedly copied more than 500 files containing confidential information to his personal Google account. The scheme worked in part, court papers say, by the employee pasting information into Apple’s Notes app on his company laptop, converting the files to PDFs, and uploading them elsewhere, all the while evading Google’s technology meant to catch that sort of exfiltration.

    While engaged in the alleged stealing, the US claims the employee was in touch with the CEO of an AI startup in China and had moved to start his own Chinese AI company. If convicted, he faces up to 10 years in prison.


  • OpenAI Offers a Peek Inside the Guts of ChatGPT

    ChatGPT developer OpenAI’s approach to building artificial intelligence came under fire this week from former employees who accuse the company of taking unnecessary risks with technology that could become harmful.

    Today OpenAI released a new research paper apparently aimed at showing it is serious about tackling AI risk by making its models more explainable. In the paper, researchers from the company lay out a way to peer inside the AI model that powers ChatGPT. They devised a way to identify how it stores certain concepts—including those that might perhaps cause an AI system to misbehave.

    Although the research makes OpenAI’s work on keeping AI in check more visible, it also highlights recent turmoil at the company. The new research was performed by the recently disbanded “superalignment” team at OpenAI that was dedicated to studying the long-term risks posed by the technology.

    The former group’s coleads Ilya Sutskever and Jan Leike, both of whom have left OpenAI, are named as coauthors. Sutskever, a cofounder of the company and formerly chief scientist, was among the board members who voted to fire OpenAI CEO Sam Altman last November, triggering a chaotic few days that culminated in Altman’s return as leader.

    ChatGPT is powered by a family of so-called large language models called GPT, based on an approach to machine learning known as artificial neural networks. These mathematical networks have shown great power to learn useful tasks by analyzing example data, but their workings cannot be scrutinized as easily as those of conventional computer programs. The complex interplay between the layers of “neurons” within an artificial neural network makes reverse engineering why a system like ChatGPT came up with a particular response hugely challenging.

    “Unlike with most human creations, we don’t really understand the inner workings of neural networks,” the researchers behind the work write in an accompanying blog post. Some prominent AI researchers believe that the most powerful AI models including ChatGPT could perhaps be used to design chemical or biological weapons and coordinate cyber attacks. A longer-term concern is that AI models may choose to hide information or act in harmful ways in order to achieve their goals.

    OpenAI’s new paper outlines a technique that lessens the mystery a little, by identifying patterns that represent specific concepts inside a machine learning system with help from an additional machine learning model. The key innovation is refining the network used to peer inside the system of interest by identifying concepts, to make it more efficient.
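
    One common way researchers build this kind of concept-finding auxiliary model is a sparse autoencoder trained on a model's internal activations; the toy sketch below illustrates that general idea only, and is not OpenAI's actual method. The layer sizes, sparsity penalty, and random stand-in activations are all hypothetical.

        import torch
        import torch.nn as nn

        class SparseAutoencoder(nn.Module):
            """Auxiliary network that learns a sparse, overcomplete code for a model's activations."""
            def __init__(self, d_model: int, d_code: int):
                super().__init__()
                self.encoder = nn.Linear(d_model, d_code)
                self.decoder = nn.Linear(d_code, d_model)

            def forward(self, activations):
                code = torch.relu(self.encoder(activations))  # non-negative code, pushed toward sparsity
                return self.decoder(code), code

        d_model, d_code = 768, 8192                 # hypothetical sizes; real setups are far larger
        sae = SparseAutoencoder(d_model, d_code)
        optimizer = torch.optim.Adam(sae.parameters(), lr=1e-4)
        l1_weight = 1e-3                            # strength of the sparsity penalty

        activations = torch.randn(64, d_model)      # stand-in for activations captured from one layer of an LLM
        optimizer.zero_grad()
        reconstruction, code = sae(activations)
        loss = nn.functional.mse_loss(reconstruction, activations) + l1_weight * code.abs().mean()
        loss.backward()
        optimizer.step()

    After training on many real activations, individual code units can be inspected by checking which inputs switch them on, which is roughly how patterns come to be associated with recognizable concepts.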

    OpenAI proved out the approach by identifying patterns that represent concepts inside GPT-4, one of its largest AI models. The company released code related to the interpretability work and a visualization tool that can be used to see how the words in different sentences activate concepts including profanity and erotic content in GPT-4 and another model. Knowing how a model represents certain concepts could be a step towards being able to dial down those associated with unwanted behavior, to keep an AI system on the rails. It could also make it possible to tune an AI system to favor certain topics or ideas.


  • Chatbot Teamwork Makes the AI Dream Work

    Turning to a friend or coworker can make tricky problems easier to tackle. Now it looks like having AI chatbots team up with each other can make them more effective.

    I’ve been playing this week with AutoGen, an open source software framework for AI agent collaboration developed by researchers at Microsoft and academics at Pennsylvania State University, the University of Washington, and Xidian University in China. The software taps OpenAI’s large language model GPT-4 to let you create multiple AI agents with different personas, roles, and objectives that can be prompted to solve specific problems.

    To put the idea of AI collaboration to the test, I had two AI agents work together on a plan for how to write about AI collaboration.

    By modifying AutoGen’s code I created a “reporter” and “editor” that discussed writing about AI agent collaboration. After talking about the importance of “showcasing how industries such as health care, transportation, retail, and more are using multi-agent AI,” the pair agreed that the proposed piece should dive into the “ethical dilemmas” posed by the technology.

    It’s too early to write much about any of those suggested topics—the concept of multi-agent AI collaboration is mostly at the research phase. But the experiment demonstrated a strategy that can amplify the power of AI chatbots.
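
    For anyone who wants to try a setup along those lines, here is a minimal sketch of a two-agent conversation, assuming the open source pyautogen package and an OpenAI API key in the environment; the agent names, system messages, and reply limits are illustrative rather than a reproduction of my exact modifications.

        import os
        import autogen

        # One shared LLM configuration pointing at GPT-4, used by every agent.
        llm_config = {
            "config_list": [{"model": "gpt-4", "api_key": os.environ["OPENAI_API_KEY"]}],
        }

        reporter = autogen.AssistantAgent(
            name="reporter",
            system_message="You are a tech reporter planning a column about AI agent collaboration.",
            llm_config=llm_config,
            max_consecutive_auto_reply=3,   # keep the back-and-forth short
        )
        editor = autogen.AssistantAgent(
            name="editor",
            system_message="You are an editor who critiques and sharpens the reporter's plan.",
            llm_config=llm_config,
            max_consecutive_auto_reply=3,
        )

        # The reporter opens the conversation; the two agents then take turns replying.
        reporter.initiate_chat(editor, message="Let's outline a piece about multi-agent AI collaboration.")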

    Large language models like those behind ChatGPT often stumble over math problems because they work by providing statistically plausible text rather than applying rigorous logical reasoning. In a paper presented at an academic workshop in May, the researchers behind AutoGen show that having AI agents collaborate can mitigate that weakness.

    They found that two to four agents working together could solve fifth-grade math problems more reliably than one agent on its own. In their tests, teams were also able to reason out chess problems by talking them through, and they were able to analyze and refine computer code by talking to one another.

    Others have shown similar benefits when several different AI models—even those offered by corporate rivals—team up. In a project presented at the same workshop at a major AI conference called ICLR, a group from MIT and Google got OpenAI’s ChatGPT and Google’s Bard to work together by discussing and debating problems. They found that the duo were more likely to converge on a correct solution to problems together than when the bots worked solo. Another recent paper from researchers at UC Berkeley and the University of Michigan showed that having one AI agent review and critique the work of another could allow the supervising bot to upgrade the other agent’s code, improving its ability to use a computer’s web browser.

    Teams of LLMs can also be prompted to behave in surprisingly humanlike ways. A group from Google, Zhejiang University in China, and the National University of Singapore found that assigning AI agents distinct personality traits, such as “easy-going” or “overconfident,” can fine-tune their collaborative performance, either positively or negatively.

    And a recent article in The Economist rounds up several multi-agent projects, including one commissioned by the Pentagon’s Defense Advanced Research Projects Agency. In that experiment, a team of AI agents was tasked with searching for bombs hidden within a labyrinth of virtual rooms. While the multi-AI team was better at finding the imaginary bombs than a lone agent, the researchers also found that the group spontaneously developed an internal hierarchy. One agent ended up bossing the others around as they went about their mission.

    Graham Neubig, an associate professor at Carnegie Mellon University, who organized the ICLR workshop, is experimenting with multi-agent collaboration for coding. He says that the collaborative approach can be powerful but also can lead to new kinds of errors, because it adds more complexity. “It’s possible that multi-agent systems are the way to go, but it’s not a foregone conclusion,” Neubig says.

    People are already adapting the open source AutoGen framework in interesting ways, for instance creating simulated writers’ rooms to generate fiction ideas, and a virtual “business-in-a-box” with agents that take on different corporate roles. Perhaps it won’t be too long until the assignment my AI agents came up with needs to be written.


  • AI Is Your Coworker Now. Can You Trust It?

    Yet “it doesn’t seem very long before this technology could be used for monitoring employees,” says Elcock.

    Self-Censorship

    Generative AI does pose several potential risks, but there are steps businesses and individual employees can take to improve privacy and security. First, do not put confidential information into a prompt for a publicly available tool such as ChatGPT or Google’s Gemini, says Lisa Avvocato, vice president of marketing and community at data firm Sama.

    When crafting a prompt, be generic to avoid sharing too much. “Ask, ‘Write a proposal template for budget expenditure,’ not ‘Here is my budget, write a proposal for expenditure on a sensitive project,’” she says. “Use AI as your first draft, then layer in the sensitive information you need to include.”

    If you use it for research, avoid issues such as those seen with Google’s AI Overviews by validating what it provides, says Avvocato. “Ask it to provide references and links to its sources. If you ask AI to write code, you still need to review it, rather than assuming it’s good to go.”
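
    As a rough illustration of that prompting advice, the sketch below sends only the generic request to a public model and leaves the confidential figures to be added offline afterwards. It assumes the openai Python package, an API key in the environment, and the gpt-4o model name as stand-ins; it is not an endorsed workflow from any of the firms quoted here.

        from openai import OpenAI

        client = OpenAI()  # reads OPENAI_API_KEY from the environment

        # Generic prompt: no budget figures, project names, or other confidential details.
        response = client.chat.completions.create(
            model="gpt-4o",
            messages=[{"role": "user", "content": "Write a proposal template for budget expenditure."}],
        )

        template = response.choices[0].message.content
        print(template)  # review the draft, then add the sensitive numbers locally, not in another prompt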

    Microsoft has itself stated that Copilot needs to be configured correctly and the “least privilege”—the concept that users should only have access to the information they need—should be applied. This is “a crucial point,” says Prism Infosec’s Robinson. “Organizations must lay the groundwork for these systems and not just trust the technology and assume everything will be OK.”

    It’s also worth noting that ChatGPT uses the data you share to train its models, unless you turn it off in the settings or use the enterprise version.

    List of Assurances

    The firms integrating generative AI into their products say they’re doing everything they can to protect security and privacy. Microsoft is keen to outline security and privacy considerations in its Recall product and the ability to control the feature in Settings > Privacy & security > Recall & snapshots.

    Google says generative AI in Workspace “does not change our foundational privacy protections for giving users choice and control over their data,” and stipulates that information is not used for advertising.

    OpenAI reiterates how it maintains security and privacy in its products, while enterprise versions are available with extra controls. “We want our AI models to learn about the world, not private individuals—and we take steps to protect people’s data and privacy,” an OpenAI spokesperson tells WIRED.

    OpenAI says it offers ways to control how data is used, including self-service tools to access, export, and delete personal information, as well as the ability to opt out of use of content to improve its models. ChatGPT Team, ChatGPT Enterprise, and its API are not trained on data or conversations, and its models don’t learn from usage by default, according to the company.

    Either way, it looks like your AI coworker is here to stay. As these systems become more sophisticated and omnipresent in the workplace, the risks are only going to intensify, says Woollven. “We’re already seeing the emergence of multimodal AI such as GPT-4o that can analyze and generate images, audio, and video. So now it’s not just text-based data that companies need to worry about safeguarding.”

    With this in mind, people—and businesses—need to get in the mindset of treating AI like any other third-party service, says Woollven. “Don’t share anything you wouldn’t want publicly broadcasted.”


  • Chatbots Are Entering the Stone Age

    For all the bluster about generative artificial intelligence upending the world, the technology has yet to meaningfully transform white-collar work. Workers are dabbling with chatbots for tasks such as drafting emails, and companies are launching countless experiments, but office work hasn’t undergone a major AI reboot.

    Perhaps that’s only because we haven’t given chatbots like Google’s Gemini and OpenAI’s ChatGPT the right tools for the job yet; they’re generally restricted to taking in and spitting out text via a chat interface. Things might get more interesting in business settings as AI companies start deploying so-called “AI agents,” which can take action by operating other software on a computer or via the internet.

    Anthropic, a competitor to OpenAI, announced a major new product today that attempts to prove the thesis that tool use is needed for AI’s next leap in usefulness. The startup is allowing developers to direct its chatbot Claude to access outside services and software in order to perform more useful tasks. Claude can, for instance, use a calculator to solve the kinds of math problems that vex large language models; be required to access a database containing customer information; or be compelled to make use of other programs on a user’s computer when it would help.
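
    A minimal sketch of what handing Claude a calculator looks like through Anthropic's Messages API is below. The tool name, schema, and model string are illustrative assumptions, and a real integration would also execute the tool and send its result back to the model in a follow-up message.

        import anthropic

        client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

        # Describe a calculator tool that Claude may decide to call instead of guessing at arithmetic.
        calculator_tool = {
            "name": "calculator",
            "description": "Evaluate a basic arithmetic expression and return the numeric result.",
            "input_schema": {
                "type": "object",
                "properties": {"expression": {"type": "string", "description": "e.g. '2742 * 13'"}},
                "required": ["expression"],
            },
        }

        response = client.messages.create(
            model="claude-3-opus-20240229",   # illustrative model name
            max_tokens=1024,
            tools=[calculator_tool],
            messages=[{"role": "user", "content": "What is 2,742 multiplied by 13?"}],
        )

        # If Claude chooses the tool, the response contains a tool_use block with the arguments it wants run.
        for block in response.content:
            if block.type == "tool_use":
                print(block.name, block.input)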

    I’ve written before about how important AI agents that can take action may prove to be, both for the drive to make AI more useful and the quest to create more intelligent machines. Claude’s tool use is a small step toward the more useful AI helpers now being launched into the world.

    Anthropic has been working with several companies to help them build Claude-based helpers for their workers. Online tutoring company Study Fetch, for instance, has developed a way for Claude to use different features of its platform to modify the user interface and syllabus content a student is shown.

    Other companies are also entering the AI Stone Age. Google demonstrated a handful of prototype AI agents at its I/O developer conference earlier this month, among many other new AI doodads. One of the agents was designed to handle online shopping returns, by hunting for the receipt in a person’s Gmail account, filling out the return form, and scheduling a package pickup.

    Google has yet to launch its return-bot for use by the masses, and other companies are also moving cautiously. This is probably in part because getting AI agents to behave is tricky. LLMs do not always correctly identify what they are being asked to achieve, and can make incorrect guesses that break the chain of steps needed to successfully complete a task.

    Restricting early AI agents to a particular task or role in a company’s workflow may prove a canny way to make the technology useful. Just as physical robots are typically deployed in carefully controlled environments that minimize the chances they will mess up, keeping AI agents on a tight leash could reduce the potential for mishaps.

    Even those early use cases could prove extremely lucrative. Some big companies already automate common office tasks through what’s known as robotic process automation, or RPA. It often involves recording human workers’ onscreen actions and breaking them into steps that can be repeated by software. AI agents built on the broad capabilities of LLMs could allow a lot more work to be automated. IDC, an analyst firm, says that the RPA market is already worth a tidy $29 billion, but expects an infusion of AI to more than double that to around $65 billion by 2027.


  • The Low-Paid Humans Behind AI’s Smarts Ask Biden to Free Them From ‘Modern Day Slavery’

    AI projects like OpenAI’s ChatGPT get part of their savvy from some of the lowest-paid workers in the tech industry—contractors often in poor countries paid small sums to correct chatbots and label images. On Wednesday, 97 African workers who do AI training work or online content moderation for companies like Meta and OpenAI published an open letter to President Biden, demanding that US tech companies stop “systemically abusing and exploiting African workers.”

    Most of the letter’s signatories are from Kenya, a hub for tech outsourcing, whose president, William Ruto, is visiting the US this week. The workers allege that the practices of companies like Meta, OpenAI, and data provider Scale AI “amount to modern day slavery.” The companies did not immediately respond to a request for comment.

    A typical workday for African tech contractors, the letter says, involves “watching murder and beheadings, child abuse and rape, pornography and bestiality, often for more than 8 hours a day.” Pay is often less than $2 per hour, it says, and workers frequently end up with post-traumatic stress disorder, a well-documented issue among content moderators around the world.

    The letter’s signatories say their work includes reviewing content on platforms like Facebook, TikTok, and Instagram, as well as labeling images and training chatbot responses for companies like OpenAI that are developing generative-AI technology. The workers are affiliated with the African Content Moderators Union, the first content moderators union on the continent, and a group founded by laid-off workers who previously trained AI technology for companies such as Scale AI, which sells datasets and data-labeling services to clients including OpenAI, Meta, and the US military. The letter was published on the site of the UK-based activist group Foxglove, which promotes tech-worker unions and equitable tech.

    In March, the letter and news reports say, Scale AI abruptly banned people based in Kenya, Nigeria, and Pakistan from working on Remotasks, Scale AI’s platform for contract work. The letter says that these workers were cut off without notice and are “owed significant sums of unpaid wages.”

    “When Remotasks shut down, it took our livelihoods out of our hands, the food out of our kitchens,” says Joan Kinyua, a member of the group of former Remotasks workers, in a statement to WIRED. “But Scale AI, the big company that ran the platform, gets away with it, because it’s based in San Francisco.”

    Though the Biden administration has frequently described its approach to labor policy as “worker-centered,” the African workers’ letter argues that this has not extended to them, saying “we are treated as disposable.”

    “You have the power to stop our exploitation by US companies, clean up this work and give us dignity and fair working conditions,” the letter says. “You can make sure there are good jobs for Kenyans too, not just Americans.”

    Tech contractors in Kenya have filed lawsuits in recent years alleging that tech-outsourcing companies and their US clients such as Meta have treated workers illegally. Wednesday’s letter demands that Biden make sure that US tech companies engage with overseas tech workers, comply with local laws, and stop union-busting practices. It also suggests that tech companies “be held accountable in the US courts for their unlawful operations abroad, in particular for their human rights and labor violations.”

    The letter comes just over a year after 150 workers formed the African Content Moderators Union. Meta promptly laid off all of its nearly 300 Kenya-based content moderators, workers say, effectively busting the fledgling union. The company is currently facing three lawsuits from more than 180 Kenyan workers, demanding more humane working conditions, freedom to organize, and payment of unpaid wages.

    “Everyone wants to see more jobs in Kenya,” Kauna Malgwi, a member of the African Content Moderators Union steering committee, says. “But not at any cost. All we are asking for is dignified, fairly paid work that is safe and secure.”
