Tag: content moderation

  • The Low-Paid Humans Behind AI’s Smarts Ask Biden to Free Them From ‘Modern Day Slavery’

    AI projects like OpenAI’s ChatGPT get part of their savvy from some of the lowest-paid workers in the tech industry—contractors often in poor countries paid small sums to correct chatbots and label images. On Wednesday, 97 African workers who do AI training work or online content moderation for companies like Meta and OpenAI published an open letter to President Biden, demanding that US tech companies stop “systemically abusing and exploiting African workers.”

    Most of the letter’s signatories are from Kenya, a hub for tech outsourcing, whose president, William Ruto, is visiting the US this week. The workers allege that the practices of companies like Meta, OpenAI, and data provider Scale AI “amount to modern day slavery.” The companies did not immediately respond to a request for comment.

    A typical workday for African tech contractors, the letter says, involves “watching murder and beheadings, child abuse and rape, pornography and bestiality, often for more than 8 hours a day.” Pay is often less than $2 per hour, it says, and workers frequently end up with post-traumatic stress disorder, a well-documented issue among content moderators around the world.

    The letter’s signatories say their work includes reviewing content on platforms like Facebook, TikTok, and Instagram, as well as labeling images and training chatbot responses for companies like OpenAI that are developing generative-AI technology. The workers are affiliated with the African Content Moderators Union, the first content moderators union on the continent, and a group founded by laid-off workers who previously trained AI technology for companies such as Scale AI, which sells datasets and data-labeling services to clients including OpenAI, Meta, and the US military. The letter was published on the site of the UK-based activist group Foxglove, which promotes tech-worker unions and equitable tech.

    In March, the letter and news reports say, Scale AI abruptly banned people based in Kenya, Nigeria, and Pakistan from working on Remotasks, Scale AI’s platform for contract work. The letter says that these workers were cut off without notice and are “owed significant sums of unpaid wages.”

    “When Remotasks shut down, it took our livelihoods out of our hands, the food out of our kitchens,” says Joan Kinyua, a member of the group of former Remotasks workers, in a statement to WIRED. “But Scale AI, the big company that ran the platform, gets away with it, because it’s based in San Francisco.”

    Though the Biden administration has frequently described its approach to labor policy as “worker-centered,” the African workers’ letter argues that this has not extended to them, saying “we are treated as disposable.”

    “You have the power to stop our exploitation by US companies, clean up this work and give us dignity and fair working conditions,” the letter says. “You can make sure there are good jobs for Kenyans too, not just Americans.”

    Tech contractors in Kenya have filed lawsuits in recent years alleging that tech-outsourcing companies and their US clients such as Meta have treated workers illegally. Wednesday’s letter demands that Biden make sure that US tech companies engage with overseas tech workers, comply with local laws, and stop union-busting practices. It also suggests that tech companies “be held accountable in the US courts for their unlawful operations abroad, in particular for their human rights and labor violations.”

    The letter comes just over a year after 150 workers formed the African Content Moderators Union. Meta promptly laid off all of its nearly 300 Kenya-based content moderators, workers say, effectively busting the fledgling union. The company is currently facing three lawsuits from more than 180 Kenyan workers, demanding more humane working conditions, freedom to organize, and payment of unpaid wages.

    “Everyone wants to see more jobs in Kenya,” Kauna Malgwi, a member of the African Content Moderators Union steering committee, says. “But not at any cost. All we are asking for is dignified, fairly paid work that is safe and secure.”

  • Eventbrite Promoted Illegal Opioid Sales to People Searching for Addiction Recovery Help

    “Listings like these do not have a home on Eventbrite,” Chris Adams, the company’s head of platform product, tells WIRED in a statement. “This is a spam attack, coordinated by a few bad actors attempting to draw audiences to third-party sites.” Adams says Eventbrite is taking the issue “very seriously” and the “identified illegal and illicit activity has been removed.”

    Eventbrite’s help center says it uses a “combination of tools and processes” to detect content that violates its rules. These include, its pages say, machine learning that proactively flags content, a “rules-based” system, responses to user reports, and human review.
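
    Eventbrite hasn’t said how those layers fit together, but a common arrangement runs the cheap, deterministic checks first and routes ambiguous cases to humans. The sketch below is a hypothetical illustration of that pattern only; the rules, the stand-in classifier, and the thresholds are all invented, not Eventbrite’s.

    ```python
    # Hypothetical sketch of a layered moderation pipeline of the kind
    # Eventbrite's help center describes: rules, then an ML classifier,
    # then escalation to human review. All rules, thresholds, and the
    # stand-in classifier are invented for illustration.
    import re
    from dataclasses import dataclass

    BLOCKED_PATTERNS = [  # "rules-based" layer: cheap and deterministic
        re.compile(r"\bno\s+prescription\b", re.IGNORECASE),
        re.compile(r"\bdm\s+on\s+telegram\b", re.IGNORECASE),
    ]

    @dataclass
    class Decision:
        action: str   # "remove", "review", or "allow"
        reason: str

    def spam_score(text: str) -> float:
        """Stand-in for a trained ML classifier returning a spam probability."""
        suspicious = ("whatsapp", "telegram", "no prescription")
        hits = sum(word in text.lower() for word in suspicious)
        return min(1.0, 0.4 * hits)

    def moderate(listing_text: str, user_reports: int) -> Decision:
        # Layer 1: deterministic rules remove clear-cut violations.
        for pattern in BLOCKED_PATTERNS:
            if pattern.search(listing_text):
                return Decision("remove", f"rule:{pattern.pattern}")
        # Layer 2: the model flags probable spam for the human-review queue.
        score = spam_score(listing_text)
        if score >= 0.8:
            return Decision("review", f"ml_score={score:.2f}")
        # Layer 3: enough user reports also escalate to human review.
        if user_reports >= 3:
            return Decision("review", "user_reports")
        return Decision("allow", "clean")

    print(moderate("Buy oxycodone online, no prescription needed", 0))
    # removed by the "no prescription" rule
    ```

    Running rules before the model is a typical design choice: rules are cheap and auditable, while anything the model merely suspects goes to human review rather than automated removal.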

    “Our investigation determined this is abnormal activity, a misuse of the Eventbrite platform, and based on our findings, Eventbrite did not profit from these listings, and there have been no finalized ticket purchases identified,” Adams says.

    Eventbrite appears to have removed most, if not all, of the illicit listings that WIRED identified after we alerted the company to the issue. Because of the way WIRED collected the data, however, the thousands of listings found on Eventbrite are likely only the tip of the iceberg. WIRED obtained the data used for its analysis by collecting listings Eventbrite deemed “related” to hundreds of events found through simple keyword searches. These keyword searches and their related events likely do not capture the entirety of illicit events published on the platform.
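
    The collection method WIRED describes, seeding keyword searches and then expanding through each event’s “related” listings, is a standard snowball crawl. Here is a minimal sketch of the idea; the two fetch functions are fake stand-ins rather than a real Eventbrite API, and WIRED has not published its actual code.

    ```python
    # Minimal sketch of a snowball crawl: keyword seeds, then breadth-first
    # expansion through "related" events. The two fetch functions below are
    # fake stand-ins; a real reproduction would make HTTP calls instead.
    from collections import deque

    FAKE_SEARCH = {"oxycodone": ["e1", "e2"]}
    FAKE_RELATED = {"e1": ["e3"], "e2": ["e3", "e4"], "e3": ["e1"], "e4": []}

    def search_events(keyword: str) -> list[str]:
        return FAKE_SEARCH.get(keyword, [])

    def related_events(event_id: str) -> list[str]:
        return FAKE_RELATED.get(event_id, [])

    def snowball(keywords: list[str], max_events: int = 10_000) -> set[str]:
        """Breadth-first crawl: keyword seeds, then 'related' expansion."""
        seen: set[str] = set()
        queue = deque(eid for kw in keywords for eid in search_events(kw))
        while queue and len(seen) < max_events:
            eid = queue.popleft()
            if eid in seen:
                continue
            seen.add(eid)
            queue.extend(related_events(eid))  # follow the "related" edges
        return seen

    print(sorted(snowball(["oxycodone"])))  # ['e1', 'e2', 'e3', 'e4']
    ```

    A crawl like this sees only what the seeds can reach, which is why counts derived from it are a floor rather than a census.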

    Even within this limited dataset, our analysis found that an average of 169 illicit events were published daily.

    The vast majority of the listings WIRED found used common tactics, whether they pushed drugs, escort services, or online account details. The spammy pages were often listed as online “events.” The events do not actually happen; they serve instead as a way for those posting them to advertise their offerings online. Most of them were free; however, some tried to charge people to “attend” through Eventbrite. It is not clear whether anyone has paid for any of the events.

    Searching for various controlled substances, such as brand-name opioids, brought up results on Eventbrite. These “events” mostly pushed people away from the platform to online pharmacy websites, which say people can buy medicines without prescriptions.

    John Hertig, an associate professor at Butler University College of Pharmacy and Health Sciences, says there are thousands of online pharmacies operating at any time, and that the vast majority of them are illegal: their websites often sell drugs not approved by the FDA, or operate without a license in the countries they sell into.

    “The other major issue that we see in terms of illegality is not requiring a prescription,” Hertig says. “You see a lot of this: ‘easy, hassle free, simple process, no doctor needed.’ That’s illegal.” Typically, accounts claiming to sell medicines through unofficial platforms, such as those on Eventbrite, will not be doing so legitimately, Hertig says, and that raises risks around whether what they are selling is safe.

    In addition to directing people to websites, those claiming to sell illicit services on Eventbrite pushed people to chat privately on WhatsApp or Telegram. Our analysis identified as many as 60 unique Telegram accounts and 65 WhatsApp numbers in the dataset. WhatsApp spokesperson Joshua Breckman says the platform encourages users to report suspicious activity and that it will respond to valid law enforcement requests. Telegram did not respond to a request for comment.
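
    Tallies of unique handles typically come from a regex pass over the listing text followed by normalization and deduplication. The patterns below are rough, hypothetical approximations of such a pass, not WIRED’s actual extraction code.

    ```python
    # Approximate patterns for messaging handles advertised in listing
    # text; real listings are messier, so treat these as illustrative.
    import re

    TELEGRAM = re.compile(r"(?:t\.me/|telegram[\s:@]+)(\w{5,32})", re.IGNORECASE)
    WHATSAPP = re.compile(r"(?:whatsapp|wa\.me)[\s:/]*(\+?\d[\d\s\-]{7,14}\d)",
                          re.IGNORECASE)

    def extract_handles(listings: list[str]) -> tuple[set[str], set[str]]:
        telegram: set[str] = set()
        whatsapp: set[str] = set()
        for text in listings:
            telegram.update(m.lower() for m in TELEGRAM.findall(text))
            # Normalize numbers so "+1 555-012-3456" and "+15550123456" dedupe.
            whatsapp.update(re.sub(r"[\s\-]", "", m)
                            for m in WHATSAPP.findall(text))
        return telegram, whatsapp

    tg, wa = extract_handles(
        ["DM on Telegram @pharma_plug or WhatsApp +1 555-012-3456 to order"]
    )
    print(tg, wa)  # {'pharma_plug'} {'+15550123456'}
    ```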

  • Yoel Roth, Twitter’s Former Trust and Safety Chief, Is Trying to Clean Up Your Dating Apps

    There are things we can do. When our members tell us that they’ve had a negative interaction, whether it’s any type of physical safety risk, assault, financial fraud, we act on those reports immediately. That’s a lot of what my team is going to be doing. A second critical piece of that is working with law enforcement. In Colombia, around the world, we want to make sure that we are empowering local law enforcement to actually get bad guys off the streets and off of our apps as well. And we are proactively referring relevant information to law enforcement in cases where we think there’s a physical safety hazard.

    I really think the trust and safety industry, collectively, needs to start to approach these as shared problems rather than something that each company handles in isolation. If every company tries to solve a problem independently, you only have line of sight into what’s happening on your platform. We are much more effective when we come together as an industry to address risks.

    One of the things you wrote in your dissertation about dating apps that I found interesting from a design perspective was that—and I’m paraphrasing—you liked the idea of people having more of an open space to express themselves, versus the drop-down options and other preimposed structures within apps. Do you still feel that way? Why does that create a better experience?

    There’s no universal answer to any element of trust and safety or any element of product design. Personally, I think open text fields are better. I like writing, I like expressing myself creatively. But a lot of people don’t want to take the time to think about exactly the right word to explain things, so they’re going to want the option of just entering some of their information and using a drop-down.

    WIRED previously covered the trend of people preferring to use Google Docs instead of dating apps—just putting a link out there, sharing a public doc about themselves, sometimes entire chapters. You really do get a sense of who a person is from that.

    Right, you get a sense that they’re the type of person who writes a Google Doc about their potential dating life. Which, if I were dating right now, I’d probably be a person who writes a Google Doc about that stuff.

    Do you think these apps are really designed to be deleted?

    I do. I met my husband on an app.

    Right, coming from a person who had success! But really, in what way do you think they are actually designed to be deleted when the business model, which relies on people swiping continuously and paying monthly fees for a more “premium” experience, supports something entirely different?

    There are always going to be reasons that people enter or exit the market for dating or relationships. Some people will exit it because they find a partner and are in a monogamous relationship or a marriage and they choose not to meet or date anyone else. There are also lots of different relationship types and relationship structures. We want to make sure that there are apps available to people at every step on the journey, and it’s going to change over time.

    I think there are lots of moments where people will get what they’re looking for from one of our products, something that enriches their life, and then at a certain point, they’ll say, I got what I wanted from that and I’m ready for something different. Our business model fundamentally is about offering people tools to find connections, and that is going to look very different for people at different points in their life.

    Any last dating tips for people?

    If I had one tip, it’s don’t be afraid to show the weirder elements of your personality. The quirky, esoteric things that really make you who you are are the things that will help you find a match that is going to be exactly right for you.

  • The Dark Side of Open Source AI Image Generators

    Whether through the frowning high-definition face of a chimpanzee or a psychedelic, pink-and-red-hued doppelganger of himself, Reuven Cohen uses AI-generated images to catch people’s attention. “I’ve always been interested in art and design and video and enjoy pushing boundaries,” he says—but the Toronto-based consultant, who helps companies develop AI tools, also hopes to raise awareness of the technology’s darker uses.

    “It can also be specifically trained to be quite gruesome and bad in a whole variety of ways,” Cohen says. He’s a fan of the freewheeling experimentation that has been unleashed by open source image-generation technology. But that same freedom enables the creation of explicit images of women used for harassment.

    After nonconsensual images of Taylor Swift recently spread on X, Microsoft added new controls to its image generator. Open source models can be commandeered by just about anyone and generally come without guardrails. Despite the efforts of some hopeful community members to deter exploitative uses, the open source free-for-all is near-impossible to control, experts say.

    “Open source has powered fake image abuse and nonconsensual pornography. That’s impossible to sugarcoat or qualify,” says Henry Ajder, who has spent years researching harmful use of generative AI.

    Ajder says that even as it has become a favorite of researchers, creatives like Cohen, and academics working on AI, open source image-generation software has also become the bedrock of deepfake porn. Some tools based on open source algorithms are purpose-built for salacious or harassing uses, such as “nudifying” apps that digitally remove women’s clothes in images.

    But many tools can serve both legitimate and harassing use cases. One popular open source face-swapping program is used by people in the entertainment industry and is also the “tool of choice for bad actors” making nonconsensual deepfakes, Ajder says. The high-resolution image generator Stable Diffusion, developed by the startup Stability AI, claims more than 10 million users and ships with guardrails to prevent explicit image creation, as well as policies barring malicious use. But the company also open sourced a version of the image generator in 2022 that is customizable, and online guides explain how to bypass its built-in limitations.

    Meanwhile, smaller AI models known as LoRAs make it easy to tune a Stable Diffusion model to output images with a particular style, concept, or pose—such as a celebrity’s likeness or certain sexual acts. They are widely available on AI model marketplaces such as Civitai, a community-based site where users share and download models. There, one creator of a Taylor Swift plug-in has urged others not to use it “for NSFW images.” However, once downloaded, its use is out of its creator’s control. “The way that open source works means it’s going to be pretty hard to stop someone from potentially hijacking that,” says Ajder.
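
    Mechanically, applying a LoRA takes only a few lines with a library such as Hugging Face’s diffusers, which is part of why the tuning described here is so accessible. A minimal sketch follows; the base-model ID and adapter path are illustrative placeholders, not references to any specific published adapter.

    ```python
    # Minimal sketch of loading a LoRA adapter onto Stable Diffusion with
    # Hugging Face diffusers. The base-model ID and adapter path are
    # illustrative placeholders.
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",  # placeholder base model
        torch_dtype=torch.float16,
    ).to("cuda")

    # A LoRA is a small set of low-rank weight deltas; loading one re-biases
    # the model toward whatever style or concept the adapter was trained on.
    pipe.load_lora_weights("./watercolor-style-lora")  # hypothetical adapter

    image = pipe("a city street, in the adapter's style").images[0]
    image.save("out.png")
    ```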

    4chan, the image-based message board site with a reputation for chaotic moderation, is home to pages devoted to nonconsensual deepfake porn, WIRED found, made with openly available programs and AI models dedicated solely to sexual images. Message boards for adult images are littered with AI-generated nonconsensual nudes of real women, from porn performers to actresses like Cate Blanchett. WIRED also observed 4chan users sharing workarounds for NSFW images using OpenAI’s Dall-E 3.

    That kind of activity has inspired some users in communities dedicated to AI image-making, including on Reddit and Discord, to attempt to push back against the sea of pornographic and malicious images. Creators also express worry about the software gaining a reputation for NSFW images, encouraging others to report images depicting minors on Reddit and model-hosting sites.

  • Elon Musk’s Lawsuit Against a Group That Found Hate Speech on X Isn’t Going Well

    Soon after Elon Musk took control of Twitter, now called X, the platform faced a massive problem: Advertisers were fleeing. But that, the company alleges, was someone else’s fault. On Thursday that argument went before a federal judge, who seemed skeptical of the company’s allegations that a nonprofit’s research tracking hate speech on X had compromised user security, and that the group was responsible for the platform’s loss of advertisers.

    The dispute began in July when X filed suit against the Center for Countering Digital Hate, a nonprofit that tracks hate speech on social platforms and had warned that the platform was seeing an increase in hateful content. Musk’s company alleged that CCDH’s reports cost it millions in advertising dollars by driving away business. It also claimed that the nonprofit’s research had violated the platform’s terms of service and endangered users’ security by scraping posts using the login of another nonprofit, the European Climate Foundation.

    In response, CCDH filed a motion to dismiss the case, alleging that it was an attempt to silence a critic of X with burdensome litigation using what’s known as a “strategic lawsuit against public participation,” or SLAPP.

    On Thursday, lawyers for CCDH and X went before Judge Charles Breyer in the Northern California District Court for a hearing to decide whether X’s case against the nonprofit will be allowed to proceed. The outcome of the case could set a precedent for exactly how far billionaires and tech companies can go to silence their critics. “This is really a SLAPP suit disguised as a contractual suit,” says Alejandra Caraballo, clinical instructor at Harvard Law School’s Cyberlaw Clinic.

    Unforeseen Harms

    X alleges that the CCDH used the European Climate Foundation’s login to a social media listening tool called Brandwatch, which has a license to access X data through the company’s API. In the hearing Thursday, X’s attorneys argued that CCDH’s use of the tool had forced the company to spend time and money investigating the scraping, expenses for which it says it should be compensated on top of the advertising revenue it claims the nonprofit’s report scared away.

    Judge Breyer pressed X’s attorney, Jonathan Hawk, on that claim, questioning how scraping posts that were publicly available could violate users’ safety or the security of their data. “If [CCDH] had scraped and discarded the information, or scraped that number and never issued a report, or scraped and never told anybody about it. What would be your damages?” Breyer asked X’s legal team.

    Breyer also pointed out that it would have been impossible for anyone agreeing to Twitter’s terms of service in 2019, as the European Climate Foundation did when it signed up for Brandwatch, years before Musk’s purchase of the platform, to anticipate how its policies would drastically change later. He suggested it would be difficult to hold CCDH responsible for harms it could not have foreseen.

    “Twitter had a policy of removing tweets and individuals who engaged in neo-Nazi, white supremacists, misogynists, and spreaders of dangerous conspiracy theories. That was the policy of Twitter when the defendant entered into its terms of service,” Breyer said. “You’re telling me at the time they were excluded from the website, it was foreseeable that Twitter would change its policies and allow these people on? And I am trying to figure out in my mind how that’s possibly true, because I don’t think it is.”

  • The One Internet Hack That Could Save Everything

    The impact on the public sphere has been, to say the least, substantial. In removing so much liability, Section 230 forced a certain sort of business plan into prominence, one based not on uniquely available information from a given service, but on the paid arbitration of access and influence. Thus, we ended up with the deceptively named “advertising” business model—and a whole society thrust into a 24/7 competition for attention. A polarized social media ecosystem. Recommender algorithms that mediate content and optimize for engagement. We have learned that humans are most engaged, at least from an algorithm’s point of view, by rapid-fire emotions related to fight-or-flight responses and other high-stakes interactions. In enabling the privatization of the public square, Section 230 has inadvertently rendered impossible deliberation between citizens who are supposed to be equal before the law. Perverse incentives promote cranky speech, which effectively suppresses thoughtful speech.
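
    To make that mechanism concrete, here is a toy sketch of engagement-optimized ranking; the weights and posts are invented. A production recommender uses learned predictors, but the objective has the same shape: a single engagement score decides what rises, and nothing in it asks whether a post is informative or civil.

    ```python
    # Toy model of engagement-optimized ranking; weights and posts are
    # invented. The structural point: one predicted-engagement number
    # decides what surfaces, and outrage scores well on that number.
    def predicted_engagement(post: dict) -> float:
        return (1.0 * post["likes"]
                + 3.0 * post["comments"]       # arguments generate comments
                + 5.0 * post["angry_reacts"])  # outrage engages most reliably

    def rank_feed(posts: list[dict]) -> list[dict]:
        # Nothing here asks whether a post is true, useful, or civil.
        return sorted(posts, key=predicted_engagement, reverse=True)

    feed = rank_feed([
        {"id": "measured analysis", "likes": 50, "comments": 2, "angry_reacts": 0},
        {"id": "rage bait", "likes": 10, "comments": 30, "angry_reacts": 40},
    ])
    print([p["id"] for p in feed])  # ['rage bait', 'measured analysis']
    ```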

    And then there is the economic imbalance. Internet platforms that rely on Section 230 tend to harvest personal data for their business goals without appropriate compensation. Even when data ought to be protected or prohibited by copyright or some other method, Section 230 often effectively places the onus on the violated party through the requirement of takedown notices. That switch in the order of events related to liability is comparable to the difference between opt-in and opt-out in privacy. It might seem like a technicality, but it is actually a massive difference that produces substantial harms. For example, workers in information-related industries such as local news have seen stark declines in economic success and prestige. Section 230 makes a world of data dignity functionally impossible.

    To date, content moderation has too often been beholden to the quest for attention and engagement, regularly disregarding the stated corporate terms of service. Rules are often bent to maximize engagement through inflammation, which can mean doing harm to personal and societal well-being. The excuse is that this is not censorship, but is it really not? Arbitrary rules, doxing practices, and cancel culture have led to something hard to distinguish from censorship for the sober and well-meaning. At the same time, the amplification of incendiary free speech for bad actors encourages mob rule. All of this takes place under Section 230’s liability shield, which effectively gives tech companies carte blanche for a short-sighted version of self-serving behavior. Disdain for these companies—which found a way to be more than carriers, and yet not publishers—is the only thing everyone in America seems to agree on now.

    Trading a known for an unknown is always terrifying, especially for those with the most to lose. Since at least some of Section 230’s network effects were anticipated at its inception, it should have had a sunset clause. It did not. Rather than focusing exclusively on the disruption that axing 26 words would spawn, it is useful to consider potential positive effects. When we imagine a post-230 world, we discover something surprising: a world of hope and renewal worth inhabiting.

    In one sense, it’s already happening. Certain companies are taking steps on their own, right now, toward a post-230 future. YouTube, for instance, is diligently building alternative income streams to advertising, and top creators are getting more options for earning. Together, these voluntary moves suggest a different, more publisher-like self-concept. YouTube is ready for the post-230 era, it would seem. (On the other hand, a company like X, which leans hard into 230, has been destroying its value with astonishing velocity.) Plus, there have always been exceptions to Section 230. For instance, if someone enters private information, there are laws to protect it in some cases. That means dating websites, say, have the option of charging fees instead of relying on a 230-style business model. The existence of these exceptions suggests that more examples would appear in a post-230 world.

  • A Sudanese Paramilitary Group Accused of Ethnic Cleansing Is Still Tweeting Through It

    “It’s difficult to say what the general audience is for the RSF,” says Tessa Knight, a researcher at the DFRLab and author of the report. “But the fact that they have translated a lot of work into English does indicate that they’re likely aware of the fact that people who are looking at their content on Twitter don’t speak Arabic, meaning they’re potentially targeting an international audience.”

    Last year, both YouTube and Meta removed the pages belonging to the RSF and Hemedti from their platforms. YouTube did so after Suliman contacted it. A new Facebook page for the RSF appears to have been started in December, but after WIRED reached out to Meta to ask about the page, Meta removed it. Meta spokesperson Corey Chambliss confirmed to WIRED that “the RSF and its leaders have been removed from our platforms for violating our Dangerous Organizations and Individuals policy.”

    But X, says Suliman, never responded to him. Since taking over the platform in November 2022, Elon Musk has gutted many of the teams responsible for moderation, the work that keeps hate speech, violence, and nudity off the platform, leaving very few people for outside researchers and civil society groups to reach out to.

    According to X’s policies, the platform prohibits “terrorist organizations, violent extremist groups, perpetrators of violent attacks, or individuals who affiliate with and promote their illicit activities.” A former Twitter employee, who asked to remain anonymous, told WIRED that even before Musk’s takeover, an organization like the RSF would have fallen into a gray area for the platform, because the US does not consider the RSF a terrorist organization, a designation that weighs heavily in which groups most social platforms, including X, treat as dangerous.

    “Twitter allowed some violent organizations on the platform,” the former employee said. “For instance, the Taliban had a Twitter account even before they came to power in 2021.”

    “Individual content would be grounds for removal, and if there’s enough content removed, the account can be taken down,” says the former employee. “But it’s treated generally like any other account until that point.”

    X did not respond to a request for comment.

    At the beginning of the conflict, Knight says, it seemed most of the RSF’s social media presence was geared toward trying to control the international narrative “to make it impossible to ascertain what was actually going on.”

    This isn’t the group’s first attempt at scrubbing its image. In 2019, the RSF contracted a Canadian public relations firm to polish Hemedti’s image, as well as to help the new military government firm up new oil contracts and lobby for a meeting with then-president Donald Trump.

    “A lot of people, a lot of activists have tried to contact Twitter to have the RSF account removed, in a similar manner to contacting YouTube and contacting Meta,” says Knight. “And they’re still online, so nothing has really come of that.”
