Tag: porn

  • The Billion-Dollar Adult Streaming Industry Is Fueled by Horrific Labor Abuses

    “When we were talking with workers, they just wanted to get back to the cockroaches, how the studio owner charges them for toilet paper or makes them work when they’re on their period. I couldn’t get people to talk to me about platforms, and that’s completely valid because of course you are mad at the guy you know,” Killbride tells WIRED. “But there’s a whole other layer that has been left completely invisible. This is a billion-dollar industry that has been able to excuse itself from rebuke.”

    WIRED attempted to contact BongaCams, Chaturbate, LiveJasmin, and Stripchat to request comment about the research findings. None responded.

    HRW’s report outlines crucial recommendations for improving conditions at both the studio and platform levels. These include occupational safety standards for studios, enforced with regular inspections. Models must be able to take breaks and receive a minimum wage for their work, and studio management should not force models to perform specific sex acts or agree, on the models’ behalf, that they will perform any act. Additionally, models should have access to a confidential reporting mechanism so they can notify law enforcement or other authorities about workplace violations.

    Developing recommendations for the platforms themselves is even more nuanced. Killbride says that most, if not all, of the popular adult streaming platforms have stringent authentication requirements for creating accounts and specifically prohibit studio owners, or anyone else, from accepting terms of service on behalf of someone else. In practice, though, HRW researchers claim the companies are not doing enough to offer those terms of service in a simple, understandable format in a variety of languages, including Spanish.

    Platforms also need to provide channels through which content creators can report violations and receive a timely response, the researchers say. And, crucially, platforms should establish policies that enable models to take ownership of their accounts and transfer them away from a studio. Researchers found that the status quo on many platforms involves policy language that may confuse users, or technical complications that content creators say prevent them from asserting ownership of their accounts.

    On top of everything else, the stakes are particularly high for account ownership issues, because the researchers found that studios often use “recycled” accounts—those that were authenticated and established by one cammer and then retained by a studio—to circumvent minimum age requirements and stream child sexual abuse material.

    “We found that although the platforms are quite strict and have completely clear policies about not streaming kids, the studios do still manage to hire and stream children using fake IDs or, more commonly, recycled accounts,” Killbride says. “Our research was all with adults, but many people we talked to started streaming as kids when they were 13 to 17.”

    Killbride emphasizes that the situation reflects an important tenet of sex worker advocacy and labor reform in general: Listening to workers about their needs and the protections that would help them do their jobs most effectively and equitably also, simultaneously, protects other vulnerable populations. In this case, by allowing cammers to control and transfer their accounts and their followings, the adult streaming industry could also drastically reduce the prevalence of child sexual abuse material.

  • OnlyFans Models Are Using AI Impersonators to Keep Up With Their DMs

    One of the more persistent concerns in the age of AI is that the robots will take our jobs. The extent to which this fear is founded remains to be seen, but we’re already witnessing some level of replacement in certain fields. Even niche occupations are in jeopardy. For example, the world of OnlyFans chatters is already getting disrupted.

    What are OnlyFans chatters, you say? Earlier this year, WIRED published a fascinating investigation into the world of gig workers who get paid to impersonate top-earning OnlyFans creators in online chats with their fans. Within the industry, they’re called “chatters.”

    A big part of the appeal of OnlyFans—or so I’m told—is that its creators appear to directly engage with their fans, exchanging messages and sometimes talking for hours. Relationship simulation is as crucial an ingredient to its success, basically, as titillation.

    Of course, a single creator with thousands of ongoing DM conversations has only so many hours in a day. To manage the deluge of amorous messages, it’s become commonplace to outsource the conversations to “chatters” paid to sub in for the actual talent.

    These chatters used to mainly be contractors from the Philippines, Pakistan, India, and other countries with substantially lower wage expectations than the US. But, increasingly, human chatters are getting replaced by AI-generated stand-ins.

    A number of different startups now sell access to these AI chatters and other generative AI tools—and they say business is booming.

    “A lot of creators were like, hey, there’s a need,” says Kunal Anand, the founder of a startup offering an AI OnlyFans chatting service called ChatPersona. “We built our own model with data we got from a lot of creators’ chats.”

    Since launching last year, ChatPersona has attracted around 6,000 customers, a mix of individuals and agencies, according to Anand.

    Anand says that ChatPersona doesn’t technically violate OnlyFans’ terms of service because it requires a human in the loop to press “send” on the messages its AI chatters generate. (It has previously been reported that OnlyFans banned the use of AI chatbots, although its current terms of service do not mention AI chatters.)
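
    ChatPersona has not published how this works. As a generic, minimal sketch of a human-in-the-loop flow (every name below is hypothetical, not ChatPersona’s actual API), the model drafts each reply, but nothing goes out until a person approves it:

    ```python
    # Generic human-in-the-loop sketch, not ChatPersona's actual code:
    # an AI drafts the reply, but a human must press "send."

    def draft_reply(fan_message: str) -> str:
        # Stand-in for a call to a fine-tuned chat model.
        return f"Aww, thanks for writing! You said: {fan_message!r}"

    def send(message: str) -> None:
        # Stand-in for posting the message to the platform.
        print(f"SENT: {message}")

    draft = draft_reply("Hey, did you see my last tip?")
    print(f"DRAFT: {draft}")
    if input("Approve and send? [y/N] ").strip().lower() == "y":
        send(draft)  # the human, not the model, presses send
    ```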

    OnlyFans did not respond to repeated requests for comment.

    The field is already fairly crowded. Some of the better-known tools have on-the-nose names like FlirtFlow, ChatterCharms, and Botly. Another competitor, the relatively generically named Supercreator, has a suite of AI tools, from AI-generated scripts to an assistant called “Inbox Copilot” that algorithmically sorts simps, moving “spenders” to the top of the list and ignoring “freeloaders.”

    Eden, a former OnlyFans creator who now runs a boutique agency called Heiss Talent (and who would only speak on the record using her first name, citing privacy concerns), is an enthusiastic adopter of this tech. She represents five creators and says they all use Supercreator’s AI tools. “It’s an insane increase in sales, because you can target people based on their spending,” she says.

  • The Sticky Dilemmas of Pornhub’s Next Chapter

    Videos of minors. Illegal data collection. Lack of oversight. Lawsuits. Problems have dogged the popular porn site for years. Is its promise of transparency enough for a reset?

  • The Toymaker Who Wants to Be the Next Willy Wonka of Sex Tech

    But where Guo, who is 35, sometimes falls short in imagination, he more than makes up for in vigilance. “Users expect and deserve products that meet stringent safety standards, and any deviation can damage a brand’s reputation irrevocably,” he wrote in an XBIZ editorial in September. “Partner with trusted white-label manufacturers rather than gamble on the unknowns.”

    When I ask Guo about the editorial, he stresses that the success of sex tech is determined as much by the innovation in the products as by their quality. “We want to be more of a bridge from human to human,” Guo says, “not just from toy to human.”

    Even with promising market projections—another estimate goes so far as to predict sales could surpass $121 billion by 2030—industry analysts are not convinced that the future of sex tech is in toys.

    It’s a “very oversaturated market that is now avoided by many,” says Olena Petrosyuk, a partner at the consulting firm Waveup. This year, she adds, investors “are looking away from ‘commoditized’ trends”—sex toys, but also sex content and social platforms. “Many failed to prove the economics and scale. The category is still fairly stigmatized,” she says. “OnlyFans being a massive exception.”

    So what do consumers want? Petrosyuk says wellness, AI, and immersive realities are hot right now. “Practically every new sex tech startup is thinking in terms of AI use cases,” she says. “If it’s AI toys—companies are looking into how they can anticipate and respond to the user’s needs. If it’s robotics—we see companies looking into sex bots. If it’s content—it’s hyperpersonalized sex personas.”

    Guo tells me he is not fazed by talk of AI sex robots—“a low-volume business,” in his estimation—because many people cannot afford the high price tag. Continued success, he believes, will come from expanding the company’s themed collections. OEJ works directly with US and Canadian distributors; it is not a direct-to-consumer business, though he says customers do occasionally order via the online store.

    Although ecommerce is the industry standard in retail and electronics, taking more of an old-school approach works for Guo. Next year, OEJ plans to launch a Zodiac collection: 12 unique toys, one for each astrological sign. It’s an appeal to the Co–Star fanatics of Gen Z. “Every generation is different,” he says.

    The company’s mostly nonexistent social media presence only seems to add to its Wonka-like mystery. “We’re just bad at it,” Jerry Chen, an operations assistant, says. “We’re really focused on production.”

    For now, that business model seems to be a hit. Our Erotic Journey recently won the “Best Pleasure Product Manufacturer—Small” prize at the 2023–2024 AVN Awards in Las Vegas, a litmus test for newbie brands in the adult content world. OEJ also received the O Award for Outstanding New Product for “Sexy Pot,” Guo’s marijuana-leaf-shaped vibrator, a customer favorite.

    Clearly wanting to capitalize on its unexpected success, Guo says, “It’s time we gave it a sister or brother.”

  • Could AI and Deepfakes Sway the US Election?

    Leah Feiger: All right, that’s a good one. Thank you, Tori. Will, what do you have for us?

    Will Knight: Wow, I don’t know if I can really compete with RFK, but as a good CIA operative, I’m going to promote something from the weirder corners of AI. AI and philosophy, I guess. So there’s this thing called Roko’s basilisk. The basilisk is a mythological serpent that could kill you if you looked into its eyes. And so there was this thought experiment someone posted on an AI forum saying that a superintelligence in the future would be incentivized to create a simulation, in which maybe we all exist, and it would be incentivized to torture anybody who worked against, or even thought about working against, it coming into being. So at one point in one of these …

    Leah Feiger: Incredible.

    Will Knight: … forums, they banned talk of this thought experiment, Roko’s basilisk. The idea was that if you even thought about it, it could be dangerous, which is particularly bananas.

    Leah Feiger: That is so funny. What forums is this proliferating on, or not proliferating on?

    Will Knight: This was on LessWrong, which is a very famous forum dedicated to AI risks and alignment and—

    Leah Feiger: How often do you personally think about Roko’s basilisk?

    Will Knight: Well, I actually only discovered it recently, and I try not to think about it just in case. It’s like Pascal’s wager, isn’t it? It’s just sort of playing the odds that superintelligence will come into being, so you have to try and make it come into being. Yeah, it’s completely mad.

    Leah Feiger: Oh, that’s a very good one. OK. Oh, actually, this is a little bit hard this week, but I got to go with Tori. CIA assets, here we go.

    Vittoria Elliott: Finally. Did the Ravens put me over the edge? I must know.

    Leah Feiger: The Ravens did put you over the edge. I liked it, and it was part of, I just saw how much you were working for this, and yeah, it was an A for effort and an A for execution. Good stuff.

    Vittoria Elliott: Thank you.

    Leah Feiger: And partially, I can’t give the win to something that I’m not allowed to think about ever again. Tori and Will, thank you so much for joining us. You were excellent guests.

    Vittoria Elliott: Thanks, Leah.

    Will Knight: Thanks for having me.

    Leah Feiger: Thanks for listening to WIRED Politics Lab. If you like what you heard today, make sure to follow the show and rate it on your podcast app of choice. We also have a newsletter, which Makena Kelly writes each week. The link to the newsletter and the WIRED reporting we mentioned today are in the show notes. If you’d like to get in touch with us with any questions, comments, or show suggestions, please, please write to [email protected]. That’s [email protected]. We’re so excited to hear from you. WIRED Politics Lab is produced by Jake Harper. Pran Bandi is our studio engineer. Amar Lal mixed this episode. Stephanie Kariuki is our executive producer. Chris Bannon is global head of audio at Condé Nast, and I’m your host, Leah Feiger. We’ll be back in your feeds with a new episode next week.

  • The US Needs Deepfake Porn Laws. These States Are Leading the Way

    Last year, WIRED reported that deepfake pornography is only increasing, and researchers estimate that 90 percent of deepfake videos are pornographic, the vast majority of them nonconsensual porn of women. But despite how pervasive the issue is, Kaylee Williams, a researcher at Columbia University who has been tracking nonconsensual deepfake legislation, says she has seen legislators more focused on political deepfakes.

    “More states are interested in protecting electoral integrity in that way than they are in dealing with the intimate image question,” she says.

    Matthew Bierlein, a Republican state representative in Michigan, who cosponsored the state’s package of nonconsensual deepfake bills, says that he initially came to the issue after exploring legislation on political deepfakes. “Our plan was to make [political deepfakes] a campaign finance violation if you didn’t put disclaimers on them to notify the public.” Through his work on political deepfakes, Bierlein says, he began working with Democratic representative Penelope Tsernoglou, who helped spearhead the nonconsensual deepfake bills.

    At the time, in January, nonconsensual deepfakes of Taylor Swift had just gone viral, and the subject was widely covered in the news. “We thought that the opportunity was the right time to be able to do something,” Bierlein says. And Bierlein says that he felt Michigan was in a position to be a regional leader in the Midwest because, unlike some of its neighbors, it has a full-time legislature with well-paid staffers (most states don’t). “We understand that it’s a bigger issue than just a Michigan issue. But a lot of things can start at the state level,” he says. “If we get this done, then maybe Ohio adopts this in their legislative session, maybe Indiana adopts something similar, or Illinois, and that can make enforcement easier.”

    But what the penalties for creating and sharing nonconsensual deepfakes are—and who is protected—can vary widely from state to state. “The US landscape is just wildly inconsistent on this issue,” says Williams. “I think there’s been this misconception lately that all these laws are being passed all over the country. I think what people are seeing is that there have been a lot of laws proposed.”

    Some states allow for civil and criminal cases to be brought against perpetrators, while others might only provide for one of the two. Laws like the one that recently took effect in Mississippi, for instance, focus on minors. Over the past year or so, there has been a spate of instances of middle and high schoolers using generative AI to make explicit images and videos of classmates, particularly girls. Other laws focus on adults, with legislators essentially updating existing laws banning revenge porn.

    Unlike laws that focus on nonconsensual deepfakes of minors, on which Williams says there is a broad consensus that they are an “inherent moral wrong,” legislation around what is “ethical” when it comes to nonconsensual deepfakes of adults is “squishier.” In many cases, laws and proposed legislation require proving intent: that the goal of the person making and sharing the nonconsensual deepfake was to harm its subject.

  • Google Cracks Down on Explicit Deepfakes

    A few weeks ago, a Google search for “deepfake nudes jennifer aniston” surfaced at least seven highly ranked results that purported to have explicit, AI-generated images of the actress. Now they have vanished.

    Google product manager Emma Higham says that new adjustments to how the company ranks results, which have been rolled out this year, have already cut exposure to fake explicit images by over 70 percent on searches seeking that content about a specific person. Where problematic results once may have appeared, Google’s algorithms aim to promote news articles and other non-explicit content. The Aniston search now returns articles such as “How Taylor Swift’s Deepfake AI Porn Represents a Threat” and other links, like an Ohio attorney general warning about “deepfake celebrity-endorsement scams” that target consumers.

    “With these changes, people can read about the impact deepfakes are having on society, rather than see pages with actual non-consensual fake images,” Higham wrote in a company blog post on Wednesday.

    The ranking change follows a WIRED investigation this month that revealed that in recent years Google management rejected numerous ideas proposed by staff and outside experts to combat the growing problem of intimate portrayals of people spreading online without their permission.

    While Google has made it easier to request removal of unwanted explicit content, victims and their advocates have urged more proactive steps. But the company has tried to avoid becoming too much of a regulator of the internet or harming access to legitimate porn. At the time, a Google spokesperson said in response that multiple teams were working diligently to bolster safeguards against what it calls nonconsensual explicit imagery (NCEI).

    The widening availability of AI image generators, including some with few restrictions on their use, has led to an uptick in NCEI, according to victims’ advocates. The tools have made it easy for just about anyone to create spoofed explicit images of any individual, whether that’s a middle school classmate or a mega-celebrity.

    In March, a WIRED analysis found Google had received over 13,000 demands to remove links to a dozen of the most popular websites hosting explicit deepfakes. Google removed results in around 82 percent of the cases.

    As part of Google’s new crackdown, Higham says that the company will begin applying three measures it already uses to reduce the discoverability of real but unwanted explicit images to images that are synthetic and unwanted. After Google honors a takedown request for a sexualized deepfake, it will then try to keep duplicates out of results. It will also filter explicit images from results for queries similar to those cited in the takedown request. And finally, websites subject to “a high volume” of successful takedown requests will face demotion in search results.

    “These efforts are designed to give people added peace of mind, especially if they’re concerned about similar content about them popping up in the future,” Higham wrote.
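
    Google has not published the mechanics of that demotion. As a toy illustration only (not Google’s actual system; the threshold and scoring curve below are invented), a site-level demotion signal keyed to successful takedown volume might look like this:

    ```python
    # Toy sketch of takedown-volume-based demotion. Not Google's system;
    # all numbers here are invented for illustration.
    from collections import Counter

    successful_takedowns = Counter()  # domain -> honored takedown requests

    def record_takedown(domain: str) -> None:
        successful_takedowns[domain] += 1

    def demotion_factor(domain: str, threshold: int = 100) -> float:
        """Multiplier applied to a site's ranking score; < 1.0 demotes."""
        count = successful_takedowns[domain]
        if count <= threshold:
            return 1.0  # below the threshold, ranking is untouched
        return threshold / count  # demote progressively past the threshold

    for _ in range(500):
        record_takedown("deepfake-site.example")
    print(demotion_factor("deepfake-site.example"))  # 0.2
    ```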

    Google has acknowledged that the measures don’t work perfectly, and former employees and victims’ advocates have said they could go much further. The search engine prominently warns people in the US looking for naked images of children that such content is unlawful. The warning’s effectiveness is unclear, but it’s a potential deterrent supported by advocates. Yet, despite laws against sharing NCEI, similar warnings don’t appear for searches seeking sexual deepfakes of adults. The Google spokesperson has confirmed that this will not change.

  • OpenAI’s latest blunder shows the challenges facing Chinese AI models

    In fact, among the few long Chinese tokens in GPT-4o that aren’t either pornography or gambling nonsense, two are “socialism with Chinese characteristics” and “People’s Republic of China.” The presence of these phrases suggests that a significant part of the training data actually is from Chinese state media writings, where formal, long expressions are extremely common.

    OpenAI has historically been very tight-lipped about the data it uses to train its models, and it probably will never tell us how much of its Chinese training database is state media and how much is spam. (OpenAI didn’t respond to MIT Technology Review’s detailed questions sent on Friday.)

    But it is not the only company struggling with this problem. People inside China who work in its AI industry agree there’s a lack of quality Chinese text data sets for training LLMs. One reason is that the Chinese internet used to be, and largely remains, divided up by big companies like Tencent and ByteDance. They own most of the social platforms and aren’t going to share their data with competitors or third parties to train LLMs. 

    In fact, this is also why search engines, including Google, kinda suck when it comes to searching in Chinese. Since WeChat content can only be searched on WeChat, and content on Douyin (the Chinese TikTok) can only be searched on Douyin, this data is not accessible to a third-party search engine, let alone an LLM. But these are the platforms where actual human conversations are happening, instead of some spam website that keeps trying to draw you into online gambling.

    The lack of quality training data is a much bigger problem than the failure to filter out the porn and general nonsense in GPT-4o’s token-training data. If there isn’t an existing data set, AI companies have to put in significant work to identify, source, and curate their own data sets and filter out inappropriate or biased content. 

    It doesn’t seem OpenAI did that, which in fairness makes some sense, given that people in China can’t use its AI models anyway. 

    Still, there are many people living outside China who want to use AI services in Chinese. And they deserve a product that works properly as much as speakers of any other language do. 

    How can we solve the problem of the lack of good Chinese LLM training data? Tell me your idea at [email protected].

  • GPT-4o’s Chinese token-training data is polluted by spam and porn websites

    The new tokenizer has 200,000 tokens in total, and about 25% of the tokens are in non-English languages, says Deedy Das, an AI investor at Menlo Ventures. He used language filters to count the number of tokens in different languages, and the top languages, besides English, are Russian, Arabic, and Vietnamese.

    “So the tokenizer’s main impact, in my opinion, is you get the cost down in these languages, not that the quality in these languages goes dramatically up,” Das says. When an LLM has better and longer tokens in non-English languages, they can analyze the prompts faster and charge the users less for the same answer. With the new tokenizer, “you’re looking at almost four times cost reduction,” he says.
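
    Das’s four-times figure is easy to sanity-check with OpenAI’s open-source tiktoken library, which ships both tokenizers. A minimal sketch (the sample sentences are mine; exact ratios vary by text):

    ```python
    # Compare token counts, a rough proxy for API cost, between GPT-4's
    # cl100k_base tokenizer and GPT-4o's o200k_base tokenizer.
    # Requires: pip install tiktoken
    import tiktoken

    old_enc = tiktoken.get_encoding("cl100k_base")  # GPT-4 / GPT-3.5
    new_enc = tiktoken.get_encoding("o200k_base")   # GPT-4o

    samples = {
        "English": "The weather is lovely today.",
        "Russian": "Сегодня прекрасная погода.",
        "Vietnamese": "Hôm nay thời tiết rất đẹp.",
    }

    for lang, text in samples.items():
        n_old = len(old_enc.encode(text))
        n_new = len(new_enc.encode(text))
        print(f"{lang}: {n_old} -> {n_new} tokens ({n_old / n_new:.1f}x)")
    ```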

    Das, who also speaks Hindi and Bengali, took a look at the longest tokens in those languages. The tokens reflect the conversations happening in those languages, so they include words like “Narendra” or “Pakistan.” Beyond those, the list looks similar to a list of common long words in English, like prime minister, university, and international. Nor do those tokens exhibit the problems seen in the Chinese ones.

    That likely reflects the training data in those languages, Das says, “My working theory is the websites in Hindi and Bengali are very rudimentary. It’s like [mostly] news articles. So I would expect this to be the case. There are not many spam bots and porn websites trying to happen in these languages. It’s mostly going to be in English.”

    Polluted data and a lack of cleaning

    However, things are drastically different in Chinese. According to multiple researchers who have looked into the new library of tokens used for GPT-4o, the longest tokens in Chinese are almost exclusively spam words used in pornography, gambling, and scamming contexts. Even shorter tokens, such as three-character Chinese words, are heavily concentrated in the same topics.
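
    That claim is straightforward to reproduce: the o200k_base vocabulary is public through OpenAI’s tiktoken library, so anyone can dump the tokens and sort the Chinese entries by length. A rough sketch (the CJK filter is a deliberate simplification):

    ```python
    # List the longest Chinese tokens in GPT-4o's o200k_base vocabulary,
    # the kind of inspection the researchers describe.
    # Requires: pip install tiktoken
    import tiktoken

    enc = tiktoken.get_encoding("o200k_base")

    def is_cjk(s: str) -> bool:
        # Crude check: any character in the main CJK Unified Ideographs block.
        return any("\u4e00" <= ch <= "\u9fff" for ch in s)

    chinese_tokens = []
    for i in range(enc.n_vocab):
        try:
            s = enc.decode_single_token_bytes(i).decode("utf-8")
        except (KeyError, UnicodeDecodeError):
            continue  # skip unassigned ids and partial UTF-8 sequences
        if is_cjk(s):
            chinese_tokens.append(s)

    # The longest entries are where the spam phrases surface.
    for t in sorted(chinese_tokens, key=len, reverse=True)[:20]:
        print(len(t), t)
    ```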

    “The problem is clear: the corpus used to train [the tokenizer] is not clean. The English tokens seem fine, but the Chinese ones are not,” says Cai from Princeton University. Crawling spam and including it in training data is not rare, but usually, there will be significant effort taken to clean up the data before it’s used. “It’s possible that they didn’t do proper data clearing when it comes to Chinese,” he says.

    The content of these Chinese tokens could suggest that they have been polluted by a specific phenomenon: websites hijacking unrelated content in Chinese or other languages to boost spam messages. 

    These messages are often advertisements for pornography videos and gambling websites. They could be real businesses or merely scams. And the language is inserted into content farm websites, or sometimes legitimate websites, so that it can be indexed by search engines, circumvent spam filters, and surface in random searches. For example, Google indexed one search result page on a US National Institutes of Health website that lists a porn site in Chinese. The same site name also appeared in at least five Chinese tokens in GPT-4o.

  • OpenAI Is ‘Exploring’ How to Responsibly Generate AI Porn

    OpenAI released draft documentation Wednesday laying out how it wants ChatGPT and its other AI technology to behave. Part of the lengthy Model Spec document discloses that the company is exploring a leap into porn and other explicit content.

    OpenAI’s usage policies currently prohibit sexually explicit or even suggestive materials, but a “commentary” note on the part of the Model Spec related to that rule says the company is considering how to permit such content.

    “We’re exploring whether we can responsibly provide the ability to generate NSFW content in age-appropriate contexts through the API and ChatGPT,” the note says, using a colloquial term for content considered “not safe for work.” “We look forward to better understanding user and societal expectations of model behavior in this area.”

    The Model Spec document says NSFW content “may include erotica, extreme gore, slurs, and unsolicited profanity.” It is unclear if OpenAI’s explorations of how to responsibly make NSFW content envisage loosening its usage policy only slightly, for example to permit generation of erotic text, or more broadly to allow descriptions or depictions of violence.

    In response to questions from WIRED, OpenAI spokesperson Grace McGuire said the Model Spec was an attempt to “bring more transparency about the development process and get a cross section of perspectives and feedback from the public, policymakers, and other stakeholders.” She declined to share details of what OpenAI’s exploration of explicit content generation involves or what feedback the company has received on the idea.

    Earlier this year, OpenAI’s chief technology officer, Mira Murati, told The Wall Street Journal that she was “not sure” if the company would in future allow depictions of nudity to be made with the company’s video generation tool Sora.

    AI-generated pornography has quickly become one of the biggest and most troubling applications of the type of generative AI technology OpenAI has pioneered. So-called deepfake porn—explicit images or videos made with AI tools that depict real people without their consent—has become a common tool of harassment against women and girls. In March, WIRED reported on what appear to be the first US minors arrested for distributing AI-generated nudes without consent, after Florida police charged two teenage boys for making images depicting fellow middle school students.

    “Intimate privacy violations, including deepfake sex videos and other nonconsensual synthesized intimate images, are rampant and deeply damaging,” says Danielle Keats Citron, a professor at the University of Virginia School of Law who has studied the problem. “We now have clear empirical support showing that such abuse costs targeted individuals crucial opportunities, including to work, speak, and be physically safe.”

    Citron calls OpenAI’s potential embrace of explicit AI content “alarming.”

    As OpenAI’s usage policies prohibit impersonation without permission, explicit nonconsensual imagery would remain banned even if the company did allow creators to generate NSFW material. But it remains to be seen whether the company could effectively moderate explicit generation to prevent bad actors from using the tools. Microsoft made changes to one of its generative AI tools after 404 Media reported that it had been used to create explicit images of Taylor Swift that were distributed on the social platform X.

    Additional reporting by Reece Rogers
