Tag: facebook

  • Germany’s Far-Right Party Is Running Hateful Ads on Facebook and Instagram

    Earlier this month, a German court ruled that the country’s nationalist far-right party, Alternative for Germany (AfD), was potentially “extremist” and could warrant surveillance by the country’s intelligence apparatus.

    Campaign ads placed by AfD have been allowed to appear on Facebook and Instagram anyway, according to a new report from the nonprofit advocacy organization Ekō shared exclusively with WIRED. Researchers found 23 ads from the party on Facebook and Instagram that together accrued 472,000 views and appear to violate Meta’s own policies around hate speech.

    The ads push the narrative that immigrants are dangerous and a burden on the German state ahead of the European Union’s elections in June.

    One ad placed by AfD politician Gereon Bollmann asserts that Germany has seen “an explosion of sexual violence” since 2015, specifically blaming immigrants from Turkey, Syria, Afghanistan, and Iraq. The ad was seen by between 10,000 and 15,000 people in just four days, between March 16 and 20, 2024. Another ad, which had over 60,000 views, features a man of color lying in a hammock. Overlaid text reads, “AfD reveals: 686,000 illegal foreigners live at our expense!”

    Ekō was also able to identify at least three ads that appear to have used generative AI to manipulate images, though only one was run after Meta put its manipulated media policy into place. One shows a white woman with visible injuries, with accompanying text saying “the connection between migration and crime has been denied for years.”

    “Meta, and indeed other companies, have very limited ability to detect third party tools that generate AI imagery,” says Vicky Wyatt, senior campaign director at Ekō. “When extremist parties use those tools with their ads, they can create incredibly emotive imagery that can really move people. So it’s incredibly worrying.”

    In its submission to the European Commission’s consultation on election guidelines, obtained by a freedom of information request made by Ekō, Meta says “it is not yet possible for providers to identify all AI-generated content, particularly when actors take steps to seek to avoid detection, including by removing invisible markers.”

    Meta’s own policies prohibit ads that “claim people are threats to the safety, health, or survival of others based on their personal characteristics” and ads that “include generalizations that state inferiority, other statements of inferiority, expressions of contempt, expressions of dismissal, expressions of disgust, or cursing based on immigration status.”

    “We do not allow hate speech on our platforms and have Community Standards that apply to all content – including ads,” says Meta spokesperson Daniel Roberts. “Our ads review process has several layers of analysis and detection, both before and after an ad goes live, and this system is one of many we have in place to protect European elections.” Roberts told WIRED the company plans to review the ads flagged by Ekō but didn’t respond to questions about whether the German court’s designation of the AfD as potentially extremist would invite further scrutiny from Meta.

    Targeted ads, says Wyatt, can be powerful because extremist groups can more effectively target people that might sympathize with their views and “use Meta’s ads library to reach them.” Wyatt also says this allows the group to test which messages are more likely to resonate with voters.

  • A Far-Right Indian News Site Posts Racist Conspiracies. US Tech Companies Keep Platforming It

    “The goal is to amplify this disinformation, and you have BJP leaders sharing this, so people think it’s authentic,” says Naik. “In the long term, this kind of builds the case against a critic, a journalist, that this person is bad, because there is reporting against them.”

    When WIRED contacted OpIndia for comment, Sharma responded to our emailed questions by posting her responses on X.

    When asked about hate speech and disinformation on her site, Sharma wrote: “Our critics are mostly Islamists, Jihadis, Terrorists, Leftists and their sympathizers—like yourself. We don’t particularly care about any of them.” She then added that “Islamophobia does not exist” and pointed to an OpIndia article that outlines her position. Sharma added that it was “none of your concern” when asked if OpIndia was funded by the BJP. Sharma’s post also tagged one of the authors of this story, who then faced a torrent of abuse from Sharma’s followers.

    For years, activists and researchers have tried to highlight the problematic content published by OpIndia. A 2020 campaign from UK-based advocacy group Stop Funding Hate led to a number of advertisers removing their ads from the site. Google, however, says the content published on the site does not appear to breach its own rules.

    “All sites in our network, including Opindia, must adhere to our publisher policies, which explicitly prohibit ads from appearing alongside content promoting hate speech, violence, or demonstrably false claims that could undermine trust or participation in an election,” Google spokesperson Michael Aciman says. “Publishers are also subject to regular reviews, and we actively block or remove ads from any violating content.”

    Despite this, users can find ads for Temu or the Palm Beach Post next to many OpIndia articles promoting conspiracies and Islamophobia, placed with the help of ad-exchange platforms like Google’s Ad Manager, which is the market leader.

    Facebook, meanwhile, says Wiley, is more of a “walled garden.” Once a publisher meets the company’s criteria for monetization, including having more than 1,000 followers, it can earn money from ads that run on the page.

    While researchers that spoke to WIRED were unable to tell exactly how much the site has made from Google Ads and Facebook monetization, they said it’s likely that OpIndia is not solely reliant on the ad exchange for its revenue. It appears that, as with many news outlets in India, part of that funding comes in the form of more traditional advertising from a major client: the government.

    “A large section of India’s mainstream press depends on the government ads for their survival,” says Prashanth Bhat, professor of media studies at the University of Houston. “That revenue is critical for the mainstream media survival in a hypercompetitive media environment like in India. We have about 400 round-the-clock television news channels in India in different languages, and we have over 10,000 registered newspapers. For them to survive, they definitely need government patronage.”

    Sharma confirmed that OpIndia is reliant in part on ads from the government. “Literally every media house gets advertising from various political parties,” said Sharma. “In fact, a part of your salary could also be funded by such parties and/or their sympathizers. Do get down from your high horse.”

  • Meta Faces Fresh Probe Over ‘Addictive’ Effect on Kids

    The European Union has opened an investigation into Facebook and Instagram for the platforms’ potentially addictive effects on children, echoing two similar probes opened into TikTok earlier this year.

    Meta-owned platforms will be investigated for their addictive and “rabbit hole” effects, and for whether young users are being fed too much content about depression or unrealistic body images. Investigators will also probe whether underage children—below 13 years old—are being effectively blocked from using the services.

    “We are not convinced that Meta has done enough to comply with the DSA [Digital Services Act] obligations—to mitigate the risks of negative effects to the physical and mental health of young Europeans on its platforms Facebook and Instagram,” Thierry Breton, the EU’s internal markets commissioner who is leading the investigations, said on X.

    “We want young people to have safe, age-appropriate experiences online,” said Meta spokesperson Kirstin MacLeod, adding the company has developed more than 50 tools and policies designed to protect young people. “This is a challenge the whole industry is facing, and we look forward to sharing details of our work with the European Commission.”

    The investigations into Meta and TikTok under the bloc’s new Digital Services Act rules were separate, a Commission spokesperson said, adding that similarities between the cases simply reflected resemblances in how the platforms work. “There are some competitive effects in the markets where some platforms copy other platforms’ features,” they said.

    The effects of social media on children have sparked intense debate in recent months, following the publication of the book The Anxious Generation by Jonathan Haidt. The NYU social psychologist argues that the prevalence of social media use among young people is rewiring children’s brains and making them more anxious. In October, a coalition of US states sued Meta, alleging the company’s products are harmful to children’s mental health.

    The Digital Services Act is an expansive rulebook that aims to protect Europeans’ human rights online and took effect for the largest platforms in August last year. So far, the EU has investigations open into six platforms for different reasons: AliExpress, Facebook, Instagram, TikTok, TikTok Lite, and X. Under the Digital Services Act, platforms can be fined up to 6 percent of their global revenue.

    After the EU launched an investigation into a points-for-views reward system on TikTok Lite—a version of the app which uses less data—the company said it would suspend the incentive following concerns about its impact on children.

    “Our children are not guinea pigs for social media,” Breton said at the time.

  • These Dangerous Scammers Don’t Even Bother to Hide Their Crimes

    In a series of posts in one Telegram channel, highlighted by Warner, who is also involved in Intelligence for Good, one cybercriminal can be seen walking others through how to run a sextortion scam. They say they tricked people into sharing nude images—posting screenshots of the conversation—and explained ways other people can replicate it. “Hey I am posting your naked pictures on social media and Facebook,” says a sample message cybercriminals could use. “Am not just posting it am sending copies of it to your area,” the message says, before demanding $700.

    While scripts like these are shared across social media channels, WIRED found at least 80 on the document-sharing service Scribd. The company removed them after WIRED got in touch, with a spokesperson saying there are limits on what people can upload and that the company has automated and manual reviews to remove content. “We’re actively building out new capabilities to broaden the scope of content moderation coverage to include a wider range of concerning text and image violations,” the spokesperson says. Some of the scripts had been online since 2020, and on pages where they were removed, a “reading suggestions” section recommended other scam scripts.

    Raffile says the Yahoo Boys have been able to “thrive” online “due to lack of moderation around all the illicit material” that they’re sharing. “They’re acting with impunity because they feel they will never get caught,” Raffile says.

    Beyond the messaging platforms, the Yahoo Boys have a presence on TikTok and YouTube. “We design our app to be inhospitable to those who seek to exploit our community and we’ve removed this content for violating our policies,” a TikTok spokesperson says.

    “Our policies prohibit spam, scams, or other deceptive practices that take advantage of the YouTube community,” a YouTube spokesperson says. “We also prohibit videos that encourage illegal or dangerous activities. As such, we have terminated the flagged channels for violating our policies and our terms of service.” They add that the company removed accounts for breaching policies about harmful content, spam, and generally violating its terms of service.

    The accounts posted tutorials about how to scam people, linked to groups on messaging apps, and promoted technology for fake video calls. On TikTok, multiple accounts include carousels of images that the scammers can use in their efforts to create believable personas. Some of these include posts of elderly women for scammers who are in “need of grandma pictures for proof” of their fake identities, and others for scammers who “need kids pics” for their victims.

    As well as being a threat to thousands of people around the world, the Yahoo Boys can be quick to adopt new technologies. David Maimon, a professor at Georgia State University and the head of fraud insights at the identity-verification firm SentiLink, has monitored Yahoo Boys for years and says their techniques have evolved alongside new technologies.

    “To build rapport with victims, the fraudsters first used text messages, then started sending recorded audio messages, to now using deepfake tools to communicate with victims live,” Maimon says. “On some of the markets we now also see the use of cloned voices. It is now accompanied with sending physical items to victims such as presents, food deliveries, and flowers.” Within some groups, they use deepfake video calls and “nudification” tools that turn photos of clothed people into nude images.

    While the Yahoo Boys have been active for years, all the experts spoken to for this piece say they should be treated more seriously by social media companies and law enforcement. “It’s time that we start looking at Yahoo Boys as a dangerous organization, transnational organized crime, and start giving it some of those labels,” Raffile says.

  • How Far-Right, Extremist Militias Organize On Facebook

    In the aftermath of the Capitol riot, far-right militia groups are using Facebook to organize—and they’re not worried about getting banned by Meta.

  • Extremist Militias Are Coordinating in More Than 100 Facebook Groups

    “Join Your Local Militia or III% Patriot Group,” a post urged the more than 650 members of a Facebook group called the Free American Army. Accompanied by the logo for the Three Percenters militia network and an image of a man in tactical gear holding a long rifle, the post continues: “Now more than ever. Support the American militia page.”

    Other content and messaging in the group is similar. And despite the fact that Facebook bans paramilitary organizing and deemed the Three Percenters an “armed militia group” on its 2021 Dangerous Individuals and Organizations List, the post and group remained up until WIRED contacted Meta for comment about its existence.

    Free American Army is just one of around 200 similar Facebook groups and profiles, most of which are still live, that anti-government and far-right extremists are using to coordinate local militia activity around the country.

    After lying low for several years in the aftermath of the US Capitol riot on January 6, militia extremists have been quietly reorganizing, ramping up recruitment and rhetoric on Facebook—with apparently little concern that Meta will enforce its ban against them, according to new research by the Tech Transparency Project, shared exclusively with WIRED.

    Individuals across the US with long-standing ties to militia groups are creating networks of Facebook pages, urging others to recruit “active patriots” and attend meetups, and openly associating themselves with known militia-related sub-ideologies like that of the anti-government Three Percenter movement. They’re also advertising combat training and telling their followers to be “prepared” for whatever lies ahead. These groups are trying to facilitate local organizing, state by state and county by county. Their goals are vague, but many of their posts convey a general sense of urgency about the need to prepare for “war” or to “stand up” against many supposed enemies, including drag queens, immigrants, pro-Palestine college students, communists—and the US government.

    These groups are also rebuilding at a moment when anti-government rhetoric has continued to surge in mainstream political discourse ahead of a contentious, high-stakes presidential election. And by doing all of this on Facebook, they’re hoping to reach a broader pool of prospective recruits than they would on a comparatively fringe platform like Telegram.

    “Many of these groups are no longer fractured sets of localized militia but coalitions formed between multiple militia groups, many with Three Percenters at the helm,” said Katie Paul, director of the Tech Transparency Project. “Facebook remains the largest gathering place for extremists and militia movements to cast a wide net and funnel users to more private chats, including on the platform, where they can plan and coordinate with impunity.”

    Paul told WIRED that she’s been monitoring “hundreds” of militia-related groups and profiles since 2021 and has observed them growing “increasingly emboldened with more serious and coordinated organizing” in the past year.

    One particularly influential account in this Facebook ecosystem belongs to Rodney Huffman, leader of the Confederate States III%, an Arkansas-based militia that, in 2020, sought to rally extremists at Georgia’s Stone Mountain, a popular site for Confederate and white supremacist groups. Huffman has created a network of Facebook groups and spreads the word about local meetups. His partner, Dabbi Demere, is equally active and on a mission to recruit “active” patriots into the groups. Huffman and Demere are also key players in the pro-Confederate movement known as “Heritage, not Hate.”

  • How Sidechat Fanned the Flames of University Campus Protests

    In the months following Hamas’ October 7 attack on Israel, conversation on college campuses has been defined by a palpable tension. Increased antisemitic and anti-Muslim rhetoric embroiled numerous universities in free speech debates. In late April, as the Israel-Hamas War moved into its seventh month, students at Columbia University and other institutions across the US began protesting, calling for a ceasefire. Amidst all of this, one platform has served as a locus: Sidechat, a social media app that’s become both a place for dialogue about the protests and a breeding ground for hate speech.

    Over the last few weeks, as demonstrations erupted at Columbia, NYU, Yale, Princeton, the University of Texas, and elsewhere, students took to the app to share memes and express dismay at their administrators’ responses.

    On April 22, following a weekend of arrests at Columbia, Colin Roedl, the editorial page editor at the student-run Columbia Daily Spectator, told Slate that students were seeing “calls for solidarity” on the app. The following day, some 3,000 Columbia staff, students, and community members signed a letter to university president Minouche Shafik, the board of trustees, and the school’s deans supporting “campus safety and academic freedom.” It included a link to a folder of Sidechat screenshots showing people asking how to join the encampments on campus and discussions of Zionism.

    On Tuesday, the New York Police Department arrested hundreds of protestors at Columbia and City College of New York.

    Prior to the protests, administrators at other colleges, like Harvard and Brown, had sought to increase moderation on Sidechat, citing increased reports of harassment and hate speech from students using the platform. Rhetoric on the app had become “dehumanizing, racist, homogenizing, (and) hateful,” says Aboud Ashhab, a Palestinian student at Brown. Andrew Rovinsky, a Jewish student at the university, calls it “a cesspool.”

    Because the app’s defining feature is student discourse done anonymously (users don’t post with their real names), toxic messages and demeaning language flow freely. “What you see on Sidechat is a bunch of people actually engaging in the most vile rhetoric you’ve seen, because it’s anonymous,” Rovinsky says.

    Launched in 2022 as a mechanism for college students to whisper about campus happenings, Sidechat quickly spread across US universities. Like the early version of Facebook, the app requires a university email address to log in, and while it initially served as a hub for gossip and collective complaining, university administrators began to take notice of more heated discussion on the platform in recent months and implored Sidechat to strengthen its content moderation.

    While the app’s user guidelines state that the platform does not allow content that “perpetuates the oppression of marginalized communities by promoting discrimination against (or hatred toward) certain groups of people,” both Sidechat and its predecessor Yik Yak have come under fire for fostering an online environment conducive to hate speech.

    In fact, before Sidechat’s acquisition of Yik Yak in 2023, Yik Yak took a four-year hiatus after a bombardment of complaints regarding racism, discrimination, and threats of violence circulating on the app. Hateful comments in the months following the October 7 attack suggest Sidechat is not so different from its forerunner.

  • A Lawsuit Argues Meta Is Required by Law to Let You Control Your Own Feed

    A lawsuit filed Wednesday against Meta argues that US law requires the company to let people use unofficial add-ons to gain more control over their social feeds.

    It’s the latest in a series of disputes in which the company has tussled with researchers and developers over tools that give users extra privacy options or that collect research data. It could clear the way for researchers to release add-ons that aid research into how the algorithms on social platforms affect their users, and it could give people more control over the algorithms that shape their lives.

    The suit was filed by the Knight First Amendment Institute at Columbia University on behalf of researcher Ethan Zuckerman, an associate professor at the University of Massachusetts Amherst. It attempts to take a federal law that has generally shielded social networks and use it as a tool to force transparency.

    Section 230 of the Communications Decency Act is best known for allowing social media companies to evade legal liability for content on their platforms. Zuckerman’s suit argues that one of its subsections gives users the right to control how they access the internet, and the tools they use to do so.

    “Section 230(c)(2)(B) is quite explicit about libraries, parents, and others having the ability to control obscene or other unwanted content on the internet,” says Zuckerman. “I actually think that anticipates having control over a social network like Facebook, having this ability to sort of say, ‘We want to be able to opt out of the algorithm.’”

    Zuckerman’s suit is aimed at preventing Meta from blocking Unfollow Everything 2.0, a new browser extension for Facebook that he is working on. It would allow users to easily “unfollow” friends, groups, and pages on the service, meaning that updates from them no longer appear in the user’s newsfeed.

    Zuckerman says that this would provide users the power to tune or effectively disable Facebook’s engagement-driven feed. Users can technically do this without the tool, but only by unfollowing each friend, group, and page individually.

    There’s good reason to think Meta might make changes to Facebook to block Zuckerman’s tool after it is released. He says he won’t launch it without a ruling on his suit. In 2020, the company argued that the browser Friendly, which had let users search and reorder their Facebook news feeds as well as block ads and trackers, violated its terms of service and the Computer Fraud and Abuse Act. In 2021, Meta permanently banned Louis Barclay, a British developer who had created a tool called Unfollow Everything, which Zuckerman’s add-on is named after.

    “I still remember the feeling of unfollowing everything for the first time. It was near-miraculous. I had lost nothing, since I could still see my favorite friends and groups by going to them directly,” Barclay wrote for Slate at the time. “But I had gained a staggering amount of control. I was no longer tempted to scroll down an infinite feed of content. The time I spent on Facebook decreased dramatically.”

  • The Latest Online Culture War Is Humans vs. Algorithms

    Brands and bots are barred from Spread, and, like PI.FYI, the platform doesn’t support ads. Instead of working to maximize time-on-site, Rogers’ primary metrics for success will be indicators of “meaningful” human engagement, such as when someone clicks on another user’s recommendation and later takes an action like signing up for a newsletter or subscription. He hopes this will align companies whose content is shared on Spread with the platform’s users. “I think there’s a nostalgia for what the original social meant to achieve,” Rogers says.

    So you joined a social network without ranking algorithms—is everything good now? Jonathan Stray, a senior scientist at the UC Berkeley Center for Human-Compatible AI, has doubts. “There is now a bunch of research showing that chronological is not necessarily better,” he says, adding that simpler feeds can promote recency bias and enable spam.

    Stray doesn’t think social harm is an inevitable outcome of complex algorithmic curation. But he agrees with Rogers that the tech industry’s practice of trying to maximize engagement doesn’t necessarily select for socially desirable results.

    Stray suspects the solution to the problem of social media algorithms may in fact be … more algorithms. “The fundamental problem is you’ve got way too much information for anybody to consume, so you have to reduce it somehow,” he says.

    In January, Stray launched the Prosocial Ranking Challenge, a competition with a $60,000 prize fund aiming to spur development of feed-ranking algorithms that prioritize socially desirable outcomes, based on measures of users’ well-being and how informative a feed is. From June through October, five winning algorithms will be tested on Facebook, X, and Reddit using a browser extension.

    Until a viable replacement takes off, escaping engagement-seeking algorithms will generally mean going chronological. There’s evidence people are seeking that out beyond niche platforms like PI.FYI and Spread.

    Group messaging, for example, is commonly used to supplement artificially curated social media feeds. Private chats—threaded by the logic of the clock—can provide a more intimate, less chaotic space to share and discuss gleanings from the algorithmic realm: the trading of jokes, memes, links to videos and articles, and screenshots of social posts.

    Disdain for the algorithm could help explain the growing US popularity of WhatsApp, which has long been ubiquitous elsewhere. Meta’s messaging app saw a 9 percent increase in daily users in the US last year, according to data from Apptopia reported by The Wrap. Even inside today’s dominant social apps, activity is shifting from public feeds toward direct messaging, where chronology rules, according to Business Insider.

    Group chats might be ad-free and relatively controlled social environments, but they come with their own biases. “If you look at sociology, we’ve seen a lot of research that shows that people naturally seek out things that don’t cause cognitive dissonance,” says Stoldt of Drake University.

    While providing a more organic means of compilation, group messaging can still produce echo chambers and other pitfalls associated with complex algorithms. And when the content in your group chat comes from each member’s respective highly personalized algorithmic feed, things can get even more complicated. Despite the flight to algorithm-free spaces, the fight for a perfect information feed is far from over.

  • Ads for Explicit ‘AI Girlfriends’ Are Swarming Facebook and Instagram

    However, 3,000 ads for “AI girlfriends” and 1,100 containing “NSFW” were live on April 23, according to Meta’s ad library.

    WIRED’s initial review found that Hush, an AI girlfriend app downloaded more than 100,000 times from Google’s Play store, had published 1,700 ads across Meta platforms, several of which promise “NSFW” chats and “secret photos” from a range of lifelike female characters, anime women, and cartoon animals.

    One shows an AI woman locked into medieval prison stocks by the neck and wrists, pledging, “Help me, I will do anything for you.” Another ad, targeted using Meta’s technology at men aged 18 to 65, features an anime character and the text “Want to see more of NSFW pics?”

    Several of the 980 Meta ads WIRED found for “personalized AI companion” app Rosytalk promise around-the-clock chats with very-young-looking AI-generated women. They used tags including “#barelylegal,” “#goodgirls,” and “teens.” Rosytalk also ran 990 ads under at least nine brand names on Meta platforms, including Rosygirl, Rosy Role Play Chat, and AI Chat GPT.

    At least 13 other apps for AI “girlfriends” have promoted similar services in Meta ads, including “nudifying” features that allow a user to “undress” their AI girlfriend and download the images. A handful of the girlfriend ads had already been removed for violating Meta’s advertising standards. “Undressing” apps have also been marketed on mainstream social platforms, according to social media research firm Graphika, and on LinkedIn, the Daily Mail recently reported.

    Some users of so-called AI companions say they can help combat loneliness, and others report that the chatbots feel like a real partner. Not all of the ads found by WIRED promote titillation alone; some also suggest that an explicit AI chatbot could provide emotional support. “Talk to anyone! You’re not alone!” reads one of Hush’s ads on Meta platforms.

    Carolina Are, an innovation fellow researching social media censorship at the Center for Digital Citizens at Northumbria University in the UK, says human sex workers feed the same needs and desires as racy AI girlfriend apps and also cater to lonely and disabled people. But Meta makes it extremely difficult for them to advertise on its platforms, she says.

    “When people are trying to work through and profit off their own body, they are forbidden,” says Are, who has helped sex workers reactivate lost and unfairly suspended accounts on Meta platforms. “While AI companies mostly powered by bros that exploit images already out there are able to do that.”

    Are says the sexually suggestive AI girlfriends remind her of the unsophisticated and generic early days of internet porn. “Sex workers engage with their customers, subscribers, and followers in a way that is more personalized,” she says. “This is a lot of work and emotional labor beyond the sharing of nude images.”

    Limited information is available about how the AI apps are built or how their underlying text- and image-generation algorithms were trained. One used the name Sora, apparently to suggest a connection to OpenAI’s video generator of that name, which has not been publicly released.
