Tag: facebook

  • Celebrity Deepfake Porn Cases Will Be Investigated by Meta Oversight Board

As AI tools have become increasingly sophisticated and accessible, so too has one of their worst applications: non-consensual deepfake pornography. While much of this content is hosted on dedicated sites, more and more of it is finding its way onto social platforms. Today, the Meta Oversight Board announced that it was taking on cases that could force the company to reckon with how it deals with deepfake porn.

    The board, which is an independent body that can issue both binding decisions and recommendations to Meta, will focus on two deepfake porn cases, both regarding celebrities who had their images altered to create explicit content. In one case about an unnamed American celebrity, deepfake porn depicting the celebrity was removed from Facebook after it had already been flagged elsewhere on the platform. The post was also added to Meta’s Media Matching Service Bank, an automated system that finds and removes images that have already been flagged as violating Meta’s policies, to keep it off the platform.

    In the other case, a deepfake image of an unnamed Indian celebrity remained up on Instagram, even after users reported it for violating Meta’s policies on pornography. The deepfake of the Indian celebrity was removed once the board took up the case, according to the announcement.

    In both cases, the images were removed for violating Meta’s policies on bullying and harassment, and did not fall under Meta’s policies on porn. Meta, however, prohibits “content that depicts, threatens or promotes sexual violence, sexual assault or sexual exploitation” and does not allow porn or sexually explicit ads on its platforms. In a blog post released in tandem with the announcement of the cases, Meta said it removed the posts for violating the “derogatory sexualized photoshops or drawings” portion of its bullying and harassment policy, and that it also “determined that it violated [Meta’s] adult nudity and sexual activity policy.”

    The board hopes to use these cases to examine Meta’s policies and systems to detect and remove nonconsensual deepfake pornography, according to Julie Owono, an Oversight Board member. “I can tentatively already say that the main problem is probably detection,” she says. “Detection is not as perfect or at least is not as efficient as we would wish.”

Meta has also long faced criticism for its approach to moderating content outside the US and Western Europe. In announcing these cases, the board voiced concerns that the American celebrity and the Indian celebrity received different treatment when their deepfakes appeared on its platforms.

    “We know that Meta is quicker and more effective at moderating content in some markets and languages than others. By taking one case from the United States and one from India, we want to see if Meta is protecting all women globally in a fair way,” says Oversight Board cochair Helle Thorning-Schmidt. “It’s critical that this matter is addressed, and the board looks forward to exploring whether Meta’s policies and enforcement practices are effective at addressing this problem.”


  • Welcome to the Age of Technofeudalism

The tech giants have overthrown capitalism. That’s the argument of former Greek finance minister Yanis Varoufakis, who became famous trying to defend debt-laden Greece from its German creditors. Varoufakis has never quite regained the notoriety of 2015, but he has remained a prominent left-wing voice. After a failed campaign for a seat in the European Parliament in 2019, he plans to run again this June. This time, his adversary isn’t Berlin or the banks. It’s the tech companies he accuses of warping the economy while turning people against one another.

Technofeudalism: What Killed Capitalism book cover, courtesy of Penguin Random House.

    Varoufakis is also a prolific author; his 17th book, written as a letter to his techno-curious father, chronicles the evolution of capitalism from the 1960s advertising boom, through Wall Street in the 1980s, to the 2008 financial crisis and the pandemic. In its most compelling stretches, Technofeudalism argues that Apple, Facebook, and Amazon have changed the economy so much that it now resembles Europe’s medieval feudal system. The tech giants are the lords, while everyone else is a peasant, working their land for not much in return.

To Varoufakis, every time you post on X, formerly Twitter, you’re essentially toiling on Elon Musk’s estate like a medieval serf. Musk doesn’t pay you. But your free labor pays him, in a sense, by increasing the value of his company. On X, the more active users there are, the more people can be shown advertising or sold subscriptions. On Google Maps, he argues, users improve the product—alerting the system to traffic jams on their route.

    The feudal comparison isn’t novel. But Technofeudalism attempts to introduce the idea to a wider audience. Its US release, launched the month before regulators in the US and European Union simultaneously initiated antitrust actions against Apple, also had impeccable timing.

    Over Zoom, I spoke to Varoufakis, from his home near Athens, about how the tech giants have changed the economy—and why we should care about it.

    This interview has been edited for length and clarity.

    WIRED: That word, technofeudalism, what does it mean? How is the feudal system relevant here?

    Yanis Varoufakis: Profit drives capitalism, rent drove feudalism. Now we have moved [from one system to the other] because of this new form of super-duper, all-singing, all-dancing capital: cloud capital, algorithmic capital. If I’m right, that is creating new digital fiefdoms like Amazon.com, like Airbnb, where the main mode of wealth extraction comes in the form not of profit but of rent.

    Take the Apple Store. You are producing an app, Apple can withhold 30 percent of your profits [through a commission fee]. That’s a rent. That’s like a ground rent. It’s a bit like the Apple Store is a fiefdom. It’s a cloud fiefdom, and Apple extracts a rent exactly as in feudalism. So my argument is not that we went back from capitalism to feudalism. My argument is that we have progressed forward to a new system, which has many of the characteristics of feudalism, but it is one step ahead of capitalism. To signal that, I added the word techno.


  • US Navy Veteran Who Feds Say Rammed FBI Headquarters Had QAnon-Linked Online Presence

    A former Navy submarine technician was arrested after law enforcement says he drove an SUV into the FBI headquarters near Atlanta on Monday afternoon. It is still unclear why the suspect, Ervin Lee Bolling, attempted to force entry to the headquarters, but research by Advance Democracy, a non-partisan, non-profit organization that conducts public-interest research, and shared exclusively with WIRED, has found that accounts believed to be associated with Bolling shared numerous conspiracy theories on social media platforms, including on X (formerly Twitter) and Facebook.

Just after noon on Monday, Bolling rammed his burnt-orange SUV with South Carolina license plates into the final barrier at FBI Atlanta’s headquarters, Matthew Upshaw, an FBI agent assigned to the Atlanta office, wrote in a sworn affidavit on Tuesday. Upshaw added that after Bolling crashed the SUV, he left the car and tried to follow an FBI employee into the secure parking lot. When agents instructed Bolling to sit on a curb, he refused and tried again to enter the premises. The affidavit also stated that Bolling resisted arrest when agents subsequently tried to detain him.

    Bolling was charged on Tuesday with destruction of government property, according to court records reviewed by WIRED.

Advance Democracy researchers identified an account on X with the handle @alohatiger11, a reference to the Clemson University mascot, which Bolling has expressed support for on his public Facebook page. The handle is also similar to usernames on other platforms, such as Telegram and Cash App, that resemble a Facebook page bearing Bolling’s name. The profile picture on the X account also resembles a picture of the same man shown on Bolling’s public Facebook profile. The X account is currently set to private, but dozens of its old posts are still publicly viewable through the Internet Archive.

In December 2020, a post on X about a federal government stimulus bill stated, “Wonder what it will take for people to wake up.” The account associated with Bolling responded, “I’m awake. Just looking for a good militia to join.”

Around the same time, social media accounts seemingly associated with Bolling repeatedly boosted QAnon content and interacted with QAnon promoters, including posting a link to a now-deleted QAnon-associated channel on YouTube alongside the comment “Release the Kraken”—a direct reference to Sidney Powell’s failed legal efforts to overturn the 2020 election results in Georgia.

    On what’s believed to be Bolling’s Facebook account, there were various posts related to anti-vaccine memes as well.

The accounts also posted in support of former President Donald Trump. In December 2020, the X account replied “I love you” to a post from Trump falsely claiming that the election was rigged by Democrats.

    Courtney Bolling, who is identified as the suspect’s wife on Facebook, did not respond to requests for comment via phone or messages sent to her social media profiles. No legal counsel is listed on record for Bolling.

It is so far unclear how Bolling came to espouse these beliefs, but far-right groups and extremists have used social media platforms for decades to spread conspiracies and radicalize new members. In recent years there have been numerous examples of far-right groups making online claims or threats that were quickly followed by real-world violence.


  • Why Threads is suddenly popular in Taiwan

    Still, Threads’ popularity plummeted after its launch in July 2023. In Taiwan—like the rest of the world—many users left the platform after satisfying their initial curiosity. 

    But the 2024 Taiwanese presidential election gave it another chance. Wang, who studies social media in Taiwan, traced the platform’s second rise to November of last year, starting with the supporters of Taiwan’s Democratic Progressive Party (DPP), often associated with the color green. “Many (worried) pan-green supporters noticed that their complaints on politics were promoted to more readers on Threads than any other social media platforms (especially Facebook and Instagram), so more and more pan-green supporters gathered to Threads and used it as a mobilization tool,” he says.

    The election concluded in mid-January, with DPP candidate Lai Ching-te elected as Taiwan’s president. Many supporters of his party stayed on the platform. And as it became influential, other political figures also reactivated their Threads accounts and started posting regularly, trying to join the conversation. Everyday users who are less interested in politics came along too.

On almost every day of the past three months, Threads has been the most downloaded social network app in both Apple’s and Google’s app stores in Taiwan, according to Sensor Tower, an app store intelligence firm. It surpassed both Western social platforms and those popular in China. 

    What does Taiwan Threads look like?

    Wang, who has been actively posting on Threads and accumulated over 3,000 followers, observes that there are two major demographics among Taiwan’s Threads users today: the pro-green voters, and younger students who are still in middle school and high school. “In recent weeks, there is a considerable amount of discussion on how to choose colleges, majors, and even high schools,” he says.

Since Threads doesn’t have an official name in Chinese, Taiwanese users have tried to translate it in creative ways. Some stay close to the meaning and call it 串 or chuan, which means a string of beads or other objects (it could also mean a kebab skewer). Others call it 脆 or cui, which means crispy or fragile. It’s a transliteration attempt that many feel is too far-fetched, but since there’s no “th” sound in Mandarin, it’s the best alternative, and it has already caught on among users, surpassing other names. 

    What defines the content on Threads is a mix of political and lifestyle posts. On the one hand, some of the most influential accounts are Taiwanese politicians at all levels, including the presidential candidates. On the other, Threads users have embraced a type of content called 廢文—a cross between trash talk and light-stakes monologue. 

As a result, to gain a following on Threads, the best practice is to mix up the serious and the unserious. One local representative candidate became unexpectedly famous when people discovered that his son was physically attractive. Joking about how his son’s virality has eclipsed his own, the politician now calls himself “The father of the son of Phoenix Cheng” on Threads, where he has over 268,000 followers.


  • Antiabortion Disinformation Ads Ran Rampant on Facebook and Instagram

    Ads containing abortion-related misinformation are allowed to run on Facebook and Instagram in countries across Asia, Africa, and Latin America, while legitimate health care providers struggle to get theirs approved, new research has found.

The report, released today by the Center for Countering Digital Hate and MSI Reproductive Choices, an international reproductive health care provider, collected instances from across Vietnam, Nepal, Ghana, Mexico, Kenya, and Nigeria. Between 2019 and 2024, in Ghana and Mexico alone, researchers found 187 antiabortion ads on Meta’s platforms that were viewed up to 8.8 million times.

    Many of these ads were placed by foreign antiabortion groups. Americans United for Life, a US-based nonprofit whose website claims that abortion pills are “unsafe and unjust,” and Tree of Life Ministries, an evangelical church now headquartered in Israel, were both linked to the ads. Researchers also found that ads placed by groups not “originating in the country where the ad was served were viewed up to 4.2 million times.”

In the report, researchers found that some of the ads linked out to websites like that of Americans United for Life, which describes abortion as a “business” that is “unsafe” for women. The abortion pill is widely considered safe and is less likely to cause death than either penicillin or Viagra. Other ads, like one run by the Mexican group Context.co, linked to a Substack dedicated to the topic that implied there is a secret global strategy to manipulate the Mexican populace and impose abortion on the country.

    One ad identified in Mexico alleged that abortion services were “financed from abroad … to eliminate the Mexican population.” Another warned that women could suffer “severe complications” from using the abortion pill.

    Meta spokesperson Ryan Daniels told WIRED that the company allows “posts and ads promoting health care services, as well as discussion and debate around them,” but that content about reproductive health “must follow our rules,” including only allowing reproductive health advertisements to target people above the age of 18.

“This is money that Meta is taking to spread lies, conspiracy theories, and disinformation,” says Imran Ahmed, CEO of the Center for Countering Digital Hate.

In these countries, where Meta often has partnerships with local telecom companies that allow users to access its platforms for free, Facebook is a key source of information. Some of these ads also ran on Instagram. “Anybody with a cell phone can access information. People use it to find services. When we ask clients, ‘How did you hear about us?’ a lot of them will cite Facebook, because they live on Facebook. It’s where they know to search for information,” says Whitney Chinogwenya, marketing manager at MSI Reproductive Choices. So when disinformation runs rampant on the platform, the impact can be widespread.

“Good health information saves lives. By actively aiding the spread of disinformation and suppressing good information,” Ahmed says, “[Meta is] literally putting lives at risk in those countries and showing that they treat foreign lives as substantially less important to them than American lives.”


  • Meta Kills a Crucial Transparency Tool At the Worst Possible Time

    Earlier this month, Meta announced that it would be shutting down CrowdTangle, the social media monitoring and transparency tool that has allowed journalists and researchers to track the spread of mis- and disinformation. It will cease to function on August 14, 2024—just months before the US presidential election.

    Meta’s move is just the latest example of a tech company rolling back transparency and security measures as the world enters the biggest global election year in history. The company says it is replacing CrowdTangle with a new Content Library API, which will require researchers and nonprofits to apply for access to the company’s data. But the Mozilla Foundation and 140 other civil society organizations protested last week that the new offering lacks much of CrowdTangle’s functionality, asking the company to keep the original tool operating until January 2025.

    Meta spokesperson Andy Stone countered in posts on X that the groups’ claims “are just wrong,” saying the new Content Library will contain “more comprehensive data than CrowdTangle” and be made available to nonprofits, academics, and election integrity experts. But Meta did not respond to questions about why commercial newsrooms, like WIRED, are to be excluded.

    Brandon Silverman, cofounder and former CEO of CrowdTangle, who continued to work on the tool after Facebook acquired it in 2016, says it’s time to force platforms to open up their data to outsiders. The conversation has been edited for length and clarity.

    Vittoria Elliott: CrowdTangle has been incredibly important for journalists and researchers trying to hold tech companies accountable for the spread of mis- and disinformation. But it belongs to Meta. Could you talk a little bit about that tension?

Brandon Silverman: I think there’s a bit too much of a public narrative that frustration with [New York Times columnist] Kevin Roose’s tweets is why they turned their back on CrowdTangle. I think the truth is that Facebook is moving out of news entirely.

When CrowdTangle joined Facebook, they were all in on news and bought us to help the news industry. Fast-forward three years, and they were like, “We’re done with that project.” There is a lot of responsibility that comes with hosting news on a platform, especially if you exist in essentially every community on Earth. I think that they made a calculus at some point that it just wasn’t worth what it would cost to do responsibly.

    My takeaway when I left was that if you want to do this work in a way that really serves civil society in the way we need it to, you can’t do it inside the companies—and Meta was doing more than almost anyone else. It’s abundantly clear that we need our regulators and elected officials to decide what we, as a society, want and expect from these platforms and to make those [demands] legally required.

    What would that look like?

    I think we’re at the very beginning of an entire ecosystem of better tools doing this work. The European Union’s sweeping Digital Services Act has a bunch of transparency requirements around data sharing. One of those they sometimes call the CrowdTangle provision—it requires qualifying platforms to provide real-time access to public data.

    Over a dozen platforms now have new programs that allow outside researchers to get access to real-time public content. Alibaba, TikTok, YouTube—which has been a black box forever—are now spinning up these programs. It’s been very quiet, because they don’t necessarily want a ton of people using them. In some cases companies add these programs to their terms of service but don’t make any public announcement.


  • Today’s Supreme Court Hearing Addresses a Far-Right Boogeyman

Today, the Supreme Court will hear a case that will determine whether the government can communicate with social media companies to flag misleading or harmful content—or talk to them at all. And much of the case revolves around Covid-19 conspiracy theories.

In Murthy v. Missouri, the attorneys general of Louisiana and Missouri, as well as several other individual plaintiffs, argue that government agencies, including the CDC and CISA, have coerced social media platforms into censoring speech related to Covid-19, election misinformation, and the Hunter Biden laptop conspiracy, among other topics.

    In a statement released in May 2022, when the case was first filed, Missouri Attorney General Eric Schmitt alleged that members of the Biden administration “colluded with social media companies like Meta, Twitter, and Youtube to remove truthful information related to the lab-leak theory, the efficacy of masks, election integrity, and more.” (The lab-leak theory has largely been debunked, and most evidence points to Covid-19 originating from animals.)

    While the government shouldn’t necessarily be putting its thumb on the scale of free speech, there are areas where government agencies do have access to important information that can—and should—help platforms make moderation decisions, says David Greene, civil liberties director at the Electronic Frontier Foundation (EFF), a nonprofit digital rights organization. The foundation filed an amicus brief on the case. “The CDC should be able to inform platforms, when it thinks there is really hazardous public health information placed on those platforms,” he says. “The question they need to be thinking about is, how do we inform without coercing them?”

    At the heart of the Murthy v. Missouri case is that question of coercion versus communication, or whether any communication from the government at all is a form of coercion, or “jawboning.” The outcome of the case could radically impact how platforms moderate their content, and what kind of input or information they can use to do so—which could also have a big impact on the proliferation of conspiracy theories online.

    In July 2023, a Louisiana federal judge consolidated the initial Missouri v. Biden case together with another case, Robert F. Kennedy Jr., Children’s Health Defense, et al v. Biden, to form the Murthy v. Missouri case. The judge also issued an injunction that barred the government from communicating with platforms. The injunction was later modified by the 5th Circuit Court of Appeals, which carved out some exceptions, particularly when it came to third parties such as the Stanford Internet Observatory, a research lab at Stanford that studies the internet and social platforms, flagging content to platforms.

Children’s Health Defense (CHD), an anti-vaccine nonprofit, was formerly chaired by Robert F. Kennedy Jr., now a presidential candidate. The group was banned from Meta’s platforms in 2022 for spreading health misinformation, like the claim that the tetanus vaccine causes infertility (it does not), in violation of the company’s policies. A spokesperson for CHD referred WIRED to a press release with a statement from the organization’s president, Mary Holland: “As CHD’s chairman on leave, Robert F. Kennedy Jr. points out, our Founding Fathers put the right to free expression in the First Amendment because all the other rights depend on it. In his words, ‘A government that has the power to silence its critics has license for any kind of atrocity.’”


  • Let’s Not Make the Same Mistakes with AI That We Made With Social Media

    The reason for these outcomes is structural. The network effects of tech platforms push a few firms to become dominant, and lock-in ensures their continued dominance. The incentives in the tech sector are so spectacularly, blindingly powerful that they have enabled six megacorporations (Amazon, Apple, Google, Facebook parent Meta, Microsoft, and Nvidia) to command a trillion dollars each of market value—or more. These firms use their wealth to block any meaningful legislation that would curtail their power. And they sometimes collude with each other to grow yet fatter.

    This cycle is clearly starting to repeat itself in AI. Look no further than the industry poster child OpenAI, whose leading offering, ChatGPT, continues to set marks for uptake and usage. Within a year of the product’s launch, OpenAI’s valuation had skyrocketed to about $90 billion.

    OpenAI once seemed like an “open” alternative to the megacorps—a common carrier for AI services with a socially oriented nonprofit mission. But the Sam Altman firing-and-rehiring debacle at the end of 2023, and Microsoft’s central role in restoring Altman to the CEO seat, simply illustrated how venture funding from the familiar ranks of the tech elite pervades and controls corporate AI. In January 2024, OpenAI took a big step toward monetization of this user base by introducing its GPT Store, wherein one OpenAI customer can charge another for the use of its custom versions of OpenAI software; OpenAI, of course, collects revenue from both parties. This sets in motion the very cycle Doctorow warns about.

    In the middle of this spiral of exploitation, little or no regard is paid to externalities visited upon the greater public—people who aren’t even using the platforms. Even after society has wrestled with their ill effects for years, the monopolistic social networks have virtually no incentive to control their products’ environmental impact, tendency to spread misinformation, or pernicious effects on mental health. And the government has applied virtually no regulation toward those ends.

    Likewise, few or no guardrails are in place to limit the potential negative impact of AI. Facial recognition software that amounts to racial profiling, simulated public opinions supercharged by chatbots, fake videos in political ads—all of it persists in a legal gray area. Even clear violators of campaign advertising law might, some think, be let off the hook if they simply do it with AI. 

    Mitigating the risks

    The risks that AI poses to society are strikingly familiar, but there is one big difference: it’s not too late. This time, we know it’s all coming. Fresh off our experience with the harms wrought by social media, we have all the warning we should need to avoid the same mistakes.

    The biggest mistake we made with social media was leaving it as an unregulated space. Even now—after all the studies and revelations of social media’s negative effects on kids and mental health, after Cambridge Analytica, after the exposure of Russian intervention in our politics, after everything else—social media in the US remains largely an unregulated “weapon of mass destruction.” Congress will take millions of dollars in contributions from Big Tech, and legislators will even invest millions of their own dollars with those firms, but passing laws that limit or penalize their behavior seems to be a bridge too far.

    We can’t afford to do the same thing with AI, because the stakes are even higher. The harm social media can do stems from how it affects our communication. AI will affect us in the same ways and many more besides. If Big Tech’s trajectory is any signal, AI tools will increasingly be involved in how we learn and how we express our thoughts. But these tools will also influence how we schedule our daily activities, how we design products, how we write laws, and even how we diagnose diseases. The expansive role of these technologies in our daily lives gives for-profit corporations opportunities to exert control over more aspects of society, and that exposes us to the risks arising from their incentives and decisions.


  • The First Rule of the Extreme Dishwasher Loading Facebook Group Is …

    “On Extreme Pedantry, there was some discussion about the proper way to load a dishwasher, and I said, ‘Right, I need to create a similar group called Extreme Dishwasher Loading.’ I just went and created the group, and a load of people joined it at that point in 2016,” Hegedus, who lives in Essex, UK, tells WIRED.

Hegedus, known to the group as Dear Loader, says that in its first years the group had just a few thousand members. Then, as with many such Facebook groups, the Covid-19 pandemic in 2020 caused the number of people joining and sharing images of their dishwasher-loading skills—or lack thereof—to explode. The group added tens of thousands of new members.

    While dishwasher loading techniques are taken extremely seriously, the group’s tone is overwhelmingly welcoming rather than mocking, and new members are encouraged to share pictures of their dishwashers as soon as they join.

The group’s dozen or so administrators make sure that arguments never escalate or get personal, and anyone who becomes aggressive or abusive is instantly blocked. The result is a corner of the internet that is simultaneously friendly and gently mocking.

    The majority of members are based in the UK, but there is also a large US contingent, though Hegedus says there are no regional quirks when it comes to dishwasher loading.

    Hegedus adds that the reasons people join are very different, but one member who spoke to WIRED says it was the sense of community that pushed her to become part of the group.

    “I joined because I was thrilled to find other people as enthusiastic as me with the dishwasher,” Laura Marsh from Somerset, UK, tells WIRED. “I hate—really hate—washing by hand, and my other half never stacks it right. How much can you fit in a dishwasher and still have everything come out clean? There’s definitely an art to it.”

    Despite finding her people, Marsh also ran afoul of the rules set out by the admins when she posted a picture in response to a question about the strangest thing she’s put in her dishwasher. “My answer was ‘a toilet seat.’ Not my usual thing to put in there, just seemed like a good idea at the time. That was a big no-no. You’re not to mention toilet seats in the moist box. I considered my wrists slapped.”

    But the key to the group’s success, Hegedus says, is not that it provides a sure-fire way to load your dishwasher properly—it’s the double entendres.

    “It ended up being a great place for innuendo,” Hegedus says.

Group members these days appear to be less interested in posting pictures of the perfect cutlery tray or properly tessellated dishes than in seeing who can cram as much wordplay into their comments as possible.

    Posts and comments in the group are filled with terms like “moist box” (a reference to the dishwasher), “filthy load” (a reference to the contents of the dishwasher), and “hand job” (a reference to washing by hand).

    Take for example this recent comment to a question about feuds within the group: “We filleth the salty hole until it overflows with His abundant love,” the poster wrote. “We praise the burgeoning racks and the open flaps that The Dear Loader has generously bestowed upon us. With ecstatic fervor we plunge the largest and filthiest loads that we possibly can into our hot, moist boxes.”

    In the end, the Extreme Dishwasher Loading group has achieved a huge level of popularity not because of the advice it dishes out, but because it never takes itself too seriously.

    “It’s a place to get away from everything else, because at the end of the day it is so inane and unimportant,” Dear Loader says.


  • No, ‘Leave the World Behind’ and ‘Civil War’ Aren’t Happening Before Your Eyes

    Several people are typing, and they’re all saying Netflix’s Leave the World Behind is wildly prescient. The movie, directed by Sam Esmail, opens on a world where communication has been knocked out following a cyberattack. And earlier this week, when nearly all of Meta’s platforms—Facebook, Instagram, Threads—went down, people took to (other) social media platforms to post and hand-wring about the apocalypse.

    Most of the posts, per usual, were jokes: wry observations to help soothe the agita that comes with being alive when everything feels unstable. “Another dry run for Leave the World Behind,” wrote one X user. “I fear we are moving close to a Leave the World Behind scenario,” wrote another. “These tech glitches are increasingly [sic] with regularity.”

    But there was also a more conspiratorial undercurrent. For those who don’t know, Leave the World Behind was produced by Barack and Michelle Obama through their company Higher Ground Productions. Ever since the movie’s release, a conspiracy theory has persisted online that the film is somehow a warning about the widespread disorder to come.

    This same thread emerged late last month when an AT&T network outage wreaked havoc on US cellular networks. “The predictive programming of the Obama’s [sic] movie, Leave the World Behind, is becoming a little too real right now,” one user wrote on X. “I wouldn’t put it past our own federal government to institute a terrorist or cyber attack, just to blame it on foreign countries like China and Russia.”

    Odds are that nothing of the sort happened. Leave the World Behind is based on a 2020 book by Rumaan Alam and, according to the film’s director Sam Esmail, the former US president came on as a production partner only after the script was pretty much done. “I would just say [the conspiracy theorists] are pretty wrong in terms of his signaling,” he told Collider. “It had nothing to do with that.”

    Not that facts have ever gotten in the way of an online conspiracy before. Case in point, this week’s big trailer drop: Civil War. When the first trailer for Alex Garland’s next film dropped in December, online right-wing pundits speculated that it was also predictive programming, something meant to prepare the populace for events already planned by those in power. When the new trailer dropped this week, people on Reddit and elsewhere seemed to be fretting that the film will become, as The Hollywood Reporter put it, “MAGA fantasy fuel.”

    Ultimately, reactions like these to Leave the World Behind and Civil War merely serve as proof that they’re effective as works of fiction. They’re not part of some psyop to placate the public—they’re reactions to a political era that is fraught at best. Comfort is not a prerequisite for good filmmaking; movies are supposed to be unsettling sometimes. Concerns about a movie being too real are just signs that the filmmakers have tapped into the collective psyche. Rather than thinking that Esmail or Garland—or Obama, for that matter—is trying to send some warning, perhaps consider why you’re worried that they might be.
