  • The Controversial Kids Online Safety Act Faces an Uncertain Future

    After passing the Senate nearly unanimously last week, the future of the Kids Online Safety Act (KOSA) appears uncertain. Congress is now on a six-week recess, and reporting from Punchbowl News indicates that the House Republican leadership may not prioritize bringing the bill to the floor for a vote when legislators return.

    In response to Punchbowl’s reporting, Senate Majority Leader Chuck Schumer released a statement saying, “Just one week ago, Speaker Johnson said that he’d like to get KOSA done. I hope that hasn’t changed. Letting KOSA and [the Children and Teens’ Online Protection Act] collect dust in the House would be an awful mistake and a gut punch—a gut punch to these brave, wonderful parents who have worked so hard to reach this point.” The bill has also received support from vice president and Democratic presidential candidate Kamala Harris.

    But the bill created a massive divide among the digital rights and tech accountability community. If passed, the legislation would require online platforms to block users under 18 from seeing certain types of content that the government considers harmful.

    Proponents of the measure, including the Tech Oversight Project, a nonprofit focused on tech accountability through antitrust legislation, saw the bill as a meaningful step toward holding tech companies accountable for the way their products impact children.

    “Too many young people, parents, and families have experienced the dire consequences that result from social media companies’ greed,” said Sacha Haworth, executive director of the Tech Oversight Project, in a statement in June. “The accountability KOSA would provide for these families is long overdue.”

    Others, like the nonprofit digital rights organization the Center for Democracy and Technology, said that, if enacted, the law could be used to prevent young users from accessing critical information about topics like sexual health and LGBTQ+ issues. This meant that some organizations that regularly lobby to hold Silicon Valley accountable found themselves siding with tech companies and their lobbyists in trying to kill the bill.

    “KOSA is not ready for a floor vote,” said Aliya Bhatia, policy analyst with the Center for Democracy and Technology’s Free Expression Project, in a statement in July. “In its current form, KOSA can still be misused to target marginalized communities and politically sensitive information.”

    Evan Greer, director of the nonprofit advocacy group Fight for the Future, which opposed the bill, tells WIRED that KOSA and legislation like it “divides our coalition” while allowing tech companies to “keep getting away with murder and avoiding regulation.”

    “This was never really about protecting kids,” Greer says. “It was sort of about lawmakers wanting to say that they’re protecting kids, and that doesn’t actually help kids.” Instead of legislators focusing on the “flawed” legislation, Greer says that Congress could have spent that same time and energy on antitrust-focused legislation like the American Innovation and Choice Online Act and the Open App Markets Act, or on the American Privacy Rights Act.

    “When our coalition is divided in fighting each other, we’re going to get rolled every time by Big Tech,” she says.

    Meanwhile, Linda Yaccarino, CEO of X, has said that she supports KOSA, as has the Center for Countering Digital Hate, a tech accountability nonprofit that was sued by X last year for exposing hate speech on its platform.

    Although the House Republican leadership’s decision may signal the beginning of the end of KOSA itself, Gautam Hans, an associate law professor at Cornell University, says that “given the bipartisan interest in enacting this law, I suspect other proposals will follow—with hopefully more extensive safeguards against potential censorship by the state.”

  • A Russian Propaganda Network Is Promoting an AI-Manipulated Biden Video

    Among the prominent accounts sharing the video was Russian Market, which has 330,000 followers and is operated by Swiss social media personality Vadim Loskutov, who is known for praising Russia and criticizing the West. The video was also shared by Tara Reade, who defected to Russia in 2023 in a bid for citizenship. Reade has also accused Biden of sexually assaulting her in 1993.

    The video, researchers tell WIRED, was also manipulated in a bid to avoid detection online. “Doppelganger operators trimmed the video at arbitrary points, so they are technically different in milliseconds and therefore are likely considered as distinct unique videos by abuse protection systems,” the Antibot4Navalny researchers tell WIRED.

    “This one is unique in its ambiguity,” Fink said. “It’s maybe a known Russian band, but maybe not; maybe a deepfake, but maybe not; maybe it has references to other politicians, but maybe not. In other words, it is a distinctly Soviet style of propaganda video. The ambiguity allows for multiple competing versions, which means hundreds of articles and arguments online, which leads to more people seeing it eventually.”

    As the Kremlin ramps up its efforts to undermine the US election in November, it is increasingly clear that Russia is willing to utilize emerging AI technologies. A new report published this week from threat intelligence company Recorded Future highlighted this trend by revealing that a campaign, which has been linked to the Kremlin, has been using generative AI tools to push pro-Trump content on a network of fake websites.

    The report details how the campaign, dubbed CopyCop, used the AI tools to scrape content from real news websites, repurpose it with a right-wing bias, and republish it on a network of fake websites with names like Red State Report and Patriotic Review that purport to be staffed by over 1,000 journalists—all of them fake personas invented by AI.

    The topics pushed by the campaign include errors made by Biden during speeches, Biden’s age, poll results that show a lead for Trump, and claims that Trump’s recent criminal conviction and trial were “impactless” and “a total mess.”

    It is still unclear how much impact these sites are having, and a review by WIRED of social media platforms found very few links to the network of fake websites CopyCop has created. But what the CopyCop campaign has proved is that AI can supercharge the dissemination of disinformation. And experts say this is likely just the first step in a broader strategy that will include networks like Doppelganger.

    “Estimating the engagement with the websites themselves remains a difficult task,” Clément Briens, an analyst at Recorded Future, tells WIRED. “The AI-generated content is likely not garnering attention at all. However, it serves the purpose of helping establish these websites as credible assets for when they publish targeted content like deepfakes [which are] amplified by established Russian or pro-Russian influence actors with existing following and audiences.”

  • No, Drake’s Cover of ‘Hey There Delilah’ Isn’t AI

    As if he didn’t have enough to deal with amid his beef with Kendrick Lamar (or perhaps to distract from it), Drake showed up on a remix of parody rapper Snowd4y’s cover of Plain White T’s “Hey There Delilah,” called “Wah Gwan Delilah,” that has everyone … perplexed? Annoyed? Laughing?

    Let’s walk through this together; it’s a mess.

    On Monday, a fresh remix of “Wah Gwan Delilah” showed up on Snowd4y’s SoundCloud. It had what appeared to be Drake joining the comedian in a series of quips about women and name-checks of Toronto landmarks like the Yonge-Dundas Square mall. (“Wah gwan” is Jamaican patois for “What’s up?” and is common in the city, which has long had a sizable population of people of Caribbean descent.)

    As the track spread, it made its way to the Plain White T’s themselves, who posted a video on X and TikTok with the caption “too stunned to speak.” Frontman Tom Higgenson says “it’s crazy that everybody thinks that it’s real,” seemingly referencing early rumors that Drake’s lyrics were generated using artificial intelligence. Higgenson also makes a series of faces that give off the appearance that he just smelled a fart.

    Those rumors, though, are likely untrue. Drake posted the song to his Instagram Story, seemingly confirming its authenticity.

    It’s easy to see, though, why everyone was confused. AI, as WIRED’s Evy Kwong pointed out on TikTok Thursday, has become so prevalent that it has caused people to question everything. When the song “Heart on My Sleeve” dropped, it took many people several listens to realize it wasn’t actually Drake and The Weeknd. Many fans probably never would’ve known The Beatles’ “Now and Then” wasn’t just a pristine long-lost tape if Paul McCartney hadn’t touted the AI needed to save it. Johnny Cash covers Taylor Swift from beyond the grave. Examples of AI’s ability to fool our ears feel truly endless.

    The Monitor is a weekly column devoted to everything happening in the WIRED world of culture, from movies to memes, TV to TikTok.

    With the realness of Drake’s presence on “Wah Gwan Delilah” seemingly confirmed, the floodgates opened. Rap Twitter, as Billboard noted, had a field day “with the main perception being that after losing the battle to Kendrick, Drake is now just losing it in general” and “leaning into his Toronto-ness” for some image repair.

    Likely, this will have the opposite effect. While Drake Reddit is screaming that it’s satire and if people don’t get it, “the joke is probably on you,” other swaths of the internet remain unable to keep a straight face—or at least a non-cringing one. Vulture, in its writeup of the remix, simply said “post-beef Drake cannot be serious.”



  • Twitter Is Finally Dead. It’s X All the Way Down

    Like a venomous puss moth emerging from its hard cocoon, the social network formerly known as Twitter has fully metamorphosed into X.com.

    Various elements of Twitter had already embraced the rebranding, and the company has been using X.com links since early April. But now the domain has flipped over entirely, marking the end of a tumultuous transition period—and erasing the last vestiges of the bird app.

    “We are letting you know that we are changing our URL, but your privacy and data protection settings remain the same,” reads a message at the bottom of the X login and home pages.

    The switchover has been a long time coming. Elon Musk announced the shift from Twitter to X last July. But the billionaire has for decades harbored a dream of creating an “everything app” by that name, and Twitter is his vessel.

    “The Twitter name made sense when it was just 140-character messages going back and forth—like birds tweeting—but now you can post almost anything, including several hours of video,” Musk wrote on the newly redubbed X last summer. “In the months to come, we will add comprehensive communications and the ability to conduct your entire financial world. The Twitter name does not make sense in that context, so we must bid adieu to the bird.”

    Twitter under Musk has indeed added video and voice calls to its roster of features. It has also replatformed conspiracy theorists like Alex Jones, fostered a welcoming environment for porn spam accounts, made an absolute hash out of verification, introduced a monetization system that encourages rampant engagement farming, gutted its trust and safety team, allowed a surge in hate speech on the platform, designated NPR as “US state-affiliated media,” removed news headlines entirely and then reintroduced them in a weird spot, kneecapped a bunch of fun bots and third-party apps by introducing wildly expensive API changes while giving blue-check verification to AI-generated chum, pivoted to video, introduced an AI model that will help you do crimes, and overseen a decline in usage of more than 20 percent in the US, according to app analytics firm Sensor Tower.

    The “entire financial world” part remains a work in progress.

    A sentimentalist may bemoan the death of Twitter, which for all its faults always had a capacity to delight and surprise. But remember that this transformation was inevitable. Musk first owned X.com in 1999, when he cofounded an online bank by that name; it would eventually merge with a competitor and become PayPal. He bought X.com back from PayPal in 2017, tweeting that it had “great sentimental value.” And he has seen Twitter as a vessel to create X on Earth since before the acquisition was even completed, according to Musk biographer Walter Isaacson.

    “In the days leading up to his takeover of Twitter at the end of October 2022, Musk’s moods fluctuated wildly,” Isaacson wrote in Elon Musk. “He said that he would turn it into the combination of financial platform and social network he had envisioned 24 years earlier for X.com, and he added that he planned to rebrand it with that name, which he loved.”

    To put an even finer point on it, Musk’s tweet today announcing that “all core systems are now on X.com” featured the logo of the company he founded 25 years ago.

    While X may never become the everything app of Musk’s dreams, it’s undeniably and indelibly a different place than the one he bought. Which in some ways makes this final transition all the more palatable. Whatever Elon Musk’s platform has become, it’s certainly not Twitter. Call it whatever you want.

  • Threads is giving Taiwanese users a safe space to talk about politics

    3. The US government is considering cutting the so-called de minimis exemption from import duties, which makes it cheap for Temu and Shein to send packages to the US. But lots of US companies also benefit from the exemption now. (The Information $)

    4. The Chinese commerce minister will visit Europe soon to plead his country’s case amid the European Commission’s investigation into Chinese electric vehicles. (Reuters $)

    5. After three years of unsuccessful competition with WhatsApp, ByteDance’s messaging app designed for the African market finally shut down last month. (Rest of World)

    6. The rapid progress of AI makes it seem less necessary to learn a foreign language. But there are still things AI loses in translation. (The Atlantic $)

    7. This is the incredible story of a Chinese man who takes his piano to play outdoors at places of public grief: in front of the covid quarantine barriers in Wuhan, at the epicenter of an earthquake, on a river that submerged villages. And he plays the same song—the only song he knows, composed by the Japanese composer Ryuichi Sakamoto. (NPR)

    Lost in translation

    With Netflix’s March release of 3 Body Problem, a series adapted from the global hit sci-fi novel by Chinese author Liu Cixin, Western audiences are also learning about a movie-like real-life drama behind the adaptation. In 2021, the Chinese publication Caixin first investigated the mysterious death of Lin Qi, a successful businessman who bought the movie rights to the book. In 2017, Lin hired Xu Yao, a prominent attorney, to work on legal affairs and government relations.

    In December 2020, Lin died after he was poisoned by a mysterious mix of toxins. According to Caixin, Xu is a fan of the TV series Breaking Bad and had his own plant in Shanghai where he made poisons. He would order hundreds of different toxins through the dark web, mix them, and use them on pets to experiment. A week before Lin’s death, Xu gave him a bottle of pills that were supposedly prebiotics, but he had replaced them with poison. 

    Xu was arrested soon after Lin died, and he was sentenced to death on March 22 this year.

    One more thing

    Taobao, China’s leading e-commerce platform, announced it’s experimenting with delivering packages by rockets. Yes, rockets. Made by a Chinese startup, Taobao’s pilot rockets will be able to deliver something as big as a car or a truck, and the rockets can be reused for the next delivery. To be honest, I still can’t believe this wasn’t an April Fool’s joke.

  • Why Threads is suddenly popular in Taiwan

    Still, Threads’ popularity plummeted after its launch in July 2023. In Taiwan—like the rest of the world—many users left the platform after satisfying their initial curiosity. 

    But the 2024 Taiwanese presidential election gave it another chance. Wang, who studies social media in Taiwan, traced the platform’s second rise to November of last year, starting with the supporters of Taiwan’s Democratic Progressive Party (DPP), often associated with the color green. “Many (worried) pan-green supporters noticed that their complaints on politics were promoted to more readers on Threads than any other social media platforms (especially Facebook and Instagram), so more and more pan-green supporters gathered to Threads and used it as a mobilization tool,” he says.

    The election concluded in mid-January, with DPP candidate Lai Ching-te elected as Taiwan’s president. Many supporters of his party stayed on the platform. And as it became influential, other political figures also reactivated their Threads accounts and started posting regularly, trying to join the conversation. Everyday users who are less interested in politics came along too.

    On almost every day of the past three months, Threads has been the most downloaded social network app in both Apple’s and Google’s app stores in Taiwan, according to Sensor Tower, an app store intelligence firm. It surpassed both Western social platforms and those popular in China.

    What does Taiwan Threads look like?

    Wang, who has been actively posting on Threads and accumulated over 3,000 followers, observes that there are two major demographics among Taiwan’s Threads users today: the pro-green voters, and younger students who are still in middle school and high school. “In recent weeks, there is a considerable amount of discussion on how to choose colleges, majors, and even high schools,” he says.

    Since Threads doesn’t have an official name in Chinese, Taiwanese users have tried to translate it in creative ways. Some stay close to the meaning and call it 串 or chuan, which means a string of beads or other objects (it could also mean a kebab skewer). Others call it 脆 or cui, which means crispy or fragile. It’s a transliteration attempt that many feel is too far-fetched, but since there’s no sound like “th” in Mandarin, it’s the best alternative, and it has already caught on among users and surpassed other names.

    What defines the content on Threads is a mix of political and lifestyle posts. On the one hand, some of the most influential accounts are Taiwanese politicians at all levels, including the presidential candidates. On the other, Threads users have embraced a type of content called 廢文—a cross between trash talk and light-stakes monologue. 

    As a result, to gain a following on Threads, the best practice is to mix up the serious and the unserious. One local representative candidate became unexpectedly famous when people discovered that his son was physically attractive. Joking about how this son’s virality has eclipsed his own, the politician now calls himself “The father of the son of Phoenix Cheng” on Threads, where he has over 268,000 followers.

  • Elon Musk Gave Himself No Choice but to Open Source His Chatbot Grok

    After suing OpenAI this month, alleging the company has become too closed, Elon Musk says he will release his “truth-seeking” answer to ChatGPT, the chatbot Grok, for anyone to download and use.

    “This week, @xAI will open source Grok,” Musk wrote on his social media platform X today. That suggests his AI company, xAI, will release the full code of Grok and allow anyone to use or alter it. By contrast, OpenAI makes a version of ChatGPT and the language model behind it available to use for free but keeps its code private.

    Musk had previously said little about the business model for Grok or xAI, and the chatbot was made available only to Premium subscribers to X. Having accused his OpenAI cofounders of reneging on a promise to give away the company’s artificial intelligence earlier this month, Musk may have felt he had to open source his own chatbot to show that he is committed to that vision.

    OpenAI responded to Musk’s lawsuit last week by releasing email messages between Musk and others in which he appeared to back the idea of making the company’s technology more closed as it became more powerful. Musk ultimately plowed more than $40 million into OpenAI before parting ways with the project in 2018.

    When Musk first announced Grok was in development, he promised that it would be less politically biased than ChatGPT or other AI models, which he and others with right-leaning views have criticized for being too liberal. Tests by WIRED and others quickly showed that although Grok can adopt a provocative style, it is not hugely biased one way or another—perhaps revealing the challenge of aligning AI models consistently with a particular viewpoint.

    Open sourcing Grok could help Musk drum up interest in his company’s AI. Limiting Grok access to only paid subscribers of X, one of the smaller global social platforms, means that it does not yet have the traction of OpenAI’s ChatGPT or Google’s Gemini. Releasing Grok could draw developers to use and build upon the model, and may ultimately help it reach more end users. That could provide xAI with data it can use to improve its technology.

    Musk’s move to liberate Grok sees him align with Meta’s approach to generative AI. Meta’s open source models, like Llama 2, have become popular among developers because they can be fully customized and adapted to different uses. But adopting a similar strategy could draw Musk further into a growing debate over the benefits and risks of giving anyone access to the most powerful AI models.

    Many AI experts argue that open sourcing AI models has significant benefits such as increasing transparency and broadening access. “Open models are safer and more robust, and it’s great to see more options from leading companies in the space,” says Emad Mostaque, founder of Stability AI, a company that builds various open source AI models.



  • Security News This Week: Russian Hackers Stole Microsoft Source Code—and the Attack Isn’t Over

    For years, Registered Agents Inc.—a secretive company whose business is setting up other businesses—has registered thousands of companies to people who appear to not exist. Multiple former employees tell WIRED that the company routinely incorporates businesses on behalf of its customers using what they claim are fake personas. An investigation found that incorporation paperwork for thousands of companies that listed these allegedly fake personas had links to Registered Agents.

    State attorneys general from around the US sent a letter to Meta on Wednesday demanding the company take “immediate action” amid a record-breaking spike in complaints over hacked Facebook and Instagram accounts. Figures provided by the office of New York attorney general Letitia James, who spearheaded the effort, show that in 2023 her office received more than 780 complaints—10 times as many as in 2019. Many users whose complaints are cited in the letter say Meta did nothing to help them recover their stolen accounts. “We refuse to operate as the customer service representatives of your company,” the officials wrote in the letter. “Proper investment in response and mitigation is mandatory.”

    Meanwhile, Meta suffered a major outage this week that took most of its platforms offline. When service came back, users were often forced to log back in to their accounts. Last year, however, the company changed how two-factor authentication works for Facebook and Instagram: any devices you’ve frequently used with Meta services in recent years are now trusted by default. The move has made experts uneasy, since it means your devices may no longer need a two-factor authentication code to log in. We updated our guide for how to turn off this setting.

    A ransomware attack targeting medical firm Change Healthcare has caused chaos at pharmacies around the US, delaying delivery of prescription drugs nationwide. Last week, a Bitcoin address connected to AlphV, the group behind the attack, received $22 million in cryptocurrency—suggesting Change Healthcare has likely paid the ransom. A spokesperson for the firm declined to answer whether it was behind the payment.

    And there’s more. Each week, we highlight the news we didn’t cover in depth ourselves. Click on the headlines below to read the full stories. And stay safe out there.

    In January, Microsoft revealed that a notorious group of Russian state-sponsored hackers known as Nobelium infiltrated the email accounts of the company’s senior leadership team. Today, the company revealed that the attack is ongoing. In a blog post, the company explains that in recent weeks, it has seen evidence that hackers are leveraging information exfiltrated from its email systems to gain access to source code and other “internal systems.”

    It is unclear exactly what internal systems were accessed by Nobelium, which Microsoft calls Midnight Blizzard, but according to the company, it is not over. The blog post states that the hackers are now using “secrets of different types” to breach further into its systems. “Some of these secrets were shared between customers and Microsoft in email, and as we discover them in our exfiltrated email, we have been and are reaching out to these customers to assist them in taking mitigating measures.”

    Nobelium is responsible for the SolarWinds hack, a sophisticated 2020 supply-chain attack that compromised thousands of organizations, including major US government agencies such as the Departments of Homeland Security, Defense, Justice, and Treasury.

  • Elon Musk’s Lawsuit Against a Group That Found Hate Speech on X Isn’t Going Well

    Soon after Elon Musk took control of Twitter, now called X, the platform faced a massive problem: Advertisers were fleeing. But that, the company alleges, was someone else’s fault. On Thursday that argument went before a federal judge, who seemed skeptical of the company’s allegations that a nonprofit’s research tracking hate speech on X had compromised user security, and that the group was responsible for the platform’s loss of advertisers.

    The dispute began in July when X filed suit against the Center for Countering Digital Hate, a nonprofit that tracks hate speech on social platforms and had warned that the platform was seeing an increase in hateful content. Musk’s company alleged that CCDH’s reports cost it millions in advertising dollars by driving away business. It also claimed that the nonprofit’s research had violated the platform’s terms of service and endangered users’ security by scraping posts using the login of another nonprofit, the European Climate Foundation.

    In response, CCDH filed a motion to dismiss the case, alleging that it was an attempt to silence a critic of X with burdensome litigation using what’s known as a “strategic lawsuit against public participation,” or SLAPP.

    On Thursday, lawyers for CCDH and X went before Judge Charles Breyer in the Northern California District Court for a hearing to decide whether X’s case against the nonprofit will be allowed to proceed. The outcome of the case could set a precedent for exactly how far billionaires and tech companies can go to silence their critics. “This is really a SLAPP suit disguised as a contractual suit,” says Alejandra Caraballo, clinical instructor at Harvard Law School’s Cyberlaw Clinic.

    Unforeseen Harms

    X alleges that CCDH used the European Climate Foundation’s login to a social network listening tool called Brandwatch, which has a license to access X data through the company’s API. In the hearing Thursday, X’s attorneys argued that CCDH’s use of the tool had forced the company to spend time and money investigating the scraping, costs for which it sought compensation on top of damages for the advertisers the nonprofit’s report scared away.

    Judge Breyer pressed X’s attorney, Jonathan Hawk, on that claim, questioning how scraping posts that were publicly available could violate users’ safety or the security of their data. “If [CCDH] had scraped and discarded the information, or scraped that number and never issued a report, or scraped and never told anybody about it. What would be your damages?” Breyer asked X’s legal team.

    Breyer also pointed out that it would have been impossible for anyone agreeing to Twitter’s terms of service in 2019, as the European Climate Foundation did when it signed up for Brandwatch, years before Musk’s purchase of the platform, to anticipate how its policies would drastically change later. He suggested it would be difficult to hold CCDH responsible for harms it could not have foreseen.

    “Twitter had a policy of removing tweets and individuals who engaged in neo-Nazi, white supremacists, misogynists, and spreaders of dangerous conspiracy theories. That was the policy of Twitter when the defendant entered into its terms of service,” Breyer said. “You’re telling me at the time they were excluded from the website, it was foreseeable that Twitter would change its policies and allow these people on? And I am trying to figure out in my mind how that’s possibly true, because I don’t think it is.”

  • Elon Musk’s X Gave Check Marks to Terrorist Group Leaders, Report Says

    A watchdog group’s investigation found that terrorist group Hezbollah and other US-sanctioned entities have accounts with paid check marks on X, the Elon Musk–owned social network that still resides at the Twitter.com domain.

    The Tech Transparency Project (TTP), a nonprofit that is critical of Big Tech companies, said in a report on Wednesday that “X, the platform formerly known as Twitter, is providing premium, paid services to accounts for two leaders of a US-designated terrorist group and several other organizations sanctioned by the US government.”

    After buying Twitter for $44 billion, Musk started charging users for check marks that were previously intended to verify that an account was notable and authentic. “Along with the check marks, which are intended to confer legitimacy, X promises various perks for premium accounts, including the ability to post longer text and videos and greater visibility for some posts,” the Tech Transparency Project report noted.

    The Tech Transparency Project suggests that X may be violating US sanctions. “The accounts identified by TTP include two that apparently belong to the top leaders of Lebanon-based Hezbollah and others belonging to Iranian and Russian state-run media,” the report said. “The fact that X requires users to pay a monthly or annual fee for premium service suggests that X is engaging in financial transactions with these accounts, a potential violation of US sanctions.”

    Some of the accounts were verified before Musk bought Twitter, but verification was a free service at the time. Musk’s decision to charge for check marks means that X is “providing a premium, paid service to sanctioned entities,” which may raise “new legal issues,” the Tech Transparency Project said.

    Report Details 28 Check-Marked Accounts

    Musk’s X charges $1,000 a month for a Verified Organizations subscription and last month added a basic tier for $200 a month. For individuals, the X Premium tiers that come with check marks cost $8 or $16 a month.

    It’s possible for US companies to receive a license from the government to engage in certain transactions with sanctioned entities, but it doesn’t seem likely that X has such a license. X’s rules explicitly prohibit users from purchasing X Premium “if you are a person with whom X is not permitted to have dealings under US and any other applicable economic sanctions and trade compliance law.”

    In all, the Tech Transparency Project said it found 28 “verified” accounts tied to sanctioned individuals or entities. These include individuals and groups listed by the US Treasury Department’s Office of Foreign Assets Control (OFAC) as Specially Designated Nationals.

    “Of the 28 X accounts identified by TTP, 18 show they got verified after April 1, 2023, when X began requiring accounts to subscribe to paid plans to get a check mark. The other 10 were legacy verified accounts, which are required to pay for a subscription to retain their check marks,” the group wrote, adding that it “found advertising in the replies to posts in 19 of the 28 accounts.”

    X issued the following statement on Wednesday: “X has a robust and secure approach in place for our monetization features, adhering to legal obligations, along with independent screening by our payments providers. Several of the accounts listed in the Tech Transparency Report are not directly named on sanction lists, while some others may have visible account check marks without receiving any services that would be subject to sanctions. Our teams have reviewed the report and will take action if necessary. We’re always committed to ensuring that we maintain a safe, secure and compliant platform.”

    X Removes Some Check Marks

    An account with the handle @SH_NasrallahEng appears to be tied to Hezbollah leader Hassan Nasrallah, the TTP report said. The account had a check mark when we first checked it earlier Wednesday, but it has since been removed.

    “The account, which has 93,600 followers, posts English-language Hezbollah messages and memes disparaging Israel and the US. It was created in October 2021 and verified in November 2023, the same month that Nasrallah threatened further escalation of Israel’s war with Hamas,” the report said.


