Tag: content moderation

  • The Sticky Dilemmas of Pornhub’s Next Chapter

    Videos of minors. Illegal data collection. Lack of oversight. Lawsuits. Problems have dogged the popular porn site for years. Is its promise of transparency enough for a reset?

    [ad_2]

    Source link

  • Trump FCC Pick Brendan Carr Wants to Be the Speech Police. That’s Not His Job

    “What he can do and wants to do is use his bully pulpit to bully companies that moderate content in a way he doesn’t like,” says Evan Greer, director of Fight for the Future, a digital rights advocacy group. “And if he continues to do that, he’s very likely to run smack into the First Amendment, which, contrary to misconception, is the real thing that protects online speech.” Section 230 protects social media companies from being sued over the content users post on their platforms, while the First Amendment explicitly bars the government from interfering in someone’s ability to exercise free speech. Over the summer, the Supreme Court ruled that a company’s moderation decisions are protected under the First Amendment.

As for Section 230, the Supreme Court may have just made it more difficult for administrative agencies like the FCC to reinterpret it to their liking. Over the summer, the Supreme Court overturned Chevron v. Natural Resources Defense Council (NRDC), the decision that had allowed government agencies to independently interpret the ambiguous statutes they administer. With Chevron deference rendered moot, it could be an uphill battle for the FCC to advance its own interpretations of the law.

“Agencies are basically losing the ability to interpret how they can enforce when language is vague in the statute,” says Lewis. “Section 230’s language is actually very short and very straightforward and has no FCC action attached to it.” If Carr decided to issue a rule modifying Section 230, it would likely be met with legal challenges. Still, with Republicans currently controlling all three branches of government, courts could rule in the administration’s favor, or Congress could pass new legislation making the FCC the top cop on the beat.

    Trump has tried to deputize the FCC into policing online speech before. In 2020, Trump signed an executive order instructing the FCC to begin a rule-making process to reinterpret when Section 230 would apply to social platforms like Facebook and Instagram. The Center for Democracy and Technology, which receives funding from big tech companies, challenged the order as unconstitutional, saying that it unfairly punished X, then known as Twitter, “to chill the constitutionally protected speech of all online platforms and individuals.”

    Months later, the FCC general counsel Tom Johnson published a blog post arguing that the agency has the authority to reinterpret the foundational internet law. A few days after that, then-FCC chairman Ajit Pai announced that the agency would move forward on a rule-making process, but no rule was ordered before President Joe Biden’s inauguration, giving Democrats control over agency decisions.


  • Inside Two Years of Turmoil at Big Tech’s Anti-Terrorism Group

    The four tech giants have presided over the consortium since they announced it in 2016, when Western governments were berating them for allowing Islamic State to post gruesome videos of journalists and humanitarians being beheaded. Now with a staff of eight, GIFCT—which the board organized as a US nonprofit in 2019 after the Christchurch massacre—is one of the groups through which tech competitors are meant to work together to address discrete online harms, including child abuse and the illicit trade of intimate images.

The efforts have helped bring down some unwelcome content, and pointing to the work can help companies stave off onerous regulations. But the politics involved in managing the consortium generally stay secret.

Just eight of GIFCT’s 25 member companies answered WIRED’s requests for comment. The respondents, which included Meta, Microsoft, and YouTube, all say they are proud to be part of what they view as a valuable group. The consortium’s executive director, Naureen Chowdhury Fink, didn’t dispute WIRED’s reporting. She says TikTok remains in the process of attaining membership.

    GIFCT has relied on voluntary contributions from its members to fund the roughly $4 million it spends annually, which covers salaries, research, and travel. From 2020 through 2022, Microsoft, Google, and Meta each donated a sum of at least $4 million and Twitter $600,000, according to the available public filings. Some other companies contributed tens of thousands or hundreds of thousands of dollars, but most paid nothing.

    By last year, at least two board members were enraged at companies they perceived as freeloaders, and fears spread among the nonprofit’s staff over whether their jobs were in jeopardy. It didn’t help that as Musk turned Twitter into X about a year ago, he kept slashing costs, including suspending the company’s optional checks to GIFCT, according to two people with direct knowledge.

    To diversify funding, the board has signed off on soliciting foundations and even exploring government grants for non-core projects. “We’d really have to carefully consider if it makes sense,” Chowdhury Fink says. “But sometimes working with multiple stakeholders is helpful.”

Rights activists the group privately consulted questioned whether this would count as subsidies for tech giants, which could siphon resources from potentially more potent anti-extremism projects. But records show staff were considering seeking a grant worth tens of thousands of dollars from the pro-Israel philanthropy Newton and Rochelle Becker Charitable Trust. Chowdhury Fink says GIFCT didn’t end up applying.

    This year, Meta, YouTube, Microsoft, and X amended GIFCT’s bylaws to require minimum annual contributions from every member starting in 2025, though Chowdhury Fink says exemptions are possible.

    Paying members will be able to vote for two board seats, she says. Eligibility for the board is contingent on making a more sizable donation. X had signaled it wouldn’t pay up and would therefore forfeit its seat, two sources say—a development that ended up happening this month. It had been scheduled to hold tiebreaking power among the four-company board in 2025. (Under the bylaws, Meta, YouTube, and Microsoft could have ejected Twitter from the board as soon as Musk acquired the company. But they chose not to exercise the power.)


  • X’s First Transparency Report Since Elon Musk’s Takeover Is Finally Here

Today, X released its first transparency report since Elon Musk bought the company, formerly known as Twitter, in 2022.

Before Musk’s takeover, Twitter would release transparency reports every six months. These largely covered the same ground as the new X report, giving specific numbers for takedowns, government requests for information, and content removals, as well as data about which content was reported and, in some cases, removed for violating policies. The last transparency report available from Twitter covered the second half of 2021 and was 50 pages long. (X’s is a shorter 15 pages, but requests from governments are also listed elsewhere on the company’s website and have been consistently updated to remain in compliance with various government orders.)

    Comparing the 2021 report to the current X transparency report is a bit difficult, as the way the company measures different things has changed. For instance, in 2021, 11.6 million accounts were reported. Of this 11.6 million, 4.3 million were “actioned” and 1.3 million were suspended. According to the new X report, there were over 224 million reports, of both accounts and pieces of individual content, but the result was 5.2 million accounts being suspended.

    While some numbers remain seemingly consistent across the reports—reports of abuse and harassment are, somewhat predictably, high—in other areas, there’s a stark difference. For instance, in the 2021 report, accounts reported for hateful content accounted for nearly half of all reports, and 1 million of the 4.3 million accounts actioned. (The reports used to be interactive on the website; the current PDF no longer allows users to flip through the data for more granular breakdowns.) In the new X report, the company says it has taken action on only 2,361 accounts for posting hateful content.

    But this may be due to the fact that X’s policies have changed since it was Twitter, which Theodora Skeadas, a former member of Twitter’s public policy team who helped put together its Moderation Research Consortium, says might change the way the numbers look in a transparency report. For instance, last year the company changed its policies on hate speech, which previously covered misgendering and deadnaming, and rolled back its rules around Covid-19 misinformation in November of 2022.

    “As certain policies have been modified, some content is no longer violative. So if you’re looking at changes in the quality of experience, that might be hard to capture in a transparency report,” she says.

    X has also lost users since Musk’s takeover, further complicating what the new reality of the platform might look like. “If you account for changing usage, is it a lower number?” she asks.

    After taking over the company in October of 2022, Musk fired the majority of the company’s trust and safety staff as well as its policy staff, the people who make the platform’s rules and ensure they’re enforced. Under Musk, the company also began charging for its API, making it harder for researchers and nonprofits to access X data to see what was really going on on the platform. This may also account for changes between the two reports.


  • Why It’s So Hard to Fully Block X in Brazil

The social network X has been largely inaccessible in Brazil since Saturday, after the country’s Supreme Court ordered all mobile and internet service providers to block the platform. The court order followed a months-long dispute between Judge Alexandre de Moraes and X owner Elon Musk over the company’s misinformation, hate speech, and moderation policies.

With a population of 215 million people, a mature democracy, a sprawling landmass, and more than 20,000 internet service providers, Brazil is not an easy place to block a web platform. And while the biggest ISPs have implemented the ban, many are still scrambling to comply with the order, leaving a patchwork of access to the site.

    “Brazil has made headway blocking X on the main internet providers, but our telemetry indicates there’s a long tail of local and regional ISPs where the service is still available,” says Isik Mater, director of research at the internet censorship analysis group NetBlocks.

The Open Observatory of Network Interference reported that a similar progression played out when Brazil’s Federal Police obtained a court order in April 2023 for ISPs to block the communication platform Telegram because it would not fully share information about users involved in neo-Nazi group chats. Some large ISPs began blocking Telegram immediately; “however, the block was not implemented by all ISPs in Brazil, nor was it implemented in the same way,” the group wrote. “This suggests lack of coordination between providers, and that each ISP implemented the block autonomously.”

A similar progression has been playing out with the X ban. Brazil’s 20,000 ISPs make for a notably competitive market, but only a few have infrastructure nationwide. About 40 percent are tiny regional providers with 5,000 customers or fewer. The human and digital rights watchdog Freedom House rates Brazil’s internet freedom as “partly free” and trending more restrictive, because of the country’s far-reaching efforts to crack down on political misinformation in recent years and its three-day ban on Telegram. Brazil also blocked the secure communication platform WhatsApp in December 2015 and again in May 2016 because it did not respond to similar data requests.

    Brazil’s National Telecommunications Agency ANATEL did not respond to WIRED’s multiple requests for comment.

    Unlike in countries including Russia, Iran, and China, there is currently no legal apparatus or technical infrastructure by which the Brazilian government can systematically and comprehensively restrict access to particular websites or online platforms or impose connectivity blackouts on its citizens.

    Reports indicate that many Brazilian ISPs that have implemented the ban are using the technique known as “DNS filtering” to block access to X. The Domain Name System is the internet’s phonebook for looking up the IP addresses associated with URLs like www.wired.com. DNS queries are sent to a DNS “resolver” that does the IP address lookups, and ISPs can configure their resolvers to filter or block requests for particular websites.
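The filtering step described above can be sketched as a resolver-side check. This is a minimal illustration, not any Brazilian ISP's actual configuration: the blocklist entries and the lookup callback are hypothetical stand-ins.

```python
# Minimal sketch of DNS filtering, as an ISP-operated resolver might apply it.
# BLOCKLIST contents and the `lookup` callback are illustrative assumptions.
BLOCKLIST = {"x.com", "www.x.com"}

def resolve(hostname, lookup):
    """Return the IP for hostname, or None if the resolver filters the domain."""
    name = hostname.lower().rstrip(".")  # normalize case and trailing dot
    if name in BLOCKLIST:
        return None  # client sees a failed lookup instead of the real address
    return lookup(name)  # otherwise fall through to an ordinary DNS lookup
```

A client pointed at an unfiltered resolver outside the ISP would bypass this check entirely, which is one reason DNS filtering on its own is leaky.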

    Mobile apps like X’s Android and iOS apps don’t rely on DNS, though, so DNS filtering alone is not enough to block all connections to a web platform. Some Brazilian ISPs seem to also be using IP address “sinkholing”—redirecting online traffic to a different server than the users intended to visit—as a way to send traffic meant for X into the abyss.
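Sinkholing operates on destination IP addresses rather than domain names, so it catches app traffic that never issues a blockable DNS query. A rough sketch of the routing decision, using made-up addresses from the reserved documentation ranges rather than any real X infrastructure:

```python
# Hypothetical sketch of IP sinkholing: packets bound for blocked addresses
# are rerouted to a sinkhole host; everything else is forwarded normally.
SINKHOLE = "192.0.2.1"                   # assumed sinkhole address (documentation range)
BLOCKED_DESTINATIONS = {"198.51.100.7"}  # assumed addresses serving the platform

def next_hop(dest_ip):
    """Return the address the ISP actually routes a packet toward."""
    if dest_ip in BLOCKED_DESTINATIONS:
        return SINKHOLE  # traffic for the platform goes "into the abyss"
    return dest_ip       # all other traffic proceeds to its real destination
```

The trade-off in this design is maintenance: the ISP must track the platform's ever-shifting set of server addresses, whereas DNS filtering only needs its domain names.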

“We’re seeing variation by provider in Brazil, and right now it looks like they’re each trying their own thing to see what works,” NetBlocks’ Mater says. “Brazil has a diverse network infrastructure with lots of ways for data to enter and leave the country, so there isn’t that centralized choke point and ‘kill switch’ we see in [some] authoritarian-leaning countries.”

VPN usage has surged in Brazil this week as a way around ISP attempts to block X, but the court order includes a provision that people could be charged a fine of 50,000 reais—about $8,900—per day for using circumvention tools like VPNs.


  • Mark Zuckerberg Vows to Be Neutral–While Tossing Gifts to Trump and the GOP

This week Mark Zuckerberg sent a letter to Jim Jordan, the chair of the House Judiciary Committee. For months, the GOP-led committee has been on a crusade to prove that Meta, via its once-eponymous Facebook app, engaged in political sabotage by taking down right-wing content. Its investigation has involved thousands of documents and interviews with multiple employees but has failed to locate a smoking gun. Now, under the guise of offering his take on the subject, Zuckerberg has delivered a mea culpa of a letter in which he seems to indicate that there was something to the GOP conspiracy theory.

    Specifically, he said that in 2021 the Biden administration asked Meta “to censor some Covid-related content.” Meta did take the posts down, and Zuckerberg now regrets the decision. He also conceded that it was wrong to take down some content regarding Hunter Biden’s laptop, which the company did after the FBI warned that the reports might be Russian disinformation.

    What stood out to me, besides the letter’s simpering tone, was how Zuckerberg used the word “censor.” For years the right has been using that word to describe what it regards as Facebook’s systematic suppression of conservative posts. Some state attorneys general have even used that trope to argue that the company’s content should be regulated, and Florida and Texas have passed laws to do just that. Facebook has always contended that the First Amendment is about government suppression, and by definition its content decisions could not be characterized as such. Indeed, the Supreme Court dismissed the lawsuits and blocked the laws.

Now, by using that term to describe the removal of the Covid material, Zuckerberg seems to be backing down. After years of insisting that, right or wrong, a social media company’s content decisions did not deprive people of First Amendment rights—and in fact arguing that by making such decisions the company was exercising its own free speech rights—Zuckerberg is now handing the company’s conservative critics just what they wanted.

    I asked Meta spokesperson Andy Stone if the company now agrees with the GOP that some of its decisions to take down content can be referred to as “censoring.” Stone said that Zuckerberg was referring to the government when he used that term. But he also pointed me to Zuckerberg’s affirmation that the ultimate decision to remove the posts was Meta’s own. (Responding to the Zuckerberg letter, the White House said, “When confronted with a deadly pandemic, this Administration encouraged responsible actions to protect public health and safety,” and left the final decision to Facebook.)

Meta can’t have it both ways. The letter is clear—Zuckerberg said the government pressured Meta to “censor” some Covid content. Meta took that material down. Ergo, Meta now characterizes some of its own actions as censorship. Seizing on this, the GOP members of the Judiciary Committee quickly tweeted that Zuckerberg has now outright admitted “Facebook censored Americans.”

    Stone did say that Meta still does not consider itself a censor. So is Meta disputing that GOP tweet? Stone wouldn’t comment on it. It seems that Meta will offer no pushback while GOP legislators and right-wing commentators crow that Facebook now concedes that it blatantly censored conservatives as a matter of policy.

Meta’s CEO presented Jordan and the GOP with another gift in his letter, involving his private philanthropy. During the 2020 election, Zuckerberg helped fund nonpartisan initiatives to protect people’s right to vote. Republicans criticized Zuckerberg’s effort as aiding the Democrats. Zuckerberg still insists he wasn’t advocating that people vote a certain way, just ensuring they were free to cast ballots. But, he wrote Jordan, he recognized that some people didn’t believe him. So, apparently to indulge those ill-informed or ill-intentioned critics, he now vows not to fund nonpartisan voting efforts during this election cycle. “My goal is to be neutral and not play a role one way or another—or even appear to play a role,” he wrote.


  • Telegram Faces a Reckoning in Europe. Other Founders Should Beware

    “[Elon] Musk and fellow executives should be reminded of their criminal liability,” said Bruce Daisley, a former executive at Twitter, who worked at the company’s British office, days after British protesters tried to set fire to a hotel for asylum seekers.

    But Telegram has provoked politicians more than any other platform. What could be called the company’s uncollaborative approach has put the platform—part messaging app, part social media network—on a collision course with governments around the world.

    The case in France is far from the first time Telegram has been reprimanded by authorities for its refusal to cooperate. Telegram has been temporarily suspended twice in Brazil, in 2022 and 2023, both times after being accused of failing to cooperate with legal orders.

In 2022, similar events unfolded in Germany, where the country’s interior minister also threatened to ban the app after letters, suggestions of fines, and even a Telegram-dedicated task force all went unanswered, according to the authorities, who were concerned about anti-lockdown groups using the app to discuss political assassinations. Multiple German newspapers, including the tabloid Bild, sent journalists to the office Telegram lists as its headquarters in Dubai and found it deserted, its doors locked.

    Earlier in 2024, Spain briefly blocked Telegram after broadcasters claimed copyrighted material was circulating on the app. Judge Santiago Pedraz of Spain’s National High Court said his decision to ban was based on Telegram’s lack of cooperation with the case.

    The accusations in France are very specific to Telegram’s way of working, says Arne Möhle, co-founder of encrypted email service Tuta. “Of course it’s important to be independent but at the same time, it’s also important to comply with authority requests if they are valid,” he says. “It’s important to show [criminal activities are] something you don’t want to support with your privacy oriented service.”

France’s decision to charge Durov is a rare move to link a tech executive to crimes taking place on their platform, but it is not without precedent. Durov joins the ranks of the founders of The Pirate Bay, who were sentenced by Swedish authorities to a year in prison in 2009, and the German-born founder of MegaUpload, Kim Dotcom, who finally lost a 12-year battle to be extradited to the US from his home in New Zealand in August. He plans to appeal.

    Yet Durov is the first of his generation of founders behind major social media platforms to face such severe consequences. What happens next will carry lessons for them all.

    “When Meta and GOOG [Google] get legit subpoenas, they respond. They also push back on garbage ones. It’s a professional give and take,” Brian Fishman, former policy director of counterterrorism at Facebook, said on Threads before Durov was formally charged. He claimed that Telegram mostly doesn’t do this.

    “Should we watch here for dangerous precedent? Yes. But we should also acknowledge how brazenly Telegram has flouted norms adopted by nearly everyone else.”


  • Telegram Founder Pavel Durov Charged Over Alleged Criminal Activity on the App

Telegram CEO Pavel Durov is forbidden from leaving French territory after being charged with complicity in running an online platform that allegedly enabled the spread of sexual images of children, creating an uncertain future for the messaging app that has become one of the world’s biggest social media platforms.

Durov was arrested on Saturday at 8 pm local time after his private jet landed at an airport near Paris. He was then detained for four days as part of an investigation into alleged criminal activity taking place on Telegram. On Wednesday evening, local time, he was indicted and forbidden from leaving the country, according to a statement released by the Paris prosecutor. He was released under judicial supervision, the statement said; he must post €5 million ($5.5 million) bail and report to a police station in France twice a week.

    The Telegram founder was placed under formal investigation for a range of charges related to child sexual abuse material, drug trafficking, importing cryptology without prior declaration as well as a “near-total absence” of cooperation with French authorities, Laure Beccuau, the Paris Prosecutor, said on Wednesday.

    French authorities noted an “almost total lack of response from Telegram to legal requests,” Beccuau noted. “This is what led JUNALCO [the National Jurisdiction for the Fight against Organized Crime] to open an investigation into the possible criminal liability of this messaging service’s executives in the commission of these offenses,” she said. The preliminary investigation began in February 2024 and initial investigations were coordinated by the OFMIN, an agency set up to prevent violence against minors, her statement added.

    “It is absurd to claim that a platform or its owner is responsible for the abuse of that platform,” Telegram said on Sunday, before Durov was charged. The platform, which has 900 million active users, did not immediately respond to a request for comment to the charges.

    Since his arrest, both the UAE and Russia have requested consular access to Durov, who has citizenship in both countries. It’s unclear why Durov, who also obtained a French passport after leaving Russia, was in France. “I don’t take holidays,” he said on his Telegram channel in June.

    Russia has claimed, without evidence, that Durov’s arrest is an attempt by the United States to exert influence over the platform via France. “Telegram is one of the few and at the same time the largest Internet platforms over which the United States has no influence,” Vyacheslav Volodin, the chairman of Russia’s State Duma, the lower house of parliament, said on the app.

    France’s president, Emmanuel Macron, said on Monday that Durov’s detention is “in no way a political decision.” “It is up to the judiciary, in full independence, to enforce the law,” he added in a post on X. The European Commission tells WIRED the arrest was conducted under French criminal law and is not connected to new European regulation for tech platforms. “We are closely monitoring the developments related to Telegram and stand ready to cooperate with the French authorities should it be relevant,” a spokesperson says, declining to be named.


  • Pavel Durov’s Arrest Leaves Telegram Hanging in the Balance

“Civil society has had a complicated relationship with Telegram over the years,” says Natalia Krapiva, a lawyer at the digital rights group Access Now. “We have defended Telegram against attempts by authoritarian regimes to block and coerce the platform into providing encryption keys, but we have also been raising alarms about Telegram’s lack of human rights policies, reliable channel of communication, and remedy for its users.” Krapiva stresses that French authorities may try to force Durov to provide Telegram’s encryption keys to decrypt private messages, “which Russia has already tried to do in the past.”

    The hashtag #FreePavel has been spreading online, including via X’s CEO, Elon Musk, who has posted numerous times about Durov’s arrest. “POV: It’s 2030 in Europe and you’re being executed for liking a meme,” he wrote on Saturday night in response to a post about the Telegram CEO’s detention. “The need to protect free speech has never been more urgent,” Robert F. Kennedy Jr., who on Friday endorsed Donald Trump for US president, wrote on X, where he referred to Telegram as “uncensored” and “encrypted.”

While Telegram is frequently described as an encrypted messaging app, messages are not end-to-end encrypted by default, and senior executives previously told WIRED that they view the platform as a social network. This is largely due to Channels—a one-to-many broadcast feature that allows unlimited subscribers to view posts.

    One of the posts that has gained the most traction on X was by right-wing former Fox News journalist Tucker Carlson, who alluded to the oft-repeated but debatable story that Durov left Russia because the government tried to take over his company. “But in the end, it wasn’t Putin who arrested him for allowing the public to exercise free speech. It was a western country,” Carlson wrote in a post that has so far been viewed at least 5.7 million times. Carlson also linked to an hour-long interview he did with Durov earlier this year, one of the first and only interviews the Telegram CEO has given in recent years.

    In Durov’s absence, Telegram’s future looks uncertain to some: “I am in shock, and everyone close to Pavel feels the same,” says Georgy Lobushkin, former head of PR at VK, a social network Durov cofounded, who is still in regular contact with Durov. “Nobody was prepared for this situation.” Asked if he worried about Telegram’s future and who could run the company in Durov’s absence, Lobushkin says: “[I] worry a lot.”

    TF1Info, which first broke the news in France of Durov’s arrest, reported that it was “beyond doubt” that Durov would remain in custody during the investigation. “Pavel Durov will end up in pretrial detention, that’s for sure,” one unnamed investigator told reporters.

    “No one in Telegram was prepared for such a scenario,” says Anton Rozenberg, who worked with Durov from the early days of VK in 2007, before working for Telegram from 2016 to 2017. Rozenberg foresaw Durov acquiring the best legal defense money could buy. “But without him, the messenger may have huge problems with management, all crucial decisions and even payments,” he added, given Durov’s personal involvement in running the company. Rozenberg saw no obvious replacement for Durov, who makes key decisions on nearly all matters at Telegram—financing, development strategies, product design, monetization, and content moderation policy.

    For now, everything can be expected to continue as normal, says Elies Campo, who directed Telegram’s growth, business, and partnerships from 2015 to 2021. “Depending on how long this is going to last, it’s like a government, right? There’s this structure, there’s self-momentum.” Campo adds that the company’s staff is small enough—around 60 employees—that the infrastructure won’t be affected.

    The challenge, Campo concedes, would be if Durov needs to be physically present to pay providers—something Rozenberg also flagged.

    “As far as I know, Pavel did the payments,” Campo says. “So what’s going to happen when there needs to be some payments for infrastructure providers, or providers in terms of connectivity—and he’s still under arrest?”


  • A Nonprofit Tried to Fix Tech Culture—but Lost Control of Its Own

    Allen, a data scientist, and Massachi, a software engineer, worked for nearly four years at Facebook on some of the uglier aspects of social media, combating scams and election meddling. They didn’t know each other but both quit in 2019, frustrated at feeling a lack of support from executives. “The work that teams like the one I was on, civic integrity, was being squandered,” Massachi said in a recent conference talk. “Worse than a crime, it was a mistake.”

Massachi first conceived the idea of using expertise like what he’d developed at Facebook to drive greater public attention to the dangers of social platforms. He launched the nonprofit Integrity Institute with Allen in late 2021, after a former colleague connected them. The timing was perfect: Frances Haugen, another former Facebook employee, had just leaked a trove of company documents, catalyzing new government hearings in the US and elsewhere about problems with social media. The institute joined a new class of tech nonprofits, such as the Center for Humane Technology and All Tech Is Human, started by people working in industry trenches who wanted to become public advocates.

    Massachi and Allen infused their nonprofit, initially bankrolled by Allen, with tech startup culture. Early staff with backgrounds in tech, politics, or philanthropy didn’t make much, sacrificing pay for the greater good as they quickly produced a series of detailed how-to guides for tech companies on topics such as preventing election interference. Major tech philanthropy donors collectively committed a few million dollars in funding, including the Knight, Packard, MacArthur, and Hewlett foundations, as well as the Omidyar Network. Through a university-led consortium, the institute got paid to provide tech policy advice to the European Union. And the organization went on to collaborate with news outlets, including WIRED, to investigate problems on tech platforms.

    To expand its capacity beyond its small staff, the institute assembled an external network of two dozen founding experts it could tap for advice or research help. The network of so-called institute “members” grew rapidly to include 450 people from around the world in the following years. It became a hub for tech workers ejected during tech platforms’ sweeping layoffs, which significantly reduced trust and safety, or integrity, roles that oversee content moderation and policy at companies such as Meta and X. Those who joined the institute’s network, which is free but involves passing a screening, gained access to part of its Slack community where they could talk shop and share job opportunities.

    Major tensions began to build inside the institute in March last year, when Massachi unveiled an internal document on Slack titled “How We Work” that barred use of terms including “solidarity,” “radical,” and “free market,” which he said come off as partisan and edgy. He also encouraged avoiding the term BIPOC, an acronym for “Black, Indigenous, and people of color,” which he described as coming from the “activist space.” His manifesto seemed to echo the workplace principles that cryptocurrency exchange Coinbase had published in 2020, which barred discussions of politics and social issues not core to the company, drawing condemnation from some other tech workers and executives.

    “We are an internationally-focused open-source project. We are not a US-based liberal nonprofit. Act accordingly,” Massachi wrote, calling for staff to take “excellent actions” and use “old-fashioned words.” At least a couple of staffers took offense, viewing the rules as backward and unnecessary. An institution devoted to taming the thorny challenge of moderating speech now had to grapple with those same issues at home.
