Tag: facebook

  • Meta Releases Llama 3.2—and Gives Its AI a Voice

    Mark Zuckerberg announced today that Meta, his social-media-turned-metaverse-turned-artificial intelligence conglomerate, will upgrade its AI assistants to give them a range of celebrity voices, including those of Dame Judi Dench and John Cena. The more important upgrade for Meta’s long-term ambitions, though, is the new ability of its models to see users’ photos and other visual information.

    Meta today also announced Llama 3.2, the first version of its free AI models to have visual abilities, broadening their usefulness and relevance for robotics, virtual reality, and so-called AI agents. Some versions of Llama 3.2 are also the first to be optimized to run on mobile devices. This could help developers create AI-powered apps that run on a smartphone and tap into its camera or watch the screen in order to use apps on your behalf.

    “This is our first open source, multimodal model, and it’s going to enable a lot of interesting applications that require visual understanding,” Zuckerberg said on stage at Connect, a Meta event held in California today.

    Given Meta’s enormous reach with Facebook, Instagram, WhatsApp, and Messenger, the assistant upgrade could give many people their first taste of a new generation of more vocal and visually capable AI helpers. Meta said today that more than 180 million people already use Meta AI, as the company’s AI assistant is called, every week.

    Zuckerberg demonstrated a number of new AI features at Connect. He showed videos in which a pair of Ray-Ban smart glasses running Llama 3.2 give recipe advice based on the ingredients in view, and provide commentary on clothing seen on a rack in a store. Meta’s CEO also showed off several experimental AI features that the company is working on. These include software that enables live translation between Spanish and English, automatic dubbing of videos into different languages, and an avatar for creators that can answer fan questions on their behalf.

    Meta has lately given its AI a more prominent billing in its apps—for example, making it part of the search bar in Instagram and Messenger. The new celebrity voice options available to users will also include Awkwafina, Keegan-Michael Key, and Kristen Bell.

    Meta previously gave celebrity personas to text-based assistants, but these characters failed to gain much traction. In July the company launched a tool called AI Studio that lets users create chatbots with any persona they choose. Meta says the new voices will be made available to users in the US, Canada, Australia, and New Zealand over the next month. The Meta AI image capabilities will be rolled out in the US, but the company did not say when the features might appear in other markets.

    The new version of Meta AI will also be able to provide feedback on and information about users’ photos; for example, if you’re unsure what bird you’ve snapped a picture of, it can tell you the species. And it will be able to help edit images by, for instance, adding new backgrounds or details on demand. Google released a similar tool for its Pixel smartphones and for Google Photos in April.


  • The Viral ‘Goodbye Meta AI’ Copypasta Will Not Protect You

    “Goodbye Meta AI” is the most recent Facebook copypasta to go viral online. A chunky wall of text pasted against a hazy orange-yellow gradient background, it’s complete with all the trend’s hallmarks: vague references to the legal system and unilateral declarations of personal protection. It almost feels nostalgic, a blast from the compulsory chain-email past. But, unfortunately, posting an image on Facebook, Instagram, or any social media platform is not how you actually opt out of having your posts be fed to AI models.

    This definitely isn’t the first time a meaningless copypasta has spread on the social media site. More than a decade ago, WIRED covered a popular “copyright hoax” with “pseudo-legalese” blanketing Facebook. It didn’t work then, and it doesn’t work now.

    “Goodbye Meta AI,” which has been shared thousands of times—including, reportedly, in the Instagram Stories of Tom Brady and James McAvoy—has been circulating since early September. Its claim that it can protect your data is obviously dubious to savvy internet users, but the underlying desire to claw back one’s personal information from tech companies is a sympathetic one. The companies know so many granular details about users’ lives and desires that it can be unsettling. And, in the ongoing wave of generative AI, everything posted online seems vulnerable to being scraped to train the next biggest, baddest AI model.

    Two major red flags that can help you immediately spot a copypasta like this are urgent calls to action and unclear references to legal situations. In this case, the image says “all members must post” to keep their data safe, and it claims to be part of an unnamed attorney’s advice. The 2012 version said, “Anyone reading this can copy this text and paste it on their Facebook Wall.” The decade-old copypasta also included a misspelled reference to a European legal contract.

    “While we don’t currently have an opt-out feature, we’ve built in-platform tools that allow people to delete their personal information from chats with Meta AI across our apps,” says Emil Vazquez, a spokesperson for the company, when reached via email. You can find the steps for that here. He also points out European users can object to personal info being used for AI models—although, as WIRED reported last year, the form to object isn’t going to do much, if anything, for you.

    So, if an errant copypasta doesn’t work, what can you do to avoid having your public words and images be used for Meta’s AI model or that of another AI company? Stop posting online—that’s about it. Apart from walking away and never posting again, there’s not a realistic way for you to avoid the nimble scraper bots as an individual user right now.

    With that in mind, you can take steps to reduce the amount of information publicly available on your social media profiles, for a bit more privacy. Also, downloading old posts for your own records and then deleting large swathes of them from the internet isn’t a bad idea. Want to go further? Take a look at this list of websites and apps that allow you to opt out of at least one aspect of their AI training practices.


  • Russia-Backed Media Outlets Are Under Fire in the US—but Still Trusted Worldwide

    In Latin America alone, RT’s channels run 24/7, and reported 18 million viewers in 2018. African Stream, which was also named by the State Department as part of Russian state media’s influence architecture and later removed by YouTube and Meta, garnered 460,000 followers on YouTube in the two years it was up and running. And Woolley notes that in these markets, there is likely less competition for viewership than there is in the saturated US media landscape.

    “[Russian media] made headway in limited media ecosystems, where its attempts to control public opinion are arguably much more effective,” he says. Russian media particularly homes in on anti-colonial, anti-Western narratives that can feel particularly salient in markets that have been deeply impacted by Western imperialism. The US also has state-funded media that operates in foreign countries, like Voice of America, though according to the organization’s website, the 1994 U.S. International Broadcasting Act “prohibits interference by any US government official in the objective, independent reporting of news.”

    Rubi Bledsoe, a research associate at the Center for Strategic and International Studies, says that even with Russian state media removed from some social platforms, its messages are still likely to spread in more covert ways, through influencers and smaller publications with which it has cultivated relationships.

    “Not only was Russian media good at hiding that they were a Russian government entity, on the side they would seed some of their stories to local newspapers and local media throughout the region,” she says, noting that the large South American broadcasting corporation TeleSur would sometimes partner with RT. (Other times, Russia will back local outlets like Cameroon’s Afrique Média). “All of these secondary and tertiary news outlets are a lot smaller, but can talk to parts of the local population,” she says.

    Russian media has also helped cultivate local influencers who often align with its messaging. Bledsoe points to Inna Afinogenova, a Russian Spanish-language broadcaster who previously worked for RT but now has her own independent YouTube channel where she has more than 480,000 followers. (Afinogenova left RT after saying she disagreed with the war in Ukraine).

    And Bledsoe says that the ban from the US might actually be a boon for Russian media in the parts of the world where it’s actively trying to cultivate its image as a trusted media brand. “The narratives that were shared through RT and other Russian media and in Iranian media as well, it’s a kind of anti-imperialist dig at the West, and the US,” she says. “Saying the US is the driving force behind this international system and they’re plotting, and they’re out to get you, to impose on other countries’ sovereignty.”

    Though Meta was a key avenue for the spread of Russian state media content, it still has a home on other platforms. RT does not appear to have a verified TikTok account, but accounts that exclusively post RT content, like @russian_news_ and @russiatodayfrance have tens of thousands of followers on the app. African Stream’s TikTok is still live with nearly 1 million followers. TikTok spokesperson Jamie Favazza referred WIRED to the company’s policies on election-related mis- and disinformation.

    A post on X from RT’s account on September 18, the day after the ban, linked to its accounts on platforms like the right-wing video-sharing site Rumble and the Russian YouTube alternative VK. (RT has 3.2 million followers on X and 125,000 on Rumble.) “Meta can ban us all it wants,” the post read. “But you can always find us here.” X did not respond to a request for comment.




  • Meta Connect 2024: How to Watch and What to Expect

    Meta Connect, the big developer event and hardware showcase from the company that runs Facebook and Instagram, is kicking off next week. Meta is likely to show off its new VR and mixed-reality technology, put a shiny polish on its meandering metaverse ambitions, and delve into all the fresh ways it plans to squeeze artificial intelligence into every crevice of its devices and services.

    The event takes place on Wednesday September 25, starting at 10 am Pacific time. The keynote address, where most of the new stuff will be announced, will be livestreamed. The host for the event will be Meta CEO and newly minted cool guy Mark Zuckerberg. Zuck’s hour-long presentation will be followed by a developer-focused address at 11 am led by Meta CTO and Reality Labs chief Andrew Bosworth. You can watch the events on the Meta Connect website or on Meta’s YouTube channel. And yes, you can also watch it in VR in Meta Horizon.

    The focus of the event will likely be a fusion of Meta’s mixed-reality efforts and its AI ambitions across its product line. Like any tech event, there are bound to be surprises. Here are the big things to look out for.

    Blurry MetaVision

    The one thing Meta likely won’t be announcing is a very expensive VR headset. It’s a move informed by where the mixed-reality-device market is right now—and whether people actually want to spend big to buy in. Instead, rumors abound about a so-called Meta Quest 3S, a headset that could be a cheaper version of the Meta Quest 3 with lighter features.

    Meta briefly reigned as the bigwig in the AR/VR space after buying the VR company Oculus 10 years ago, back when it was still Facebook. In 2021, the company changed its name to Meta and sank $45 billion into its vision of a digital universe that most people just don’t seem to give much of a damn about. Workplaces aren’t using Meta’s Horizon Workrooms that much—we’re all still on Zoom—and despite the initial bouts of expensive corporate land grabs for digital real estate, users aren’t exactly eager to move into the metaverse.

    Other companies have struggled to find their virtual footing. Apple released its first mixed-reality headset, the $3,500 Apple Vision Pro, in February. Since then, the product has been regarded as a rare misstep for the company, or at least very clearly a first-generation product not intended for the masses. The device didn’t sell very well and was widely criticized as being an expensive, heavy, and ultimately lonely experience. (Apple mentioned the Vision Pro only once, in passing, at its optimistic iPhone announcement event on September 9.)

    Had the Vision Pro’s, well, vision panned out, Meta may have been more inclined to pursue the pricy premium category of VR headset. In August, The Information reported that Meta seems to have abandoned—or at least delayed—plans to reveal an update to its Quest Pro that would have gone into the ring against Apple’s Vision Pro. Bosworth, Meta’s CTO, responded to that news on Meta’s Threads platform and insisted the move is not that big of a deal, but rather a natural part of the company’s device iterations. Still, it is a move that makes sense in the aftermath of the Apple Vision Pro fizzling out.


  • Mark Zuckerberg Vows to Be Neutral–While Tossing Gifts to Trump and the GOP

    This week Mark Zuckerberg sent a letter to Jim Jordan, the chair of the House Judiciary Committee. For months, the GOP-led committee has been on a crusade to prove that Meta, via its once-eponymous Facebook app, engaged in political sabotage by taking down right-wing content. Its investigation has involved thousands of documents and interviews with multiple employees, but it has failed to locate a smoking gun. Now, under the guise of offering his take on the subject, Zuckerberg has delivered a mea culpa of a letter in which he seems to indicate that there was something to the GOP conspiracy theory.

    Specifically, he said that in 2021 the Biden administration asked Meta “to censor some Covid-related content.” Meta did take the posts down, and Zuckerberg now regrets the decision. He also conceded that it was wrong to take down some content regarding Hunter Biden’s laptop, which the company did after the FBI warned that the reports might be Russian disinformation.

    What stood out to me, besides the letter’s simpering tone, was how Zuckerberg used the word “censor.” For years the right has been using that word to describe what it regards as Facebook’s systematic suppression of conservative posts. Some state attorneys general have even used that trope to argue that the company’s content should be regulated, and Florida and Texas have passed laws to do just that. Facebook has always contended that the First Amendment is about government suppression, and by definition its content decisions could not be characterized as such. Indeed, the Supreme Court dismissed the lawsuits and blocked the laws.

    Now, by using that term to describe the removal of the Covid material, Zuckerberg seems to be backing down. After years of insisting that, right or wrong, a social media company’s content decisions did not deprive people of First Amendment rights—and in fact arguing that, by making such decisions, the company was exercising its own free speech rights—Zuckerberg is now handing Meta’s conservative critics just what they wanted.

    I asked Meta spokesperson Andy Stone if the company now agrees with the GOP that some of its decisions to take down content can be referred to as “censoring.” Stone said that Zuckerberg was referring to the government when he used that term. But he also pointed me to Zuckerberg’s affirmation that the ultimate decision to remove the posts was Meta’s own. (Responding to the Zuckerberg letter, the White House said, “When confronted with a deadly pandemic, this Administration encouraged responsible actions to protect public health and safety,” and left the final decision to Facebook.)

    Meta can’t have it both ways. The letter is clear: Zuckerberg said the government pressured Meta to “censor” some Covid content. Meta took that material down. Ergo, Meta now characterizes some of its own actions as censorship. Seizing on this, the GOP members of the Judiciary Committee quickly tweeted that Zuckerberg has now outright admitted “Facebook censored Americans.”

    Stone did say that Meta still does not consider itself a censor. So is Meta disputing that GOP tweet? Stone wouldn’t comment on it. It seems that Meta will offer no pushback while GOP legislators and right-wing commentators crow that Facebook now concedes that it blatantly censored conservatives as a matter of policy.

    Meta’s CEO presented Jordan and the GOP with another gift in his letter, involving his private philanthropy. During the 2020 election, Zuckerberg helped fund nonpartisan initiatives to protect people’s right to vote. Republicans criticized Zuckerberg’s effort as aiding the Democrats. Zuckerberg still insists he wasn’t advocating that people vote a certain way, just ensuring they were free to cast ballots. But, he wrote Jordan, he recognized that some people didn’t believe him. So, apparently to indulge those ill-informed or ill-intentioned critics, he now vows not to fund nonpartisan voting efforts during this election cycle. “My goal is to be neutral and not play a role one way or another—or even appear to play a role,” he wrote.


  • Democrats Have Finally Learned the Value of Shitposting

    The marked change in tone first appeared in a press release on July 25. Recounting an interview Trump gave to Fox News, the Harris campaign invoked one of its favorite hobby horses, Project 2025, but also said, “Trump is old and quite weird?”

    While the “weird” strategy might be new for Democrats, Republicans have been using it for years under a different name: cringe.

    Since Gamergate, right-wing provocateurs have painted Democrats and liberals as cringe. Go to YouTube and you’ll find countless videos titled something like “SJWS OWNED COMPILATION #2” or “SJW Cringe & Feminist Fails Compilation” with millions upon millions of views. Some creators have built entire digital careers off roasting “cringey” leftists. It’s how the blue-haired liberal stereotype originated, and it has colored conservative views of liberals for years.

    For the first time, the Republicans are on the receiving end of a cringe crusade. It doesn’t help that the former president’s party is now made up of political influencers and partners like LibsofTikTok, random Roman statue avatars, and even “party elder” Catturd. It’s made it nearly impossible for them to escape the weirdo accusations, and it also doesn’t help that in response to the attacks, they’ve just acted even stranger.

    Vance’s personal history isn’t doing him any favors either. Kyle Tharp, who writes the FWIW newsletter (it’s great, go sub!), says: “As an elder millennial, [Vance has] clearly spent a ton of time on these male-dominated right-wing corners of the internet, and so that’s, unfortunately or fortunately, informed a lot of the talking points that he’s gonna deploy.”

    On TikTok and X, the right’s reaction to the switch-up has mostly supplied Democrats with more ammo. And the Harris campaign is taking advantage of it: Since Harris took over the BidenHQ TikTok, the account’s following has quintupled in size, and she’s been able to ride the wave of favorable content without becoming cringe herself. As of right now, the winds are in her favor, but like we’ve seen with Trump and Vance, it may not last forever.

    The Chatroom

    This week, I’ve written a lot on how everyone from Swifties to politicians is using social media to organize and communicate with voters. I like to think I have my finger on the pulse of everything interesting going on in the space, but I have my blind spots. What kind of interesting political organizing have you seen on social media that I should know about?

    Something I haven’t mentioned about Harris’ digital strategy is how they’re organizing community Zoom calls. At this point in the cycle, supporters are at their most enthusiastic, and the campaign wants to capitalize on that energy beyond inspiring TikTok edits. In many of these Zoom calls, Harris campaign staff walk attendees through their phone and text banking systems and point them toward volunteering. It looks like the campaign understands that this excitement won’t last forever, and they’re showing voters how to support them offline as well.




  • SCOTUS Rules That US Government Can Continue Talking to Social Media Companies

    Today, the Supreme Court ruled in a 6-3 decision that the plaintiffs did not present enough evidence to prove that they had standing to sue over claims that the government violated the First Amendment by communicating with social media companies about misleading and harmful content on their platforms.

    The case was brought by the attorneys general from Louisiana and Missouri, who alleged that government agencies have had undue influence on the content moderation practices of platforms and coerced them into taking down conservative-leaning content, infringing on the First Amendment rights of their citizens. Specifically, the case alleged that government agencies like the Centers for Disease Control (CDC) and Cybersecurity and Infrastructure Security Agency (CISA) coerced social media companies into removing content, including posts that questioned the use of masks in preventing Covid-19 and the validity of the 2020 election.

    In a May 2022 statement, Missouri attorney general Eric Schmitt alleged that members of the Biden administration “colluded with social media companies like Meta, Twitter, and YouTube to remove truthful information related to the lab-leak theory, the efficacy of masks, election integrity, and more.” Last year, a federal judge issued an injunction that barred the government from communicating with social media platforms.

    Today, the court said that the plaintiffs could not prove that communications between the Biden administration and social media companies resulted in “direct censorship injuries.” In the majority opinion for Murthy v. Missouri, Justice Amy Coney Barrett wrote that “the evidence indicates that the platforms had independent incentives to moderate content and often exercised their own judgment.”

    While it is the government’s responsibility to make sure that its interactions with platforms don’t cross into coercion that violates free speech—a practice known as “jawboning”—Kate Ruane, director of the free expression project at the Center for Democracy and Technology, says that there are very valid reasons why government agencies might need to communicate with platforms.

    “Communication between the government, social media platforms, and government entities is critical in providing information that social media companies can use to ensure social media users have authoritative information about where you’re supposed to go to vote, or what to do in an emergency, or like all of those things,” she says. “It is very useful for the government to have partnerships with social media to get that accurate information out there.”

    David Greene, civil liberties director at the Electronic Frontier Foundation, says that the court’s decision earlier this cycle on a case called National Rifle Association v. Vullo was likely a clear indicator for how it would approach the Murthy decision. In the Vullo case, the NRA alleged that New York Department of Financial Services superintendent Maria Vullo pressured banks and insurance companies not to do business with the NRA, and suppressed the organization’s advocacy. In a 9-0 decision, the court ruled that the NRA had presented enough evidence that a case against Vullo could move forward. In Murthy, however, the justices found that the plaintiffs had not presented enough evidence to show that the government had pressured platforms into making content moderation decisions.

    “Other than that the facts involved are sort of politically motivated, the legal issue itself is not something that I think traditionally breaks down along partisan lines,” says Greene.

    But Greene says that without clear guidelines, state, local, and federal government bodies—of all political leanings—could feel freer to contact platforms now. “We will see a lot more of that type of government involvement in these processes,” he says.


  • My Memories Are Just Meta’s Training Data Now

    In R. C. Sherriff’s novel The Hopkins Manuscript, readers are transported to a world 800 years after a cataclysmic event ended Western civilization. In pursuit of clues about a blank spot in their planet’s history, scientists belonging to a new world order discover diary entries in a swamp-infested wasteland formerly known as England. For the inhabitants of this new empire, it is only through this record of a retired school teacher’s humdrum rural life, his petty vanities and attempts to breed prize-winning chickens, that they begin to learn about 20th-century Britain.

    If I were to teach futuristic beings about life on earth, I once believed I could produce a time capsule more profound than Sherriff’s small-minded protagonist, Edgar Hopkins. But scrolling through my decade-old Facebook posts this week, I was presented with the possibility that my legacy may be even more drab.

    Earlier this month, Meta announced that my teenage status updates were exactly the kind of content it wants to pass on to future generations of artificial intelligence. From June 26, old public posts, holiday photos, and even the names of millions of Facebook and Instagram users around the world would effectively be treated as a time capsule of humanity and transformed into training data.

    That means my mundane posts about university essay deadlines (“3 energy drinks down 1,000 words to go”) as well as unremarkable holiday snaps (one captures me slumped over my phone on a stationary ferry) are about to become part of that corpus. The fact that these memories are so dull, and also very personal, makes Meta’s interest more unsettling.

    The company says it is only interested in content that is already public: private messages, posts shared exclusively with friends, and Instagram Stories are out of bounds. Despite that, AI is suddenly feasting on personal artifacts that have, for years, been gathering dust in unvisited corners of the internet. For those reading from outside Europe, the deed is already done. The deadline announced by Meta applied only to Europeans. The posts of American Facebook and Instagram users have been training Meta AI models since 2023, according to company spokesperson Matthew Pollard.

    Meta is not the only company turning my online history into AI fodder. WIRED’s Reece Rogers recently discovered that Google’s AI search feature was copying his journalism. But finding out which personal remnants exactly are feeding future chatbots was not easy. Some sites I’ve contributed to over the years are hard to trace. Early social network Myspace was acquired by Time Inc. in 2016, which in turn was acquired by a company called Meredith Corporation two years later. When I asked Meredith about my old account, they replied that Myspace had since been spun off to an advertising firm, Viant Technology. An email to a company contact listed on its website was returned with a message that the address “couldn’t be found.”

    Asking companies still in business about my old accounts was more straightforward. Blogging platform Tumblr, owned by WordPress owner Automattic, said unless I’d opted out, the public posts I made as a teenager will be shared with “a small network of content and research partners, including those that train AI models” per a February announcement. YahooMail, which I used for years, told me that a sample of old emails—which have apparently been “anonymized” and “aggregated”—are being “utilized” by an AI model internally to do things like summarize messages. Microsoft-owned LinkedIn also said my public posts were being used to train AI although some “personal” details included in those posts were excluded, according to a company spokesperson, who did not specify what those personal details were.


  • Orkut’s Founder Is Still Dreaming of a Social Media Utopia

    Before Orkut launched in January 2004, Büyükkökten warned the team that the platform he’d built it on could handle only 200,000 users. It wouldn’t be able to scale. “They said, let’s just launch and see what happens,” he explains. The rest is online history. “It grew so fast. Before we knew it, we had millions of users,” he says.

    Orkut featured a digital Scrapbook and the ability to give people compliments (ranging from “trustworthy” to “sexy”), create communities, and curate your very own Crush List. “It reflected all of my personality traits. You could flatter people by saying how cool they were, but you could never say something negative about them,” he says.

    At first, Orkut was popular in the US and Japan. But, as predicted, server issues severed its connection to its users. “We started having a lot of scalability issues and infrastructure problems,” Büyükkökten says. They were forced to rewrite the entire platform using C++, Java, and Google’s tools. The process took an entire year, and scores of original users dropped off due to sluggish speeds and one too many encounters with Orkut’s now-nostalgic “Bad, bad server, no donut for you” error message.

    Around this time, though, the site became incredibly popular in Finland. Büyükkökten was bemused. “I couldn’t figure it out until I spoke to a friend who speaks Finnish. And he said: ‘Do you know what your name means?’ I didn’t. He told me that orkut means multiple orgasms.” Come again? “Yes, so in Finland, everyone thought they were signing up to an adult site. But then they would leave straight after as we couldn’t satisfy them,” he laughs.

    Awkward double meanings aside, Orkut continued to spread across the world. In addition to exploding in Estonia, the platform went mega in India. Its true second home, though, was Brazil. “It became a huge success. A lot of people think I’m Brazilian because of this,” Büyükkökten explains. He has a theory about why Brazil went nuts for Orkut. “Brazil’s culture is very welcoming and friendly. It’s all about friendships and they care about connections. They’re also very early adopters of technology,” he says. At its peak, 11 million of Brazil’s 14 million internet users were on Orkut, most logging on through cybercafes. It took Facebook seven years to catch up.

    But Orkut wasn’t without its problems (and many fake profiles). The site was banned in Iran and the United Arab Emirates. Government authorities in Brazil and India had concerns about drug-related content and child pornography, something Büyükkökten denies existed on Orkut. Brazilians coined the word orkutização to describe a social media site like Orkut becoming less cool after going mainstream. In 2014, having hemorrhaged users due to slow server speeds, Facebook’s more intuitive interface, and issues surrounding privacy, Orkut went offline. “Vic Gundotra, in charge of Google+, decided against having any competing social products,” Büyükkökten explains.

    But Büyükkökten has fond memories. “We had so many stories of people falling in love and moving in together from different parts of the world. I have a friend in Canada who met his wife in Brazil through Orkut, a friend in New York who met his wife in Estonia and now they’re married with two kids,” he says. It also provided a platform for minority communities. “I was talking to a gay journalist from a small town in São Paulo who told me that finding all these LGBTQ people on Orkut transformed his life,” he adds.

    Büyükkökten left Google in 2014 and founded a new social network, again featuring a simple five-letter title: Hello. He wanted to focus on positive connection. It used “loves” rather than likes, and users could choose from more than 100 personae, ranging from Cricket Fan to Fashion Enthusiast, and were then connected to like-minded people with common interests. Soft-launched in Brazil in 2018 with 2 million users, Hello enjoyed “ultra-high engagement” that Büyükkökten claims surpassed the likes of Instagram and Twitter. “One of the things that stood out in our user surveys was that people said when they open Hello, it makes them happy.”

    The app was downloaded more than 2 million times—a fraction of the audience Orkut enjoyed—but Büyükkökten is proud of it. “It surpassed all our dreams. There were numerous instances where our K-Factor (the number of new users each existing user brings to an app) reached 3, leading us to exponential growth,” he says. But, in 2020, Büyükkökten bid goodbye to Hello.
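    The compounding that Büyükkökten describes is easy to see in a quick sketch: with a K-Factor above 1, each cohort of new users recruits a larger one. The function below is an illustrative simplification (the name and the generational model are assumptions for this example, not Hello’s actual analytics):

    ```python
    def users_after(generations: int, seed_users: int, k_factor: float) -> int:
        """Cumulative users after each cohort recruits k_factor new users
        per existing user, per generation (a simplified viral-growth model)."""
        total = seed_users
        cohort = seed_users
        for _ in range(generations):
            cohort = cohort * k_factor  # each cohort spawns the next
            total += cohort
        return round(total)

    # With K = 3, 100 seed users become 36,400 after five generations;
    # with K = 0.5, growth stalls near 200 — why K above 1 is the threshold.
    print(users_after(5, 100, 3))
    print(users_after(5, 100, 0.5))
    ```

    The same arithmetic explains why a sustained K-Factor of 3 reads as exponential growth: each generation triples the previous cohort, so totals roughly triple as well.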
    Now he’s working on a new platform. “It’ll leverage AI and machine learning to optimize for improving happiness, bringing people together, fostering communities, empowering users, and creating a better society,” he says. “Connection will be the cornerstone of design, interaction, product, and experience.” And the name? “If I told you the new brand, you would have an aha moment and everything would be crystal clear,” he says.

    Once again, it’s driven by his enduring desire to connect people. “One of the biggest ills of society is the decline in social capital. After smartphones and the pandemic, we have stopped hanging out with our friends and don’t know our neighbors. We have a loneliness epidemic,” he says.
    He is fiercely critical of current platforms. “My biggest passion in life is connecting people through technology. But when was the last time you met someone on social media? It’s creating shame, pessimism, division, depression, and anxiety,” he says. For Büyükkökten, optimism is more important than optimization. “These companies have engineered the algorithm for revenue,” he says. “But it’s been awful for mental health. The world is terrifying right now and a lot of that has come through social media. There’s so much hate,” he says.

    Instead, he wants social media to be a place of love and a facilitator for meeting new people in person. But why will it work this time around? “That’s a really good question,” he says. “One thing that has been really consistent is that people miss Orkut right now.” It’s true—Brazilian social media has recently been abuzz with memes and memories to celebrate the site’s 20th birthday. “A teenage boy even recently drove 10 hours to meet me at a conference to talk about Orkut. And I was like, how is that even possible?” he laughs. Orkut’s landing page is still live, featuring an open letter calling for a social media utopia.

    This, along with our collective desire for a more human social media, is what makes Büyükkökten believe his next platform will truly stick around. Has he decided on that all-important name? “We haven’t announced it yet. But I’m really excited. I truly care. I want to bring that authenticity and sense of belonging back,” he concludes. Perhaps, as his Finnish fans would joke, it’s time for Orkut’s second coming.

    This story first appeared in the July/August 2024 UK edition of WIRED magazine.


  • A Nonprofit Tried to Fix Tech Culture—but Lost Control of Its Own


    Allen, a data scientist, and Massachi, a software engineer, worked for nearly four years at Facebook on some of the uglier aspects of social media, combating scams and election meddling. They didn’t know each other but both quit in 2019, frustrated at feeling a lack of support from executives. “The work that teams like the one I was on, civic integrity, was being squandered,” Massachi said in a recent conference talk. “Worse than a crime, it was a mistake.”

    Massachi first conceived the idea of using the kind of expertise he’d developed at Facebook to drive greater public attention to the dangers of social platforms. He launched the nonprofit Integrity Institute with Allen in late 2021, after a former colleague connected them. The timing was perfect: Frances Haugen, another former Facebook employee, had just leaked a trove of company documents, catalyzing new government hearings in the US and elsewhere about problems with social media. The institute joined a new class of tech nonprofits such as the Center for Humane Technology and All Tech Is Human, started by people working in industry trenches who wanted to become public advocates.

    Massachi and Allen infused their nonprofit, initially bankrolled by Allen, with tech startup culture. Early staff with backgrounds in tech, politics, or philanthropy didn’t make much, sacrificing pay for the greater good as they quickly produced a series of detailed how-to guides for tech companies on topics such as preventing election interference. Major tech philanthropy donors, including the Knight, Packard, MacArthur, and Hewlett foundations and the Omidyar Network, collectively committed a few million dollars in funding. Through a university-led consortium, the institute got paid to provide tech policy advice to the European Union. And the organization went on to collaborate with news outlets, including WIRED, to investigate problems on tech platforms.

    To expand its capacity beyond its small staff, the institute assembled an external network of two dozen founding experts it could tap for advice or research help. The network of so-called institute “members” grew rapidly to include 450 people from around the world in the following years. It became a hub for tech workers ejected during tech platforms’ sweeping layoffs, which significantly reduced trust and safety, or integrity, roles that oversee content moderation and policy at companies such as Meta and X. Those who joined the institute’s network, which is free but involves passing a screening, gained access to part of its Slack community where they could talk shop and share job opportunities.

    Major tensions began to build inside the institute in March last year, when Massachi unveiled an internal document on Slack titled “How We Work” that barred use of terms including “solidarity,” “radical,” and “free market,” which he said come off as partisan and edgy. He also encouraged avoiding the term BIPOC, an acronym for “Black, Indigenous, and people of color,” which he described as coming from the “activist space.” His manifesto seemed to echo the workplace principles that cryptocurrency exchange Coinbase had published in 2020, which barred discussions of politics and social issues not core to the company, drawing condemnation from some other tech workers and executives.

    “We are an internationally-focused open-source project. We are not a US-based liberal nonprofit. Act accordingly,” Massachi wrote, calling for staff to take “excellent actions” and use “old-fashioned words.” At least a couple of staffers took offense, viewing the rules as backward and unnecessary. An institution devoted to taming the thorny challenge of moderating speech now had to grapple with those same issues at home.
