Tag: startups

  • An AI Bot Named James Has My Old Local News Job

    It always seemed difficult for the newspaper where I used to work, The Garden Island on the rural Hawaiian island of Kauai, to hire reporters. If someone left, it could take months before we hired a replacement, if we ever did.

    So, last Thursday, I was happy to see that the paper appeared to have hired two new journalists—even if they seemed a little off. In a spacious studio overlooking a tropical beach, James, a middle-aged Asian man who appears to be unable to blink, and Rose, a younger redhead who struggles to pronounce words like “Hanalei” and “TV,” presented their first news broadcast, over pulsing music that reminds me of the Challengers score. There is something deeply off-putting about their performance: James’ hands can’t stop vibrating. Rose’s mouth doesn’t always line up with the words she’s saying.

    When James asks Rose about the implications of a strike on local hotels, Rose just lists hotels where the strike is taking place. A story on apartment fires “serves as a reminder of the importance of fire safety measures,” James says, without naming any of them.

    James and Rose are, you may have noticed, not human reporters. They are AI avatars crafted by an Israeli company named Caledo, which hopes to bring this tech to hundreds of local newspapers in the coming year.

    “Just watching someone read an article is boring,” says Dina Shatner, who cofounded Caledo with her husband Moti in 2023. “But watching people talking about a subject—this is engaging.”

    The Caledo platform can analyze several prewritten news articles and turn them into a “live broadcast” featuring conversation between AI hosts like James and Rose, Shatner says. While other companies, like Channel 1 in Los Angeles, have begun using AI avatars to read out prewritten articles, Caledo claims to be the first platform that lets the hosts riff with one another. The idea is that the tech can give small local newsrooms the opportunity to create live broadcasts that they otherwise couldn’t. This can open up embedded advertising opportunities and draw in new customers, especially among younger people who are more likely to watch videos than read articles.

    Instagram comments under the broadcasts, which have each garnered between 1,000 and 3,000 views, have been pretty scathing. “This ain’t that,” says one. “Keep journalism local.” Another just reads: “Nightmares.”

    When Caledo started seeking out North American partners earlier this year, Shatner says, The Garden Island was quick to apply, becoming the first outlet in the country to adopt the AI broadcast tech.

    I’m surprised to hear this, because when I worked as a reporter there last year, the paper wasn’t exactly cutting edge—we had a rather clunky website—and didn’t appear to be in a financial position to make this sort of investment. As the newspaper industry struggled with declining advertising revenue, The Garden Island, the oldest and currently the only daily print newspaper on Kauai, had shrunk to just a couple of reporters listed on its website, tasked with covering every story on an island of 73,000. In recent decades, the paper has been passed around among several large media conglomerates—including earlier this year, when Black Press Media, the parent company of its owner, Oahu Publications, was purchased by Carpenter Media Group, which now controls more than 100 local outlets throughout North America.


  • An Underwater Data Center in San Francisco Bay? Regulators Say Not So Fast

    NetworkOcean isn’t alone in its ambitions. Founded in 2021, US-based Subsea Cloud operates about 13,500 computer servers in unspecified underwater locations in Southeast Asia to serve clients in AI and gaming, says the startup’s founder and CEO, Maxie Reynolds. “It’s a nascent market,” she says. “But it’s currently the only one that can handle the current and projected loads in a sustainable way.”

    Subsea secured a permit for each site and uses remotely operated robots for maintenance, according to Reynolds. It plans to fire up its first underwater GPUs next year and is also considering private sites, which Reynolds says would ease permitting complexity. Subsea claims it isn’t significantly increasing water temperature, though it hasn’t published independent reviews.

    NetworkOcean also believes it will cause negligible heating. “Our modeling shows a 2-degree Fahrenheit change over an 8-square-foot area, or a 0.004-degree Fahrenheit change over the surface of the body” of water, Mendel says. He draws confidence from Microsoft’s finding that water a few meters downstream from its testing warmed only slightly.

    Protected Bay

    Bay Area projects can increase water temperatures by no more than 4 degrees Fahrenheit at any time or place, according to Mumley, the ex-water board official. But two biologists who spoke to WIRED say any increase is concerning to them because it can incubate harmful algae and attract invasive species.

    Shaolei Ren, an associate professor of electrical and computer engineering at the University of California, Riverside, who studies the environmental impact of AI, compares an underwater data center of NetworkOcean’s announced capacity, running at full utilization, to roughly 300 bedroom space heaters operating at once. (Mendel disputes the concern, citing Project Natick’s apparently minimal impact.) A few years ago, a proposal to use San Francisco Bay water to cool a data center on land failed to win approval after the public raised concerns, including about temperatures.

    The San Francisco Bay is on average around a dozen feet deep, with salty Pacific Ocean water flowing in under the Golden Gate Bridge and mixing with fresh runoff from a huge swath of Northern California. Experts say it isn’t clear whether any location in the expanse, much of it muddy, shallow, salty, and turbulent, would be suitable for more than a tiny demonstration.

    Further, securing permits could require proving to at least nine regulatory bodies and several critical nonprofits that a data center would be worthwhile, according to spokespeople for the agencies and five experts in the bay’s politics. For instance, under the law administered by the Conservation and Development Commission, a project’s public benefit must “clearly exceed” the detriment, and developers must show there’s no suitable location on land.

    Other agencies consider waste emissions and harm to the region’s handful of endangered fish and birds (including the infamous delta smelt). Even a temporary project requires signoff from the US Army Corps of Engineers, which reviews obstruction to ship and boat traffic, and the water board. “For example, temporarily placing a large structure in an eelgrass bed could have lingering effects on the eelgrass, which is a critical habitat for certain fish,” the water board’s Lichten says.

    NetworkOcean’s Kim tells WIRED that the company is cognizant of the concerns and is avoiding sensitive habitats. His cofounder Mendel says that they did contact one of the region’s regulators. In March, NetworkOcean spoke to an unspecified US Coast Guard representative about testing at the bottom of the bay and pumping in seawater as a coolant. The company later shifted to the current near-surface plans that don’t involve pumping. (A Coast Guard spokesperson declined to comment without more clarity on whom NetworkOcean allegedly contacted.)

    For permanent installations, Kim and Mendel say they are eyeing other US and overseas locations, which they declined to name, and that they are engaging with the relevant regulators.

    Mendel insists the “SF Bay” test announced last month will move forward—and soon. “We’re still building the vessel,” he says. A community of marine scientists will be keeping their thermometers close.


  • Europe Scrambles for Relevance in the Age of AI

    That concentration of power is uncomfortable for European governments. It makes European companies downstream customers of the future, importing the latest services and technology in exchange for money and data sent westward across the Atlantic. And these concerns have taken on a new urgency—partly because some in Brussels perceive a growing gap in values and beliefs between Silicon Valley and the median EU citizen and their elected representatives; and partly because AI looms large in the collective imagination as the engine of the next technological revolution.

    European fears of lagging in AI predate ChatGPT. In 2018, the European Commission issued an AI plan calling for “AI made in Europe” that could compete with the US and China. But beyond a desire for some kind of control over the shape of technology, the operational definition of AI sovereignty has become pretty fuzzy. “For some people, it means we need to get our act together to fight back against Big Tech,” Daniel Mügge, professor of political arithmetic at the University of Amsterdam, who studies technology policy in the EU, says. “To others, it means there’s nothing wrong with Big Tech, as long as it’s European, so let’s get cracking and make it happen.”

    Those competing priorities have begun to complicate EU regulation. The bloc’s AI Act, which passed the European Parliament in March and is likely to become law this summer, has a heavy focus on regulating potential harms and privacy concerns around the technology. However, some member states, notably France, made clear during negotiations over the law that they fear regulation could shackle their emerging AI companies, which they hope will become European alternatives to OpenAI.

    Speaking before last November’s UK summit on AI safety, French finance minister Bruno Le Maire said that Europe needed to “innovate before it regulates” and that the continent needed “European actors mastering AI.” The AI Act’s final text includes a commitment to making the EU “a leader in the uptake of trustworthy AI.”

    “The Italians and the Germans and the French at the last minute thought: ‘Well, we need to cut European companies some slack on foundation models,’” Mügge says. “That is wrapped up in this idea that Europe needs European AI. Since then, I feel that people have realized that this is a little bit more difficult than they would like.”

    Sarlin, who has been on a tour of European capitals recently, including meeting with policymakers in Brussels, says that Europe does have some of the elements it needs to compete. To be a player in AI, you have to have data, computing power, talent, and capital, he says.

    Data is fairly widely available, Sarlin adds, and Europe has AI talent, although it sometimes struggles to retain it.

    To marshal more computing power, the EU is investing in high-performance computing resources, building a pan-European network of high-performance computing facilities, and offering startups access to supercomputers via its “AI Factories” initiative.

    Accessing the capital needed to build big AI projects and companies is also challenging, with a wide gulf between the US and everyone else. According to Stanford University’s AI Index report, private investment in US AI companies topped $67 billion in 2023, more than 35 times the amount invested in Germany or France. Research from Accel Partners shows that in 2023, the seven largest private investment rounds by US generative AI companies totaled $14 billion. The top seven in Europe totaled less than $1 billion.


  • OpenAI-Backed Nonprofits Have Gone Back on Their Transparency Pledges

    Neither database mandates nor generally contains up-to-date versions of the records that UBI Charitable and OpenResearch had said they provided in the past.

    The original YC Research conflict-of-interest policy that Das did share calls for company insiders to be upfront about transactions in which their impartiality could be questioned and for the board to decide how to proceed.

    Das says the policy “may have been amended since OpenResearch’s policies changed (including when the name was changed from YC Research), but the core elements remain the same.”

    No Website

    UBI Charitable launched in 2020 with $10 million donated by OpenAI, as first reported by TechCrunch last year. UBI Charitable’s aim, according to its government filings, is to put the more than $31 million it had received by the end of 2022 toward initiatives that try to offset “the societal impacts” of new technologies and ensure no one is left behind. It has donated largely to CitySquare in Dallas and Heartland Alliance in Chicago, both of which work on a range of projects to fight poverty.

    UBI Charitable doesn’t appear to have a website but shares a San Francisco address with OpenResearch and OpenAI, and OpenAI staff have been listed on UBI Charitable’s government paperwork. Its three Form 990 filings since launching all state that records including governing documents, financial statements, and a conflict-of-interest policy were available upon request.

    Rick Cohen, chief operating and communications officer for National Council of Nonprofits, an advocacy group, says “available upon request” is a standard answer plugged in by accounting firms. OpenAI, OpenResearch, and UBI Charitable have always shared the same San Francisco accounting firm, Fontanello Duffield & Otake, which didn’t respond to a request for comment.

    Miscommunication or poor oversight could lead to the standard answer about access to records getting submitted, “even if the organization wasn’t intending to make them available,” Cohen says.

    The disclosure question ended up on what’s known as the Form 990 as part of an effort in 2008 to help the increasingly complex world of nonprofits showcase their adherence to governance best practices, at least as implied by the IRS, says Kevin Doyle, senior director of finance and accountability at Charity Navigator, which evaluates nonprofits to help guide donors’ giving decisions. “Having that sort of transparency story is a way to indicate to donors that their money is going to be used responsibly,” Doyle says.

    OpenResearch solicits donations on its website, and UBI Charitable stated on its most recent IRS filing that it had received over $27 million in public support. Doyle says Charity Navigator’s data show donations tend to flow to organizations it rates higher, with transparency among the measured factors.

    It’s certainly not unheard of for organizations to share a wide range of records. Charity Navigator has found that most of the roughly 900 largest US nonprofits reliant on individual donors publish financial statements on their websites. It doesn’t track disclosure of bylaws or conflict-of-interest policies.

    Charity Navigator publishes its own audited financial statements and at least eight nonstandard policies it maintains, including ones on how long it retains documents, how it treats whistleblower complaints, and which gifts staff can accept. “Donors can look into what we’re doing and make their own judgment rather than us operating as a black box, saying, ‘Please give us money, but don’t ask any questions,’” Doyle says.

    Cohen of the National Council of Nonprofits cautions that over-disclosure could create vulnerabilities. Posting a disaster-recovery plan, for example, could offer a roadmap to computer hackers. He adds that just because organizations have a policy on paper doesn’t mean they follow it. But knowing what they were supposed to do to evaluate a potential conflict of interest could still allow for more public accountability than otherwise possible, and if AI could be as consequential as Altman envisions, the scrutiny may very well be needed.


  • I Spent a Week Eating Discarded Restaurant Food. But Was It Really Going to Waste?

    It’s 10 pm on a Wednesday and I’m standing in Blessed, a south London takeaway joint, half-listening to a fellow customer talking earnestly about Jesus. I’m nodding along, trying to pay attention as reggae reverberates around the small yellow shop front. But all I can really think about is: What’s in the bag?

    Today’s bag is blue plastic. A smiling man passes it over the counter. Only once I extricate myself from the religious lecture and get home do I discover what’s inside: Caribbean saltfish, white rice, vegetables, and a cup of thick, brown porridge.

    All week, I’ve lived off mysterious packages like this one, handed over by cafés, takeaways, and restaurants across London. Inside is food once destined for the bin. Instead, I’ve rescued it using Too Good To Go, a Danish app that is surging in popularity, selling over 120 million meals last year and expanding fast in the US. For five days, I decided to divert my weekly food budget to eat exclusively through the app, paying between £3 and £6 (about $4 to $8) for meals that range from a handful of cakes to a giant box of groceries, in an attempt to understand what a tech company can teach me about food waste in my own city.

    Users who open the TGTG app are presented with a list of establishments that either have food going spare right now or expect to in the near future. Provided is a brief description of the restaurant, a price, and a time slot. Users pay through the app, but this is not a delivery service. Surprise bags—customers have only a vague idea of what’s inside before they buy—have to be collected in person.

    I start my experiment at 9:30 on a Monday morning, in the glistening lobby of the Novotel Hotel, steps away from the River Thames. Of all the breakfast options available the night before, this was the most convenient—en route to my office and offering a pickup slot that means I can make my 10 am meeting. When I say I’m here for TGTG, a suited receptionist nods and gestures toward the breakfast buffet. This branch of the Novotel is a £200-a-night hotel, yet staff do not seem to begrudge the £4.50 entry fee I paid in exchange for leftover breakfast. A homeless charity tells me its clients like the app for precisely that reason: cheap food, without the stigma. A server politely hands over my white-plastic surprise bag with two polystyrene boxes inside, as if I am any other guest.

    I open the boxes in my office. One is filled with mini pastries, while the other is overflowing with Full English. Two fried eggs sit atop a mountain of scrambled eggs. Four sausages jostle for space with a crowd of mushrooms. I diligently start eating—a bite of cold fried egg, a mouthful of mushrooms, all four sausages. I finish with a croissant. This is enough to make me feel intensely full, verging on sick, so I donate the croissants to the office kitchen and tip the rest into the bin. This feels like a disappointing start. I am supposed to be rescuing waste food, not throwing it away.

    Over the next two days, I live like a forager in my city, molding my days around pickups. I walk and cycle to cafés, restaurants, markets, supermarkets; to familiar haunts and places I’ve never noticed. Some surprise bags last for only one meal, others can be stretched out for days. On Tuesday morning, my £3.59 surprise bag includes a small cake and a slightly stale sourdough loaf, which provides breakfast for three more days. When I go back to the same café the following week, without using the app, the loaf alone costs £6.95.

    TGTG was founded in Copenhagen in 2015 by a group of Danish entrepreneurs who were irked by how much food was wasted by all-you-can-eat buffets. Their idea to repurpose that waste quickly took off, and the app’s remit expanded to include restaurants and supermarkets. A year after the company was founded, Mette Lykke was sitting on a bus when a woman showed her the app and how it worked. She was so impressed, she reached out to the company to ask if she could help. Lykke has now been CEO for six years.

    “I just hate wasting resources,” she says. “It was just this win-win-win concept.” To her, the restaurants win because they get paid for food they would have otherwise thrown away; the customer wins because they get a good deal while simultaneously discovering new places; and the environment wins because, she says, food waste contributes 10 percent of our global greenhouse gas emissions. When thrown-away food rots in a landfill, it releases methane into the atmosphere—with homes and restaurants the two largest contributors.

    But the app doesn’t leave me with the impression I’m saving the planet. Instead, I feel more like I’m on a daily treasure hunt for discounted food. On Wednesday, TGTG leads me to a railway arch which functions as a depot for the grocery delivery app Gorillas. Before I’ve even uttered the words “Too Good To Go,” a teenager with an overgrown fringe emerges silently from the alleys of shelving units with this evening’s bag: groceries, many still days away from expiring, that suspiciously add up to create an entire meal for two people. For £5.50, I receive fresh pasta, pesto, cream, bacon, leeks, and a bag of stir-fry vegetables, which my husband merges into a single (delicious) pasta dish. It feels too convenient to be genuine waste. Perhaps Gorillas is attempting to convert me into its own customer? When I ask its parent company, Getir, how selling food well in date helps combat food waste, the company does not reply to my email.

    I am still thinking about my Gorillas experience at lunchtime on Thursday as I follow the app’s directions to the Wowshee falafel market stall, where 14 others are already queuing down the street. A few casual conversations later, I realize I am one of at least four TGTG users in the line. Seeing so many of us in one place again makes me wonder if restaurants are just using the app as a form of advertising. But Wowshee owner Ahmed El Shimi describes the marketing benefits as only a “little bonus.” For him, the app’s main draw is it helps cut down waste. “We get to sell the product that we were going to throw away anyway,” he says. “And it saves the environment at the same time.” El Shimi, who says he sells around 20 surprise bags per day, estimates using TGTG reduces the amount of food the stall wastes by around 60 percent. When I pay £5 for two portions of falafel—which lasts for lunch and dinner—the business receives £3.75 before tax, El Shimi says. “It’s not much, but it’s better than nothing.”

    On Friday, my final day of the experiment, everything falls apart. I sleep badly and wake up late. The loaf from earlier in the week is rock solid. For breakfast I eat several mini apple pies, part of a generous £3.09 Morrisons supermarket haul from the night before. Browsing the app, I find nothing that appeals to me, and even if something did, I’m too tired to face leaving the house to collect it. After four days of eating nothing but waste food, I crack and seek solace in familiar ingredients buried in my cupboard: two fried eggs on my favorite brand of seeded brown bread.

    TGTG is not built for convenience. For me, the app is an answer to office lunch malaise. It pulled me out of my lazy routine while helping me eat well—in central London—on a £5 budget. In the queue for falafel, I met a fellow app user who told me how, before she discovered the app, she would eat the same sandwich from the same supermarket for lunch every day. For people without access to a kitchen, it offers a connection to an underworld of hot food going spare.


  • Marc Andreessen Once Called Online Safety Teams an Enemy. He Still Wants Walled Gardens for Kids

    In his polarizing “Techno-Optimist Manifesto” last year, venture capitalist Marc Andreessen listed a number of enemies to technological progress. Among them were “tech ethics” and “trust and safety,” a term used for work on online content moderation, which he said had been used to subject humanity to “a mass demoralization campaign” against new technologies such as artificial intelligence.

    Andreessen’s declaration drew both public and quiet criticism from people working in those fields—including at Meta, where Andreessen is a board member. Critics saw his screed as misrepresenting their work to keep internet services safer.

    On Wednesday, Andreessen offered some clarification: When it comes to his 9-year-old son’s online life, he’s in favor of guardrails. “I want him to be able to sign up for internet services, and I want him to have like a Disneyland experience,” the investor said in an onstage conversation at a conference for Stanford University’s Human-Centered AI research institute. “I love the internet free-for-all. Someday, he’s also going to love the internet free-for-all, but I want him to have walled gardens.”

    Contrary to how his manifesto may have read, Andreessen went on to say he welcomes tech companies—and by extension their trust and safety teams—setting and enforcing rules for the type of content allowed on their services.

    “There’s a lot of latitude company by company to be able to decide this,” he said. “Disney imposes different behavioral codes in Disneyland than what happens in the streets of Orlando.” Andreessen alluded to how tech companies can face government penalties for allowing child sexual abuse imagery and certain other types of content, so they can’t be without trust and safety teams altogether.

    So what kind of content moderation does Andreessen consider an enemy of progress? He explained that he fears two or three companies dominating cyberspace and becoming “conjoined” with the government in a way that makes certain restrictions universal, causing what he called “potent societal consequences” without specifying what those might be. “If you end up in an environment where there is pervasive censorship, pervasive controls, then you have a real problem,” Andreessen said.

    The solution as he described it is ensuring competition in the tech industry and a diversity of approaches to content moderation, with some having greater restrictions on speech and actions than others. “What happens on these platforms really matters,” he said. “What happens in these systems really matters. What happens in these companies really matters.”

    Andreessen didn’t bring up X, the social platform run by Elon Musk and formerly known as Twitter, in which his firm Andreessen Horowitz invested when the Tesla CEO took over in late 2022. Musk soon laid off much of the company’s trust and safety staff, shut down Twitter’s AI ethics team, relaxed content rules, and reinstated users who had previously been permanently banned.

    Those changes paired with Andreessen’s investment and manifesto created some perception that the investor wanted few limits on free expression. His clarifying comments were part of a conversation with Fei-Fei Li, codirector of Stanford’s HAI, titled “Removing Impediments to a Robust AI Innovative Ecosystem.”

    During the session, Andreessen also repeated arguments he has made over the past year that slowing down development of AI through regulations or other measures recommended by some AI safety advocates would repeat what he sees as the mistaken US retrenchment from investment in nuclear energy several decades ago.

    Nuclear power would be a “silver bullet” to many of today’s concerns about carbon emissions from other electricity sources, Andreessen said. Instead the US pulled back, and climate change hasn’t been contained the way it could have been. “It’s an overwhelmingly negative, risk-aversion frame,” he said. “The presumption in the discussion is, if there are potential harms therefore there should be regulations, controls, limitations, pauses, stops, freezes.”

    For similar reasons, Andreessen said, he wants to see greater government investment in AI infrastructure and research and a freer rein given to AI experimentation by, for instance, not restricting open-source AI models in the name of security. If he wants his son to have the Disneyland experience of AI, some rules, whether from governments or trust and safety teams, may be necessary too.


  • OpenAI Employees Warn of a Culture of Risk and Retaliation

    A group of current and former OpenAI employees have issued a public letter warning that the company and its rivals are building artificial intelligence with undue risk, without sufficient oversight, and while muzzling employees who might witness irresponsible activities.

    “These risks range from the further entrenchment of existing inequalities, to manipulation and misinformation, to the loss of control of autonomous AI systems potentially resulting in human extinction,” reads the letter published at righttowarn.ai. “So long as there is no effective government oversight of these corporations, current and former employees are among the few people who can hold them accountable.”

    The letter calls for not just OpenAI but all AI companies to commit to not punishing employees who speak out about their activities. It also calls for companies to establish “verifiable” ways for workers to provide anonymous feedback on their activities. “Ordinary whistleblower protections are insufficient because they focus on illegal activity, whereas many of the risks we are concerned about are not yet regulated,” the letter reads. “Some of us reasonably fear various forms of retaliation, given the history of such cases across the industry.”

    OpenAI came under criticism last month after a Vox article revealed that the company had threatened to claw back employees’ equity if they did not sign non-disparagement agreements forbidding them from criticizing the company or even mentioning the existence of such an agreement. OpenAI’s CEO, Sam Altman, said on X recently that he was unaware of such arrangements and that the company had never clawed back anyone’s equity. Altman also said the clause would be removed, freeing employees to speak out. OpenAI did not respond to a request for comment by the time of posting.

    OpenAI has also recently changed its approach to managing safety. Last month, an OpenAI research group responsible for assessing and countering the long-term risks posed by the company’s more powerful AI models was effectively dissolved after several prominent figures left and the remaining members of the team were absorbed into other groups. A few weeks later, the company announced that it had created a Safety and Security Committee, led by Altman and other board members.

    Last November, Altman was fired by OpenAI’s board for allegedly failing to disclose information and deliberately misleading them. After a very public tussle, Altman returned to the company and most of the board was ousted.

    The letter’s signatories include people who worked on safety and governance at OpenAI, current employees who signed anonymously, and researchers who currently work at rival AI companies. It was also endorsed by several big-name AI researchers including Geoffrey Hinton and Yoshua Bengio, who both won the Turing Award for pioneering AI research, and Stuart Russell, a leading expert on AI safety.

    Former employees who have signed the letter include William Saunders, Carroll Wainwright, and Daniel Ziegler, all of whom worked on AI safety at OpenAI.

    “The public at large is currently underestimating the pace at which this technology is developing,” says Jacob Hilton, a researcher who previously worked on reinforcement learning at OpenAI and who left the company more than a year ago to pursue a new research opportunity. Hilton says that although companies like OpenAI commit to building AI safely, there is little oversight to ensure that is the case. “The protections that we’re asking for, they’re intended to apply to all frontier AI companies, not just OpenAI,” he says.

    “I left because I lost confidence that OpenAI would behave responsibly,” says Daniel Kokotajlo, a researcher who previously worked on AI governance at OpenAI. “There are things that happened that I think should have been disclosed to the public,” he adds, declining to provide specifics.

    Kokotajlo says the letter’s proposal would provide greater transparency, and he believes there’s a good chance that OpenAI and others will reform their policies given the negative reaction to news of the non-disparagement agreements. He also says that AI is advancing with worrying speed. “The stakes are going to get much, much, much higher in the next few years,” he says, “at least so I believe.”




  • A Nonprofit Tried to Fix Tech Culture—but Lost Control of Its Own

    Allen, a data scientist, and Massachi, a software engineer, worked for nearly four years at Facebook on some of the uglier aspects of social media, combating scams and election meddling. They didn’t know each other but both quit in 2019, frustrated at feeling a lack of support from executives. “The work that teams like the one I was on, civic integrity, was being squandered,” Massachi said in a recent conference talk. “Worse than a crime, it was a mistake.”

    Massachi first conceived the idea of using expertise like that he’d developed at Facebook to drive greater public attention to the dangers of social platforms. He launched the nonprofit Integrity Institute with Allen in late 2021, after a former colleague connected them. The timing was perfect: Frances Haugen, another former Facebook employee, had just leaked a trove of company documents, catalyzing new government hearings in the US and elsewhere about problems with social media. It joined a new class of tech nonprofits such as the Center for Humane Technology and All Tech Is Human, started by people working in industry trenches who wanted to become public advocates.

    Massachi and Allen infused their nonprofit, initially bankrolled by Allen, with tech startup culture. Early staff with backgrounds in tech, politics, or philanthropy didn’t make much, sacrificing pay for the greater good as they quickly produced a series of detailed how-to guides for tech companies on topics such as preventing election interference. Major tech philanthropy donors collectively committed a few million dollars in funding, including the Knight, Packard, MacArthur, and Hewlett foundations, as well as the Omidyar Network. Through a university-led consortium, the institute got paid to provide tech policy advice to the European Union. And the organization went on to collaborate with news outlets, including WIRED, to investigate problems on tech platforms.

    To expand its capacity beyond its small staff, the institute assembled an external network of two dozen founding experts it could tap for advice or research help. The network of so-called institute “members” grew rapidly to include 450 people from around the world in the following years. It became a hub for tech workers ejected during tech platforms’ sweeping layoffs, which significantly reduced trust and safety, or integrity, roles that oversee content moderation and policy at companies such as Meta and X. Those who joined the institute’s network, which is free but involves passing a screening, gained access to part of its Slack community where they could talk shop and share job opportunities.

    Major tensions began to build inside the institute in March last year, when Massachi unveiled an internal document on Slack titled “How We Work” that barred use of terms including “solidarity,” “radical,” and “free market,” which he said come off as partisan and edgy. He also encouraged avoiding the term BIPOC, an acronym for “Black, Indigenous, and people of color,” which he described as coming from the “activist space.” His manifesto seemed to echo the workplace principles that cryptocurrency exchange Coinbase had published in 2020, which barred discussions of politics and social issues not core to the company, drawing condemnation from some other tech workers and executives.

    “We are an internationally-focused open-source project. We are not a US-based liberal nonprofit. Act accordingly,” Massachi wrote, calling for staff to take “excellent actions” and use “old-fashioned words.” At least a couple of staffers took offense, viewing the rules as backward and unnecessary. An institution devoted to taming the thorny challenge of moderating speech now had to grapple with those same issues at home.


  • Local Coworking Spaces Thrive Where WeWork Dared Not Go

    The white colonial revival church with its high steeple adds an idyllic architectural touch to the affluent town of Huntington, a Long Island suburb of New York City. But a sign grabs the eye from the road: “Coworking space,” it says. “Kind of like a WeWork. Was a church, but not anymore.”

    The former church might have been leveled and replaced with condos had Michael Hartofilis not bought it and repurposed it as a coworking venue called Main Space that opened earlier this year. What was once a sanctuary with a high ceiling has been split into two floors of coworking space, with cubicles, glass phone booths, and minimalist art. Industrial-style beams and modern, geometric light fixtures are juxtaposed with the preserved, intricate crown molding and artisan details that hug the building’s windows and doorways.

    I spent a morning working out of the bisected sanctuary, where cubicles with ergonomic desk chairs have replaced church pews. Neon signs and bright colors make it easy to forget Main Space was once a church, and it has all the amenities of a typical coworking space—a gym, ice bath, kitchen, various conference rooms with comfortable armchairs and patterned wallpaper, and an outdoor patio decorated with a string of lights. But it’s also embedded in the community. On a Thursday afternoon, people were scattered at desks throughout the building and in conference rooms, chatting with one another between their own business calls.

    “Ideally, it is local people” who sign up for the coworking space, says Hartofilis, who also heads an energy company and is working on a neighborhood social app. He’s hoping those who come feel like they’re part of something exclusive and get to know one another. But people have already come from neighboring towns, or used it as a meeting place between New York City and towns on Long Island. “There’s not a whole lot of supply as far as coworking spaces, there’s nothing like this.”

    [Image: The interior of a row of desks inside the coworking space. Courtesy of Main Space]

    After Covid changed work patterns and styles, coworking is hanging on. The industry is growing and is expected to continue doing so—despite negative headlines about the company that brought coworking to the masses: WeWork. The coworking behemoth filed for bankruptcy in November, sparking concerns about the model after it took on office leases at a rapid pace and sought to sublease desks out at a premium. Rising interest rates and massive shifts in the office space marketplace following the Covid outbreak hammered the coworking giant, which was at one time valued at $47 billion. But WeWork is now preparing to right itself and exit bankruptcy at the end of May, getting $450 million in new investments and shedding excess office space after renegotiating leases. And industry experts say there’s lots of potential for coworking to mature.

    “Coworking is a great product,” says Jonathan Wasserstrum, a partner at Unwritten Capital, who has invested in Switchyards, a coworking company in the US southeast which shuns the title of coworking in favor of “work clubs.” The company has spaces in Atlanta; Nashville, Tennessee; and Charlotte, North Carolina. A former school, a motorcycle garage, a warehouse where elevators were tested, and a church are among its offerings. Coworking “is in high demand, and will continue to be in high demand,” Wasserstrum says.

    Many of the memberships at Switchyards’ locations are sold out. The company plans to have 25 clubs by the end of the year—with a total of 200 in the next five years. The design and music selection take inspiration from libraries, coffee shops, and hotel lobbies more than offices.


  • OpenAI’s Long-Term AI Risk Team Has Disbanded

    In July last year, OpenAI announced the formation of a new research team that would prepare for the advent of supersmart artificial intelligence capable of outwitting and overpowering its creators. Ilya Sutskever, OpenAI’s chief scientist and one of the company’s cofounders, was named as the colead of this new team. OpenAI said the team would receive 20 percent of its computing power.

    Now OpenAI’s “superalignment team” is no more, the company confirms. That comes after the departures of several researchers involved, Tuesday’s news that Sutskever was leaving the company, and the resignation of the team’s other colead. The group’s work will be absorbed into OpenAI’s other research efforts.

    Sutskever’s departure made headlines because although he’d helped CEO Sam Altman start OpenAI in 2015 and set the direction of the research that led to ChatGPT, he was also one of the four board members who fired Altman in November. Altman was restored as CEO five chaotic days later after a mass revolt by OpenAI staff and the brokering of a deal in which Sutskever and two other company directors left the board.

    Hours after Sutskever’s departure was announced on Tuesday, Jan Leike, the former DeepMind researcher who was the superalignment team’s other colead, posted on X that he had resigned.

    Neither Sutskever nor Leike responded to requests for comment, and they have not publicly commented on why they left OpenAI. Sutskever did offer support for OpenAI’s current path in a post on X. “The company’s trajectory has been nothing short of miraculous, and I’m confident that OpenAI will build AGI that is both safe and beneficial” under its current leadership, he wrote.

    The dissolution of OpenAI’s superalignment team adds to recent evidence of a shakeout inside the company in the wake of last November’s governance crisis. Two researchers on the team, Leopold Aschenbrenner and Pavel Izmailov, were dismissed for leaking company secrets, The Information reported last month. Another member of the team, William Saunders, left OpenAI in February, according to an internet forum post in his name.

    Two more OpenAI researchers working on AI policy and governance also appear to have left the company recently. Cullen O’Keefe left his role as research lead on policy frontiers in April, according to LinkedIn. Daniel Kokotajlo, an OpenAI researcher who has coauthored several papers on the dangers of more capable AI models, “quit OpenAI due to losing confidence that it would behave responsibly around the time of AGI,” according to a posting on an internet forum in his name. None of the researchers who have apparently left responded to requests for comment.

    OpenAI declined to comment on the departures of Sutskever or other members of the superalignment team, or the future of its work on long-term AI risks. Research on the risks associated with more powerful models will now be led by John Schulman, who coleads the team responsible for fine-tuning AI models after training.


