Tag: eu

  • The EU Has New Carry-On Luggage Rules. Here’s What to Know Before You Fly

    If you’re taking a flight to any country that’s a member of the European Union—and there are 27 of them—there are some updated carry-on luggage rules you should know before you turn up at the airport. When you pass through security, agents will ask you to remove liquids and electronics from your carry-on so they can be scanned.

    In theory, these rule changes are only temporary: They’re a stopgap solution while we wait for the next generation of security scanners to go fully live. The implementation of these C3 scanners, which can properly analyze liquids and electronics so they don’t have to be taken out of your hand luggage, has been delayed beyond the original June 2024 deadline.

    The official implementation date for these new carry-on baggage rules was September 1, 2024, so they are already in effect. There’s no fixed date for when they will be relaxed, because there are a lot of factors at play; it’s likely the rules will remain in place until at least the middle of 2025.

    Which Airports Are Affected?

    To be clear, these aren’t brand-new rules for your carry-on luggage. What’s happening is that EU airports are reverting to the previous set of rules about what types of things need to be taken out of your carry-on for inspection when you pass through security.

    All airports in EU countries are affected, as are some airports in the UK (including Heathrow, Gatwick, Stansted, and Manchester) and airports in Iceland, Switzerland, Liechtenstein, and Norway.

    Strictly speaking, only airports that have C3 scanners installed should be rolling back their rules; other airports that never installed the C3 scanners have continued to follow the old procedures. The rollout of the new tech has been costlier and taken longer than expected, and there are still bugs in the system—so the old security rules are once again required.

    Officially, it’s a “tech issue” with the new equipment: Although the machines have been installed in a number of airports, it seems their scanning capabilities aren’t quite up to the high level required. While that gets sorted out, the scanners can’t be relied upon to spot dangerous contents in luggage.

    Since getting items in and out of bags takes time, and some passengers won’t know exactly what they’re supposed to be doing, you may want to build extra time into your schedule to allow for queues and delays.

    What Are the Rules?

    To guard against the threat of explosives, all liquids and electronics will need to be taken out of bags and scanned separately. In addition, liquids must be in containers no bigger than 100 milliliters (3.4 fluid ounces) and placed in a clear plastic bag of around 20 x 20 cm (7.9 x 7.9 inches).

    This “100 ml rule” applies to all liquids, including (but not limited to) drinks, semiliquid foods like soups, cosmetics and toiletries, sprays, toothpaste, shower gel, hair gel, and contact lens solution. As usual, these liquids and typical electronics can be put in your checked luggage with no issue.

    Exceptions to the 100-ml rule are sometimes made for those traveling with small babies and for those with special dietary and health requirements (including people who need to carry medication). If you fit into these categories, you must check in advance with the airport, and if you’re taking medication with you then you may need a doctor’s note.

    For seasoned travelers, this will all be pretty familiar—but hopefully, as the new baggage scanners come online, the security checkpoint process at airports will become faster and more streamlined overall. If you’re in any doubt about the rules, check with your airline and the airport involved close to the time you’re traveling.

    Finally, a note on something that isn’t changing, at least not yet: While there have been rumors that the EU is going to apply rules on standardized case sizes for carry-on luggage, nothing has been decided. The idea has been discussed, but for the time being there’s no single size standard.

  • My Memories Are Just Meta’s Training Data Now

    In R. C. Sherriff’s novel The Hopkins Manuscript, readers are transported to a world 800 years after a cataclysmic event ended Western civilization. In pursuit of clues about a blank spot in their planet’s history, scientists belonging to a new world order discover diary entries in a swamp-infested wasteland formerly known as England. For the inhabitants of this new empire, it is only through this record of a retired schoolteacher’s humdrum rural life, his petty vanities and attempts to breed prize-winning chickens, that they begin to learn about 20th-century Britain.

    I once believed that if I were to teach futuristic beings about life on Earth, I could produce a time capsule more profound than that of Sherriff’s small-minded protagonist, Edgar Hopkins. But scrolling through my decade-old Facebook posts this week, I was confronted with the possibility that my legacy may be even more drab.

    Earlier this month, Meta announced that my teenage status updates were exactly the kind of content it wants to pass on to future generations of artificial intelligence. From June 26, old public posts, holiday photos, and even the names of millions of Facebook and Instagram users around the world would effectively be treated as a time capsule of humanity and transformed into training data.

    That means my mundane posts about university essay deadlines (“3 energy drinks down 1,000 words to go”) as well as unremarkable holiday snaps (one captures me slumped over my phone on a stationary ferry) are about to become part of that corpus. The fact that these memories are so dull, and also very personal, makes Meta’s interest more unsettling.

    The company says it is only interested in content that is already public: private messages, posts shared exclusively with friends, and Instagram Stories are out of bounds. Despite that, AI is suddenly feasting on personal artifacts that have, for years, been gathering dust in unvisited corners of the internet. For those reading from outside Europe, the deed is already done. The deadline announced by Meta applied only to Europeans. The posts of American Facebook and Instagram users have been training Meta AI models since 2023, according to company spokesperson Matthew Pollard.

    Meta is not the only company turning my online history into AI fodder. WIRED’s Reece Rogers recently discovered that Google’s AI search feature was copying his journalism. But finding out exactly which personal remnants are feeding future chatbots was not easy. Some sites I’ve contributed to over the years are hard to trace. Early social network Myspace was acquired by Time Inc. in 2016, which was in turn acquired by Meredith Corporation two years later. When I asked Meredith about my old account, it replied that Myspace had since been spun off to an advertising firm, Viant Technology. An email to a company contact listed on its website bounced back with a message that the address “couldn’t be found.”

    Asking companies still in business about my old accounts was more straightforward. Blogging platform Tumblr, owned by WordPress owner Automattic, said that unless I’d opted out, the public posts I made as a teenager will be shared with “a small network of content and research partners, including those that train AI models,” per a February announcement. Yahoo Mail, which I used for years, told me that a sample of old emails—which have apparently been “anonymized” and “aggregated”—are being “utilized” by an internal AI model to do things like summarize messages. Microsoft-owned LinkedIn also said my public posts were being used to train AI, although some “personal” details included in those posts were excluded, according to a company spokesperson, who did not specify what those details were.

  • Europe Scrambles for Relevance in the Age of AI

    That concentration of power is uncomfortable for European governments. It makes European companies downstream customers of the future, importing the latest services and technology in exchange for money and data sent westward across the Atlantic. And these concerns have taken on a new urgency—partly because some in Brussels perceive a growing gap in values and beliefs between Silicon Valley and the median EU citizen and their elected representatives; and partly because AI looms large in the collective imagination as the engine of the next technological revolution.

    European fears of lagging in AI predate ChatGPT. In 2018, the European Commission issued an AI plan calling for “AI made in Europe” that could compete with the US and China. But beyond a desire for some kind of control over the shape of the technology, the operational definition of AI sovereignty has become pretty fuzzy. “For some people, it means we need to get our act together to fight back against Big Tech,” says Daniel Mügge, a professor of political arithmetic at the University of Amsterdam who studies technology policy in the EU. “To others, it means there’s nothing wrong with Big Tech, as long as it’s European, so let’s get cracking and make it happen.”

    Those competing priorities have begun to complicate EU regulation. The bloc’s AI Act, which passed the European Parliament in March and is likely to become law this summer, has a heavy focus on regulating potential harms and privacy concerns around the technology. However, some member states, notably France, made clear during negotiations over the law that they fear regulation could shackle their emerging AI companies, which they hope will become European alternatives to OpenAI.

    Speaking before last November’s UK summit on AI safety, French finance minister Bruno Le Maire said that Europe needed to “innovate before it regulates” and that the continent needed “European actors mastering AI.” The AI Act’s final text includes a commitment to making the EU “a leader in the uptake of trustworthy AI.”

    “The Italians and the Germans and the French at the last minute thought: ‘Well, we need to cut European companies some slack on foundation models,’” Mügge says. “That is wrapped up in this idea that Europe needs European AI. Since then, I feel that people have realized that this is a little bit more difficult than they would like.”

    Sarlin, who has been on a tour of European capitals recently, including meeting with policymakers in Brussels, says that Europe does have some of the elements it needs to compete. To be a player in AI, you have to have data, computing power, talent, and capital, he says.

    Data is fairly widely available, Sarlin adds, and Europe has AI talent, although it sometimes struggles to retain it.

    To marshal more computing power, the EU is building a pan-European network of high-performance computing facilities and offering startups access to supercomputers via its “AI Factories” initiative.

    Accessing the capital needed to build big AI projects and companies is also challenging, with a wide gulf between the US and everyone else. According to Stanford University’s AI Index report, private investment in US AI companies topped $67 billion in 2023, more than 35 times the amount invested in Germany or France. Research from Accel Partners shows that in 2023, the seven largest private investment rounds by US generative AI companies totaled $14 billion. The top seven in Europe totaled less than $1 billion.

  • Why the EU’s Vice President Isn’t Worried About Moon-Landing Conspiracies on YouTube

    When European Union vice president Věra Jourová met with YouTube CEO Neal Mohan in California last week, they fell to talking about the long-running conspiracy theory that the moon landings were faked. YouTube has faced calls from some users and advocacy groups to remove videos that question the historic missions. Like other videos denying accepted science, they have been booted from recommendations and carry an added Wikipedia link directing viewers to debunking context.

    But as Mohan spoke about those measures, Jourová made something clear: Fighting lunar lunatics or flat-earthers shouldn’t be a priority. “If the people want to believe it, let them do,” she said. As the official charged with protecting Europe’s democratic values, she thinks it’s more important to make sure YouTube and other big platforms don’t spare a euro that could be invested in fact-checking or product changes to curb false or misleading content that threatens the EU’s security.

    “We are focusing on the narratives which have the potential to mislead voters, which could create big harm to society,” Jourová tells WIRED in an interview. Unless conspiracy theories could lead to deaths, violence, or pogroms, she says, don’t expect the EU to be demanding action against them. Content like the recent fake news report announcing that Poland is mobilizing its troops in the middle of an election? That better not catch on as truth online.

    In Jourová’s view, her conversation with Mohan, and similar discussions she held last week with the CEOs of TikTok, X, and Meta, show how the EU is helping companies understand what it takes to counter disinformation, as now required under the bloc’s tough new Digital Services Act. Starting this year, the DSA requires the internet’s biggest platforms, including YouTube, to take steps to combat disinformation or risk fines of up to 6 percent of their global sales.

    Civil liberties activists have been concerned that the DSA ultimately could enable censorship by the bloc’s more authoritarian regimes. A strong showing by far-right candidates in the EU’s parliamentary elections taking place later this week also could lead to its uneven enforcement.

    YouTube spokesperson Nicole Bell says the company is aligned with Jourová on preventing egregious real-world harm and also removing content that misleads voters on how to vote or encourages interference in the democratic processes. “Our teams will continue to work around the clock,” Bell says of monitoring problematic videos about this week’s EU elections.

    Jourová, who expects her five-year term to end later this year (in part because her Czech political party, ANO, is no longer in power at home in Czechia and so cannot renominate her), contends that the DSA is not meant to enable anything more than appropriate moderation of the most egregious content. She doesn’t expect Mohan or any other tech executive to go a centimeter beyond what the law prescribes. “Overusage, overshooting on the basis of the EU legislation would be a big failure and a big danger,” she says.

    On the other hand, she acknowledges that if the companies aren’t seen to be stepping up to mitigate disinformation, then some influential politicians have threatened to seek stiffer rules that could border on outright censorship. “I hate this idea,” she says. “We don’t want this to happen.”

    But with the DSA offering guidelines more than bright lines, how are platforms to know when to act? Jourová’s “democracy tour” of Silicon Valley, as she calls it, is part of facilitating a dialog on policy. And she expects social media researchers, experts, and the press all to contribute to mapping the fuzzy border between free expression and destructive disinformation. She jokes that she doesn’t want to be seen as the “European Minister of the Truth,” as tempting as that title may be. Leaving it to politicians alone to define what’s acceptable online “would pave the way to hell,” she says.

  • Germany’s Far-Right Party Is Running Hateful Ads on Facebook and Instagram

    Earlier this month, a German court ruled that the country’s nationalist far-right party, Alternative for Germany (AfD), was potentially “extremist” and could warrant surveillance by the country’s intelligence apparatus.

    Campaign ads placed by the AfD have been allowed to appear on Facebook and Instagram anyway, according to a new report from the nonprofit advocacy organization Ekō shared exclusively with WIRED. Researchers found 23 ads from the party on Facebook and Instagram, which accrued 472,000 views and appear to violate Meta’s own policies around hate speech.

    The ads push the narrative that immigrants are dangerous and a burden on the German state ahead of the European Union’s elections in June.

    One ad placed by AfD politician Gereon Bollman asserts that Germany has seen “an explosion of sexual violence” since 2015, specifically blaming immigrants from Turkey, Syria, Afghanistan, and Iraq. The ad was seen by between 10,000 and 15,000 people in just four days, between March 16 and 20, 2024. Another ad, which had over 60,000 views, features a man of color lying in a hammock. Overlaid text reads, “AfD reveals: 686,000 illegal foreigners live at our expense!”

    Ekō also identified at least three ads that appear to have used generative AI to manipulate images, though only one ran after Meta put its manipulated-media policy into place. One shows a white woman with visible injuries, with accompanying text saying “the connection between migration and crime has been denied for years.”

    “Meta, and indeed other companies, have very limited ability to detect third party tools that generate AI imagery,” says Vicky Wyatt, senior campaign director at Ekō. “When extremist parties use those tools with their ads, they can create incredibly emotive imagery that can really move people. So it’s incredibly worrying.”

    In its submission to the European Commission’s consultation on election guidelines, obtained through a freedom of information request made by Ekō, Meta says “it is not yet possible for providers to identify all AI-generated content, particularly when actors take steps to seek to avoid detection, including by removing invisible markers.”

    Meta’s own policies prohibit ads that “claim people are threats to the safety, health, or survival of others based on their personal characteristics” and ads that “include generalizations that state inferiority, other statements of inferiority, expressions of contempt, expressions of dismissal, expressions of disgust, or cursing based on immigration status.”

    “We do not allow hate speech on our platforms and have Community Standards that apply to all content – including ads,” says Meta spokesperson Daniel Roberts. “Our ads review process has several layers of analysis and detection, both before and after an ad goes live, and this system is one of many we have in place to protect European elections.” Roberts told WIRED the company plans to review the ads flagged by Ekō but didn’t respond to questions about whether the German court’s designation of the AfD as potentially extremist would invite further scrutiny from Meta.

    Targeted ads, says Wyatt, can be powerful because extremist groups can more effectively target people who might sympathize with their views and “use Meta’s ads library to reach them.” Wyatt also says this allows the group to test which messages are most likely to resonate with voters.

  • Apple’s iPhone Browser-Choice Option Sucks. Its Competitors Have Ideas to Improve It

    A few representatives from smaller browser companies also said they wanted more information included with Apple’s choice process, such as definitions of what a browser is for less tech-savvy users and descriptions of the different browsers’ specialties. “Giving people information about the choice, and also information about what they’re choosing is really, really important,” says Kush Amlani, a global competition and regulatory counsel at Mozilla, which makes the Firefox browser.

    Sophie Dembinski, head of public policy and climate action at Ecosia, noted that Apple’s pop-up appears for all iPhone users, even those who have already gone into their phone’s settings and set an alternative browser as their default. By comparison, Google’s browser choice screen for Android users won’t show up if you’ve already set a third-party option as your preference.
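    The difference comes down to a single check before the screen is displayed. Here is a minimal, hypothetical sketch of the two flows as described above; the function and argument names are invented for illustration, since neither company’s actual implementation is public:

    ```python
    # Hypothetical sketch: when does a browser-choice screen appear?
    # Names are invented for illustration; this is not Apple's or Google's code.

    def should_show_choice_screen(current_default: str, preinstalled: str,
                                  always_show: bool) -> bool:
        """Decide whether to display the browser-choice pop-up."""
        if always_show:
            # Apple's reported behavior: the pop-up appears for every user,
            # even those who already set a third-party browser as default.
            return True
        # Google's reported behavior: skip the screen for users who have
        # already replaced the preinstalled browser with another default.
        return current_default == preinstalled

    # A user who has already switched their default browser to Firefox:
    print(should_show_choice_screen("Firefox", "Safari", always_show=True))   # True (Apple-style)
    print(should_show_choice_screen("Firefox", "Chrome", always_show=False))  # False (Google-style)
    ```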

    While many developers are unhappy with Apple’s implementation, not every company with a browser on the choice screen expressed frustration. “We believe that Apple’s approach to presenting the browser choice screen is fair and acceptable,” says Andrew Moroz Frost, the founder of Aloha Browser. He pointed to the randomized order in which browsers appear on the pop-up as one example of Apple designing it fairly.

    Richard Socher, the founder and CEO of You.com, seemed more encouraged by there being a browser choice screen that includes the search-focused startup rather than frustrated by Apple’s implementation. “I think it’s great that there’s not the default already preselected,” he says. Socher highlighted the randomized order as a positive sign as well.
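    Randomizing the display order is a simple fairness mechanism: no vendor benefits from a fixed top slot. A minimal sketch of what such randomization could look like (the browser list and function below are illustrative, not Apple’s actual code):

    ```python
    import random

    def choice_screen_order(browsers: list[str]) -> list[str]:
        """Shuffle the browser list so no vendor gets a fixed top slot."""
        shuffled = browsers.copy()  # leave the caller's list untouched
        random.shuffle(shuffled)    # uniform random permutation
        return shuffled

    # Each user sees an independently shuffled ordering:
    print(choice_screen_order(
        ["Safari", "Chrome", "Firefox", "Ecosia", "Aloha", "You.com", "Opera"]
    ))
    ```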

    Is this choice screen a true turning point for alternative browsers to grow their user base? “We’re expecting to have a clear picture on user uplift within months, not weeks,” says Dembinski. While some browsers reported initial upticks in downloads, it still seems too soon to make sweeping generalizations about the long-term efficacy of Apple’s choice screen.

    “We would like to encourage platform providers to also level out the playing field for app developers around the world, not just in the EU,” says Jan Standal, a vice president of product marketing at Opera. Some of the companies WIRED spoke with remain hopeful that the precedent of browser choice screens set by the DMA will inspire international software changes.

    Shortly after Apple’s choice screen launched, the European Commission announced that the screen would be part of its wider investigation into how Apple, Google, and Meta might be breaking these updated regulations: “The Commission is concerned that Apple’s measures, including the design of the web browser choice screen, may be preventing users from truly exercising their choice of services within the Apple ecosystem, in contravention of Article 6(3) of the DMA.” In keeping with its slow-moving tradition, this investigation may take up to a year to complete.

  • Meta Kills a Crucial Transparency Tool At the Worst Possible Time

    Earlier this month, Meta announced that it would be shutting down CrowdTangle, the social media monitoring and transparency tool that has allowed journalists and researchers to track the spread of mis- and disinformation. It will cease to function on August 14, 2024—just months before the US presidential election.

    Meta’s move is just the latest example of a tech company rolling back transparency and security measures as the world enters the biggest global election year in history. The company says it is replacing CrowdTangle with a new Content Library API, which will require researchers and nonprofits to apply for access to the company’s data. But the Mozilla Foundation and 140 other civil society organizations protested last week that the new offering lacks much of CrowdTangle’s functionality, asking the company to keep the original tool operating until January 2025.

    Meta spokesperson Andy Stone countered in posts on X that the groups’ claims “are just wrong,” saying the new Content Library will contain “more comprehensive data than CrowdTangle” and be made available to nonprofits, academics, and election integrity experts. But Meta did not respond to questions about why commercial newsrooms, like WIRED, are to be excluded.

    Brandon Silverman, cofounder and former CEO of CrowdTangle, who continued to work on the tool after Facebook acquired it in 2016, says it’s time to force platforms to open up their data to outsiders. The conversation has been edited for length and clarity.

    Vittoria Elliott: CrowdTangle has been incredibly important for journalists and researchers trying to hold tech companies accountable for the spread of mis- and disinformation. But it belongs to Meta. Could you talk a little bit about that tension?

    Brandon Silverman: I think there’s a bit too much of a public narrative that frustration with [New York Times columnist] Kevin Roose’s tweets is why they turned their back on CrowdTangle. I think the truth is that Facebook is moving out of news entirely.

    When CrowdTangle joined Facebook, they were all in on news and bought us to help the news industry. Fast-forward three years, and they’re like, “We’re done with that project.” There is a lot of responsibility that comes with hosting news on a platform, especially if you exist in essentially every community on Earth. I think they made a calculus at some point that it just wasn’t worth what it would cost to do responsibly.

    My takeaway when I left was that if you want to do this work in a way that really serves civil society in the way we need it to, you can’t do it inside the companies—and Meta was doing more than almost anyone else. It’s abundantly clear that we need our regulators and elected officials to decide what we, as a society, want and expect from these platforms and to make those [demands] legally required.

    What would that look like?

    I think we’re at the very beginning of an entire ecosystem of better tools doing this work. The European Union’s sweeping Digital Services Act has a bunch of transparency requirements around data sharing. One of those they sometimes call the CrowdTangle provision—it requires qualifying platforms to provide real-time access to public data.

    Over a dozen platforms now have new programs that allow outside researchers to get access to real-time public content. Alibaba, TikTok, YouTube—which has been a black box forever—are now spinning up these programs. It’s been very quiet, because they don’t necessarily want a ton of people using them. In some cases companies add these programs to their terms of service but don’t make any public announcement.
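    These access programs differ in their details, but most take the same shape: a vetted researcher authenticates with a token and pages through recent public posts. A minimal sketch of what polling such an endpoint could look like, assuming an entirely hypothetical REST API (the URL, parameters, and response fields are invented, not any platform’s real interface):

    ```python
    # Hypothetical sketch of polling a public-content API of the kind the
    # DSA's "CrowdTangle provision" requires. All names here are invented.
    import time

    import requests

    API_URL = "https://api.example-platform.com/v1/public-posts"  # placeholder
    ACCESS_TOKEN = "RESEARCHER_TOKEN"  # issued via the platform's access program

    def fetch_recent_public_posts(since_id=None):
        """Request one page of recent public posts, newest first."""
        params = {"limit": 100}
        if since_id:
            params["since_id"] = since_id  # only posts newer than the last seen
        resp = requests.get(
            API_URL,
            headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
            params=params,
            timeout=30,
        )
        resp.raise_for_status()
        return resp.json()

    def poll(interval_seconds=60):
        """Poll at a fixed interval to approximate real-time access."""
        last_seen = None
        while True:
            page = fetch_recent_public_posts(since_id=last_seen)
            posts = page.get("posts", [])
            for post in posts:
                print(post.get("id"), post.get("text", "")[:80])
            if posts:
                last_seen = posts[0]["id"]  # newest post on this page
            time.sleep(interval_seconds)
    ```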


