Tag: security

  • Microsoft’s AI Recall Tool Is Still Sucking Up Credit Card and Social Security Numbers

    What a week! On Monday, police arrested 26-year-old Luigi Mangione and charged him in the murder of UnitedHealthcare CEO Brian Thompson. Mangione’s five-day run from authorities ended after he was spotted eating at a McDonald’s in Altoona, Pennsylvania, about 300 miles from Manhattan, where Thompson was gunned down on the morning of December 4. Authorities say they found Mangione carrying fake IDs and a 3D-printed “ghost gun,” the model of which is known as the FMDA, or “Free Men Don’t Ask.”

    Meanwhile, a flood of mysterious drone sightings across New Jersey and neighboring states caused so much havoc, it quickly gained federal attention. While many people wondered why the US military couldn’t just shoot down the drones, the FBI, Department of Homeland Security, and independent experts say the drone mystery may not be much of a mystery, and the drones are probably mostly just airplanes.

    As for more terrestrial threats, we dove into the far-right realm of “Active Clubs,” small groups of young, fitness-focused men who are steeped in extremist ideology and linked to several violent attacks. While the man who helped invent the Active Club network, Robert Rundo, was sentenced in federal court this week, Active Clubs around the world are proliferating.

    Finally, we investigated cheating schemes that use tiny cameras to gain an illicit edge in poker, and we interrogated the ways humans will use generative AI to make the world a more dangerous place.

    But that’s not all. Each week, we round up the privacy and security news we didn’t cover in depth ourselves. Click the headlines to read the full stories. And stay safe out there.

    Back in May, Microsoft jubilantly announced Recall, an AI feature for some Windows PCs that silently takes screenshots every five seconds and then allows you to easily search through the resulting digital footprint. Forgotten where you saw a recipe online? Tapping a couple of keywords into Recall could, in theory, find the dish again. It didn’t take long for the privacy and security community to find gaping holes in the feature.

    In response, Microsoft delayed Recall’s launch and eventually made some significant changes—such as making Recall opt-in rather than on by default, better encrypting information captured by Recall, and adding authentication to access data that it stored. Recall finally launched for some users this month.

However, this week, testing of Recall by Tom’s Hardware demonstrated that a key safeguard Microsoft put in place can still fail. With a Recall setting called “filter sensitive information” turned on, Tom’s Hardware found that Recall still captured screenshots of some sensitive information, such as credit card numbers and Social Security numbers. When the publication typed a credit card number along with a username and password into a Notepad window, Recall captured them in its screenshots. “Similarly, when I filled out a loan application PDF in Microsoft Edge, entering a social security number, name and DOB, Recall captured that,” Avram Piltch writes. The tool, however, didn’t record those details when they were entered on a couple of online stores.
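Filters like “filter sensitive information” typically rely on pattern matching. Credit card numbers, for instance, are commonly flagged by pairing a digit-pattern search with the Luhn checksum that valid card numbers satisfy. The sketch below is purely illustrative of that general technique, not Microsoft’s actual implementation; the function names are hypothetical:

```python
import re

def luhn_valid(number: str) -> bool:
    """Return True if the digit string passes the Luhn checksum."""
    total = 0
    # Walk the digits right-to-left, doubling every second one.
    for i, ch in enumerate(reversed(number)):
        d = int(ch)
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9  # equivalent to summing the two digits of d
        total += d
    return total % 10 == 0

def find_card_like_numbers(text: str) -> list[str]:
    """Find 13-19 digit runs (spaces/dashes allowed) that pass Luhn."""
    hits = []
    for candidate in re.findall(r"\b(?:\d[ -]?){13,19}\b", text):
        digits = re.sub(r"[ -]", "", candidate)
        if 13 <= len(digits) <= 19 and luhn_valid(digits):
            hits.append(digits)
    return hits

# A Luhn-valid test card number is flagged; a random digit run is not.
print(find_card_like_numbers("Card: 4111 1111 1111 1111, ref 12345678901234"))
```

As Tom’s Hardware’s results suggest, checks of this kind are brittle in practice: they depend on recognizing the text in a screenshot in the first place, and they miss sensitive data that doesn’t match a known pattern.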


  • Human Misuse Will Make Artificial Intelligence More Dangerous

    OpenAI CEO Sam Altman expects AGI, or artificial general intelligence—AI that outperforms humans at most tasks—around 2027 or 2028. Elon Musk’s prediction is either 2025 or 2026, and he has claimed that he was “losing sleep over the threat of AI danger.” Such predictions are wrong. As the limitations of current AI become increasingly clear, most AI researchers have come to the view that simply building bigger and more powerful chatbots won’t lead to AGI.

    However, in 2025, AI will still pose a massive risk: not from artificial superintelligence, but from human misuse.

These misuses might be unintentional, such as lawyers over-relying on AI. Since the release of ChatGPT, a number of lawyers have been sanctioned for using AI to generate erroneous court filings, apparently unaware of chatbots’ tendency to make things up. In British Columbia, lawyer Chong Ke was ordered to pay opposing counsel’s costs after she included fictitious AI-generated cases in a legal filing. In New York, Steven Schwartz and Peter LoDuca were fined $5,000 for providing false citations. In Colorado, Zachariah Crabill was suspended for a year for citing fictitious court cases generated with ChatGPT and blaming a “legal intern” for the mistakes. The list is growing quickly.

    Other misuses are intentional. In January 2024, sexually explicit deepfakes of Taylor Swift flooded social media platforms. These images were created using Microsoft’s “Designer” AI tool. While the company had guardrails to avoid generating images of real people, misspelling Swift’s name was enough to bypass them. Microsoft has since fixed this error. But Taylor Swift is the tip of the iceberg, and non-consensual deepfakes are proliferating widely—in part because open-source tools to create deepfakes are available publicly. Ongoing legislation across the world seeks to combat deepfakes in hope of curbing the damage. Whether it is effective remains to be seen.

In 2025, it will get even harder to distinguish what’s real from what’s made up. The fidelity of AI-generated audio, text, and images is remarkable, and video will be next. This could lead to the “liar’s dividend”: those in positions of power repudiating evidence of their misbehavior by claiming that it is fake. In 2023, Tesla argued that a 2016 video of Elon Musk could have been a deepfake, in response to allegations that the CEO had exaggerated the safety of Tesla’s Autopilot system, leading to an accident. An Indian politician claimed that audio clips of him acknowledging corruption in his political party were doctored (the audio in at least one of the clips was verified as real by a press outlet). And two defendants in the January 6 riots claimed that videos they appeared in were deepfakes. Both were found guilty.

    Meanwhile, companies are exploiting public confusion to sell fundamentally dubious products by labeling them “AI.” This can go badly wrong when such tools are used to classify people and make consequential decisions about them. Hiring company Retorio, for instance, claims that its AI predicts candidates’ job suitability based on video interviews, but a study found that the system can be tricked simply by the presence of glasses or by replacing a plain background with a bookshelf, showing that it relies on superficial correlations.

There are also dozens of applications in health care, education, finance, criminal justice, and insurance where AI is currently being used to deny people important life opportunities. In the Netherlands, the Dutch tax authority used an AI algorithm to flag suspected child welfare fraud. It wrongly accused thousands of parents, often demanding that they pay back tens of thousands of euros. In the fallout, the prime minister and his entire cabinet resigned.

    In 2025, we expect AI risks to arise not from AI acting on its own, but because of what people do with it. That includes cases where it seems to work well and is over-relied upon (lawyers using ChatGPT); when it works well and is misused (non-consensual deepfakes and the liar’s dividend); and when it is simply not fit for purpose (denying people their rights). Mitigating these risks is a mammoth task for companies, governments, and society. It will be hard enough without getting distracted by sci-fi worries.


  • US Officials Recommend Encryption Apps Amid Chinese Telecom Hacking

A consortium of global law enforcement agencies led by Britain’s National Crime Agency announced a takedown operation this week against two major Russian money-laundering networks that process billions of dollars each year in more than 30 locations around the world. WIRED had exclusive access to the investigation, which uncovered new and troubling laundering techniques, particularly schemes to directly exchange cryptocurrency for cash. As the United States government scrambles to address China’s “Salt Typhoon” digital espionage campaign against US telecoms, two senators demanded this week that the Department of Defense investigate its failure to secure its own communications and address known vulnerabilities in US telecom infrastructure. Meanwhile, Signal Foundation president Meredith Whittaker spoke at WIRED’s The Big Interview event in San Francisco this week about Signal’s enduring commitment to bringing private, end-to-end encrypted communication services to people all over the world, regardless of geopolitical climate.

    A new smartphone scanner from the mobile device security firm iVerify can quickly and easily detect spyware and has already flagged seven devices infected with the invasive Pegasus surveillance tool. Programmer Micah Lee built a tool to help you save and delete your X posts after he offended Elon Musk and was banned from the platform. And privacy advocate Nighat Dad is fighting to protect women from digital harassment in Pakistan after escaping from an abusive marriage.

    The US Federal Trade Commission is targeting data brokers who it says unlawfully tracked protesters and US military personnel, but the enforcement efforts seem likely to trail off under the Trump administration. Similarly, the US Consumer Financial Protection Bureau has devised a strategy to impose new oversight on predatory data brokers, but the new administration may not continue the initiative. Some new laws are finally coming around the world in 2025 that will attempt to regulate the dysfunction of the digital advertising industry, but malicious advertising is still booming around the world and continues to play a big role in global scamming.

And there’s more. Each week, we round up the security and privacy news we didn’t cover in depth ourselves. Click the headlines to read the full stories. And stay safe out there.

    Remember how the US federal government spent much of the last three decades periodically decrying the dangers of strong, freely available encryption tools, arguing that because they enable criminals and terrorists, they should be outlawed or required to implement government-approved backdoors? As of this week, the government will never again be able to make that argument without privacy advocates pointing to a particular phone call where two officials recommended Americans use exactly those encryption tools to protect themselves amidst an ongoing massive breach of US telecoms by Chinese hackers.

In a briefing with reporters about the breach of no fewer than eight phone companies by the Chinese state-sponsored espionage hackers known as Salt Typhoon, officials from the Cybersecurity and Infrastructure Security Agency (CISA) and the FBI both said that, amid the still-uncontrolled infiltration of US telecoms that has exposed calls and texts, Americans should use encryption apps to safeguard their privacy. “Encryption is your friend, whether it’s on text messaging or if you have the capacity to use encrypted voice communication,” said Jeff Greene, CISA’s executive assistant director for cybersecurity. (Signal and WhatsApp, for instance, end-to-end encrypt calls and texts, though the officials didn’t name any particular apps.)

The recommendation, amid what one senator has called “the worst telecom hack in our nation’s history,” represents a stunning reversal from previous US officials’ rhetoric on encryption, and in particular from the FBI’s repeated calls for backdoor access to encrypted communications. In fact, it was exactly this sort of government-mandated wiretap capability required of US telecoms that the Salt Typhoon hackers in some cases exploited to access Americans’ communications.

The hacker group known as Secret Blizzard, Snake, or Turla, widely believed to work for Russia’s FSB intelligence agency, is known for using some of the most ingenious hacking techniques ever seen to spy on its victims. One trick has become its signature move: hacking the infrastructure of other hackers to stealthily piggyback on their access. This week, Microsoft’s threat intelligence researchers and the security firm Lumen Technologies revealed that Turla gained access to the servers of a Pakistan-based hacker group and used its visibility into victim networks to spy on government, military, and intelligence targets in India and Afghanistan of interest to the Kremlin. In some cases, Turla hijacked the Pakistani hackers’ access to install its own malware, while in other instances it appears to have used the other group’s tools for even greater stealth and deniability. The incident marks the fourth known time since 2017, when Turla penetrated an Iranian hacker group’s command-and-control servers, that the group has freeloaded on another hacker group’s infrastructure and tooling, according to Lumen.

The Russian government is known for turning a blind eye to cybercrime—until it doesn’t. This week, 15 convicted members of the notorious dark web market Hydra learned the limits of that forbearance when they reportedly received prison sentences ranging from 8 to 23 years, as well as an unprecedented life sentence for the site’s creator, Stanislav Moiseyev. Before it was taken down two years ago in a law enforcement operation led by IRS criminal investigators in the US and Germany’s BKA police agency, Hydra was a uniquely sprawling dark web marketplace, one that not only served as the post-Soviet world’s biggest online bazaar for narcotics but also ran a vast money-laundering machine for crimes including ransomware, scams, and sanctions evasion. In total, Hydra enabled more than $5 billion in dirty cryptocurrency transactions since 2015, according to crypto tracing firm Elliptic.

Russian law enforcement last week charged and arrested a software developer suspected of prolific contributions to multiple ransomware groups, including building malware used to extort money from businesses and other targets. The suspect is reportedly Mikhail Matveev, or “Wazawaka,” who has worked as an affiliate of ransomware gangs including Conti, LockBit, Babuk, DarkSide, and Hive. Social media reports indicate that Matveev confirmed his indictment and said that he has been released from law enforcement custody on bail.

    Russia’s prosecutor general did not name Matveev, but described charges last week against a 32-year-old hacker under Article 273 of Russia’s Criminal Code, which bans the creation or use of malware. The move came as Russia seemed to be sending some sort of message about its tolerance for cybercrime with the sentencing of the dark web marketplace Hydra’s staff, including a life sentence for its administrator. In 2023, the US government indicted and sanctioned Matveev.

In a disturbing scoop (one we didn’t cover last week due to the Thanksgiving holiday), Reuters reporters revealed that the FBI is investigating a lobbying consultancy hired by Exxon over the firm’s role in a hack-and-leak operation that targeted climate change activists. DCI Group, a lobbying firm working for Exxon at the time, allegedly gave a list of target activists to a private investigator, who then outsourced a hacking operation against those targets to mercenary hackers. After the private investigator, an Israeli man named Amit Forlit who was later arrested in London and faces US hacking charges, allegedly handed the hacked material to DCI, the firm leaked the activists’ internal communications about climate change litigation against Exxon to the media, Reuters found. The FBI, according to Reuters, has determined that DCI also previewed the material to Exxon before leaking it. “Those documents were directly employed by Exxon to come after me with all guns blazing,” one attorney working with the activist group, the Center for Climate Integrity, told Reuters. “It turned my life upside down.”

    Exxon has denied knowing about any hacking activities and DCI told Reuters in a statement that “we direct all our employees and consultants to comply with the law.”


  • She Escaped an Abusive Marriage—Now She Helps Women Battle Cyber Harassment

    Nighat Dad grew up in a conservative family in Jhang, in Pakistan’s Punjab province. The threat of early marriage hung over her childhood like a cloud. But despite their traditional values, Dad’s parents were determined that all their children get an education, and they moved the family to Karachi so she could complete her bachelor’s degree. “I never really thought I would work because I was never taught that we could work and be independent,” she says. “We always needed permission to do anything.”

    Dad thought a master’s in law might delay the inevitable betrothal, but soon after she completed the course, she found out her parents had arranged a marriage for her. She didn’t mind her new life of domestic chores in a household she describes as “lower-middle class”—that is, until the abuse started. “That’s when my legal education reminded me that this was wrong,” she says. “Our laws, our constitution, everything protects me, so why was I facing this? Why was I tolerating it?”

    With her family’s backing, Dad left her husband and filed for divorce. But after years of domestic violence and abuse and with no experience of working, she struggled with a lack of confidence. “I had no idea that women who are divorced and have a child face such difficulties in a society like ours,” she says. When her ex-husband filed a custody case for their two-month-old baby, Dad wasn’t sure how she would pay for a lawyer. That’s when her father reminded her that she was a lawyer too.

    Dad used her degree to win custody of her only child. In the process, she realized how many women in Pakistan were facing years of violence and systemic injustice. But the thing that bothered her most was the digital divide.

    Before her marriage, Dad’s family never allowed her access to her own cell phone, and when she finally did get one, her husband would use it as a surveillance tool—keeping track of who she called and who was texting her. She had an escape tool in her hand, but she couldn’t use it. “Going through that by myself made me realize how quickly technology is evolving, and how it’s creating virtual spaces for marginalized communities that might not have access to physical ones,” she says. “Facing those restrictions made me understand just how crucial it is to challenge societal norms and structures around women’s access to technology and the internet, so they can use it as freely as men.”

In 2012, Dad established the Digital Rights Foundation, an NGO that aims to close the digital divide and fight online abuse of women and other gender minorities in Pakistan. She began by helping women who reached out to the organization, offering advice on digital safety along with emotional and mental support. In 2016, the same year Pakistan finally passed legislation against online crimes, Dad and her team launched a cyber-harassment helpline. Since then, it has handled more than 16,000 complaints from across the country. “Sometimes, the police would give our phone numbers to victims seeking reliable help,” she says.


  • A New Phone Scanner That Detects Spyware Has Already Found 7 Pegasus Infections

    In recent years, commercial spyware has been deployed by more actors against a wider range of victims, but the prevailing narrative has still been that the malware is used in targeted attacks against an extremely small number of people. At the same time, though, it has been difficult to check devices for infection, leading individuals to navigate an ad hoc array of academic institutions and NGOs that have been on the front lines of developing forensic techniques to detect mobile spyware. On Tuesday, the mobile device security firm iVerify is publishing findings from a spyware detection feature it launched in May. Of 2,500 device scans that the company’s customers elected to submit for inspection, seven revealed infections by the notorious NSO Group malware known as Pegasus.

    The company’s Mobile Threat Hunting feature uses a combination of malware signature-based detection, heuristics, and machine learning to look for anomalies in iOS and Android device activity or telltale signs of spyware infection. For paying iVerify customers, the tool regularly checks devices for potential compromise. But the company also offers a free version of the feature for anyone who downloads the iVerify Basics app for $1. These users can walk through steps to generate and send a special diagnostic utility file to iVerify and receive analysis within hours. Free users can use the tool once a month. iVerify’s infrastructure is built to be privacy-preserving, but to run the Mobile Threat Hunting feature, users must enter an email address so the company has a way to contact them if a scan turns up spyware—as it did in the seven recent Pegasus discoveries.

    “The really fascinating thing is that the people who were targeted were not just journalists and activists, but business leaders, people running commercial enterprises, people in government positions,” says Rocky Cole, chief operating officer of iVerify and a former US National Security Agency analyst. “It looks a lot more like the targeting profile of your average piece of malware or your average APT group than it does the narrative that’s been out there that mercenary spyware is being abused to target activists. It is doing that, absolutely, but this cross section of society was surprising to find.”

    Seven out of 2,500 scans may sound like a small group, especially in the somewhat self-selecting customer base of iVerify users, whether paying or free, who want to be monitoring their mobile device security at all, much less checking specifically for spyware. But the fact that the tool has already found a handful of infections at all speaks to how widely the use of spyware has proliferated around the world. Having an easy tool for diagnosing spyware compromises may well expand the picture of just how often such malware is being used.

    “NSO Group sells its products exclusively to vetted US & Israel-allied intelligence and law enforcement agencies,” NSO Group spokesperson Gil Lainer told WIRED in a statement. “Our customers use these technologies daily.”

    iVerify says that it took significant investment to develop the detection tool because mobile operating systems like Android, and particularly iOS, are more locked down than traditional desktop operating systems and don’t allow monitoring software to have kernel access at the heart of the system. Cole says that the crucial insight was to use telemetry taken from as close to the kernel as possible to tune machine learning models for detection. Some spyware, like Pegasus, also has characteristic traits that make it easier to flag. In the seven detections, Mobile Threat Hunting caught Pegasus using diagnostic data, shutdown logs, and crash logs. But the challenge, Cole says, is in refining mobile monitoring tools to reduce false positives.

    Developing the detection capability has already been invaluable, though. Cole says that it helped iVerify identify signs of compromise on the smartphone of Gurpatwant Singh Pannun, a lawyer and Sikh political activist who was the target of an alleged, foiled assassination attempt by an Indian government employee in New York City. The Mobile Threat Hunting feature also flagged suspected nation state activity on the mobile devices of two Harris-Walz campaign officials—a senior member of the campaign and an IT department member—during the presidential race.

    “The age of assuming that iPhones and Android phones are safe out of the box is over,” Cole says. “The sorts of capabilities to know if your phone has spyware on it were not widespread. There were technical barriers and it was leaving a lot of people behind. Now you have the ability to know if your phone is infected with commercial spyware. And the rate is much higher than the prevailing narrative.”

    Updated at 12:12 pm EST, December 4, 2024, to include a statement from NSO Group.


  • With Threats to Encryption Looming, Signal’s Meredith Whittaker Says ‘We’re Not Changing’

    “We don’t want to be the outlier that proves the rule, we want to be a new set of rules leading the way to a much more open and diverse tech ecosystem,” Whittaker said, “that isn’t reliant on like five companies and 15 guys and a paradigm that is very, very stale and ultimately not healthy for the world and the future.”

    It costs around $50 million per year to run Signal and Whittaker noted at the event that there are no easy answers to finding that type of funding—or more—for projects that need consistent, independent, and secure backing without being subject to the forces of data monetization and surveillance capitalism.

    “None of this is simple, friend,” Whittaker said. “There’s a type of capital we need. How do we get it?”

    The first Trump presidency in the United States was increasingly hostile to encryption and independent tech, so with a new Trump administration looming and anti-encryption advocates making inroads in governments around the world, what comes next for Signal?

    “Signal knows who we are. Signal will continue being Signal,” Whittaker says. “Signal has one thing we do and we do it really well and we do it pretty obsessively, and that is: provide truly private communications infrastructure to everyone, everywhere globally. Full stop. We’re not changing.”


  • Are You Being Tracked by an AirTag? Here’s How to Check

While some guides to finding AirTags recommend using Bluetooth scanners, Eva Galperin, director of cybersecurity at the Electronic Frontier Foundation, does not consider this method reliable for tracker searching. “I have tried using various Bluetooth scanners in order to detect AirTags, and they do not work all the time,” she says.

    Millions of Americans still do not own a smartphone. Without a device on hand, you must rely on visual and audible clues to find any hidden AirTags. The circular white disc is slightly larger than a quarter. As reported by The New York Times, Ashley Estrada discovered an AirTag lodged under her license plate, and her video documenting the incident was viewed more than 22 million times on TikTok.

    When the AirTag was first released, the tracker would emit a beeping noise if away from the owner for longer than three days. Apple has since shortened the time to 24 hours or less. Despite the update, you might not want to rely only on sound to detect AirTags. Numerous videos on YouTube offer DIY instructions to disable the speaker, and noiseless versions of the trackers were even listed for a short time on Etsy.

    What if I Find One?

    The best way to disable an AirTag is to remove the battery. To do this, flip the AirTag so the metallic side with an Apple logo is facing you. Press down on the logo and turn counterclockwise. Now you will be able to remove the cover and pop out that battery.

    Apple’s support page for the AirTag suggests reaching out to the police if you believe you are in a dangerous situation. “If you feel your safety is at risk, contact your local law enforcement, who can work with Apple to request information related to the item,” the support page reads. “You might need to provide the AirTag, set of AirPods, Find My network accessory, and the device’s serial number.” One way to figure out the serial number is to hold the top of an iPhone or other near-field-communication-enabled smartphone to the white side of an AirTag. A website with the serial number will pop up.

    This page may also include a partial phone number from the person who owns the tracking device. If you feel hesitant about scanning the AirTag or do not have the ability, a serial number is printed on the device beneath the battery.

    Who Does This Impact?

In the viral stories shared online and in police reports, women are often the victims of AirTag stalking, but when WIRED spoke to Galperin in 2022, she cautioned against framing unwanted tracking as solely an issue for women. “I have been working with victims of tech-enabled abuse for many years,” she says. “About two-thirds of the survivors that come to me are women. But a third of them are men. I suspect that number would be higher if there wasn’t such a stigma around being an abuse victim or survivor.”

    She emphasized how men, women, and nonbinary people can all be victims of abuse, as well as perpetrators. “When we paint it all with this really broad brush, we make it really hard for victims who don’t fit that mold to come forward,” says Galperin. Instances of tech-enabled abuse don’t follow simplistic narratives and can impact anyone.

    For more resources, you can visit the website for the National Domestic Violence Hotline. Contact the hotline by calling 1-800-799-7233 or texting “START” to 88788.

    December 2, 2024: This article has been updated to reflect recent changes to how iOS, Android, and AirTags operate.




  • Malicious Ads in Search Results Are Driving New Generations of Scams

    Researchers regularly see malicious ads in search results representing themselves as coming from legitimate businesses and organizations. Whether it’s a regional municipality, a utility like a power company, or a big business, people will use search engines simply to pull up the URL of an organization. And if the first results or the most convenient results to click on are ads, scammers have the opportunity to buy this real estate.

“The volume of these things is immense,” says Sean Gallagher, a senior threat researcher at Sophos. “Search engines like Google will say they check the content of ads to ensure they’re safe, but the thing is that attackers are using ad delivery networks and can redirect the URL after the ad is paid for.”
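One coarse way defenders approximate a check for this bait-and-switch is to compare the domain an ad displays with the domain a click ultimately lands on after all redirects. The sketch below illustrates that idea only; the function names are hypothetical and real scanners resolve registrable domains against the Public Suffix List rather than the crude two-label approximation used here:

```python
from urllib.parse import urlparse

def registrable_domain(url: str) -> str:
    """Crude eTLD+1 approximation: the last two labels of the hostname.
    (Production scanners use the Public Suffix List instead.)"""
    host = urlparse(url).hostname or ""
    parts = host.lower().split(".")
    return ".".join(parts[-2:]) if len(parts) >= 2 else host

def looks_cloaked(displayed_url: str, final_url: str) -> bool:
    """Flag an ad whose displayed domain differs from where the click lands."""
    return registrable_domain(displayed_url) != registrable_domain(final_url)

# An ad showing "walmart.com" that lands on a lookalike domain gets flagged;
# a redirect within the same registrable domain does not.
print(looks_cloaked("https://www.walmart.com/deals",
                    "https://wallmart-support.example/login"))
```

The catch Gallagher describes is exactly why a one-time check at ad-approval time isn’t enough: the landing URL can be changed after the ad is vetted, so any such comparison has to be repeated over the ad’s lifetime.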

    Google is clearly aware that malicious ad activity is growing and evolving. The company specifically addresses misleading and fraudulent ad activity in its policies, including a “misrepresentation policy,” and says that it takes numerous approaches to vetting ads and detecting malvertising. Attackers have continued to develop circumvention methods, though, to avoid having their ads flagged or removed. In 2023, Google blocked or removed about 5.5 billion ads and suspended more than 12.7 million advertiser accounts.

    The company has also taken steps over the years to label ads clearly and delineate them in the search results layout. Still, any search engine that’s supported by ads ultimately has the two types of content side by side, especially on mobile where users have limited screen space.

    “We expressly prohibit ads that attempt to circumvent our enforcement by disguising the advertiser’s identity to deceive users and distribute malware,” Google spokesperson Nate Funkhouser told WIRED in a statement. “When we identify an ad that violates this policy, we remove it and suspend the associated advertiser account as quickly as possible.”

    Sophos’s Gallagher points out that criminals can often get the most for their money when buying ads for more unique searches, where they can dominate the ad space and get to the top of the results more organically. But both Sophos and Malwarebytes researchers also regularly see malicious ads running against frequent searches like those for Google, Walmart, Disney+, Slack, Lowe’s, and Apple. Segura even says that Malwarebytes itself has to invest heavily in buying search engine ads just to keep malvertising at bay for the company’s brand.

    “We have to defend our brand so much,” he says. “People take advantage of that.”


  • The Pressure Is on for Big Tech to Regulate the Broken Digital Advertising Industry


    Digital advertising is a whopping $700 billion (£530 billion) industry that remains largely unregulated, with few laws in place to protect brands and consumers. Companies and brands advertising products often don’t know which websites display their ads. I run Check My Ads, an ad tech watchdog, and we constantly deal with situations where advertisers and citizens have been the victims of lies, scams, and manipulations. We have removed ads from websites with serious disinformation about Covid-19, false election content, and even AI-generated obituaries.

    Currently, if a brand wants to advertise a product, Google facilitates the ad placement based on desired ad reach and metrics. It may technically follow through on the agreement by delivering views and clicks, but does not provide transparent data about how and where the ad views came from. It is possible that the ad was shown on unsavory websites diametrically opposed to the brand’s values. For example, in 2024, Google was found to be profiting by placing product ads on websites that promoted hardcore pornography, disinformation, and even hate speech, against the brands’ wishes.

    In 2025, however, this scandal will end, as we start to enact the first regulations targeting the digital advertising industry. Around the world, lawmakers in Brussels, Ottawa, Washington, and London are already in the early stages of developing regulation that will ensure brands have access to the legal support to ask questions, check ad data, and receive automatic refunds when they find that their digital campaigns have been subject to fraud or safety violations.

    In Canada, for example, Parliament is deliberating the enactment of the Online Harms Act, a law to incentivize the removal of sexual content involving minors. The idea behind this law is that if the content is illegal, then making money off it should be illegal, too.

    In California and New York, advocates are also proposing know-your-customer legislation to track the global financial trade in advertising. This is significant because these two states power the global ad tech industry. New York City has more ad tech companies than any other city in the world. Transparency laws enacted in California, meanwhile, would affect the international advertising business of Google, by far the biggest ad tech company in the world.

    Beyond brand and consumer issues, the unregulated nature of the digital advertising landscape is a direct threat to democracy. In the US, for instance, presidential campaign spending remains effectively unregulated. It is estimated that the presidential campaigns will spend up to $2 billion (£1.5 billion) on digital advertising in 2024. With current laws, we will likely have no external data about their refunds or rates.

    In 2025, the legislative pressure is on for big tech companies to regulate ad technology.


  • Emergency Vehicle Lights Can Screw Up a Car’s Automated Driving System


    Tesla, which disbanded its public relations team in 2021, did not respond to WIRED’s request for comment. The camera systems the researchers used in their tests were manufactured by HP, Pelsee, Azdome, Imagebon, and Rexing; none of those companies responded to WIRED’s requests for comment.

    Although the NHTSA acknowledges issues in “some advanced driver assistance systems,” the researchers are clear: They’re not sure what this observed emergency light effect has to do with Tesla’s Autopilot troubles. “I do not claim that I know why Teslas crash into emergency vehicles,” says Nassi. “I do not know even if this is still a vulnerability.”

    The researchers’ experiments were also concerned solely with image-based object detection. Many automakers use other sensors, including radar and lidar, to help detect obstacles in the road. A smaller crop of tech developers—Tesla among them—argue that image-based systems augmented with sophisticated artificial intelligence training can enable not only driver assistance systems, but also completely autonomous vehicles. Last month, Tesla CEO Elon Musk said the automaker’s vision-based system would enable self-driving cars next year.

    Indeed, how a system reacts to flashing lights depends on how individual automakers design their automated driving systems. Some may choose to “tune” their technology to react to things it’s not entirely certain are actually obstacles. Taken to the extreme, that choice could lead to “false positives,” where a car might hard brake, for example, in response to a toddler-shaped cardboard box. Others may tune their tech to react only when it’s very confident that what it’s seeing is an obstacle. At the other extreme, that choice could lead to the car failing to brake before a collision because it never recognizes the obstacle as another vehicle at all.
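    The tuning tradeoff described above boils down to a confidence threshold on the detector’s output. The sketch below is purely illustrative: the labels, confidence values, and function name are hypothetical, not from the researchers’ work or any automaker’s real pipeline.

    ```python
    # An object detector emits (label, confidence) pairs; the driving system
    # acts only on detections above a tunable threshold. A low threshold
    # catches more real obstacles but admits false positives (phantom braking);
    # a high threshold does the reverse (missed obstacles).

    detections = [
        ("vehicle", 0.92),     # clearly another car
        ("vehicle", 0.41),     # ambiguous: maybe a car, maybe glare
        ("pedestrian", 0.15),  # almost certainly sensor noise
    ]

    def obstacles_to_brake_for(detections, threshold):
        """Keep only detections the system is confident enough to act on."""
        return [label for label, conf in detections if conf >= threshold]

    # Cautious tuning reacts to the ambiguous detection; conservative tuning
    # ignores it and risks missing a real vehicle.
    print(obstacles_to_brake_for(detections, threshold=0.3))  # ['vehicle', 'vehicle']
    print(obstacles_to_brake_for(detections, threshold=0.8))  # ['vehicle']
    ```

    A flashing emergency light that makes the detector’s confidence oscillate around whatever threshold the automaker chose would make this decision flip frame to frame, which is the kind of instability the researchers observed.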

    The BGU and Fujitsu researchers did come up with a software fix to the emergency flasher issue. Called “Caracetamol”—a portmanteau of “car” and the painkiller “Paracetamol”—it’s designed to avoid the “seizure” issue by being specifically trained to identify vehicles with emergency flashing lights. The researchers say it improves object detectors’ accuracy.

    Earlence Fernandes, an assistant professor of computer science and engineering at the University of California, San Diego, who was not involved in the research, says the work appears “sound.” “Just like a human can get temporarily blinded by emergency flashers, a camera operating inside an advanced driver assistance system can get blinded temporarily,” he says.

    For researcher Bryan Reimer, who studies vehicle automation and safety at the MIT AgeLab, the paper points to larger questions about the limitations of AI-based driving systems. Automakers need “repeatable, robust validation” to uncover blind spots like susceptibility to emergency lights, he says. He worries some automakers are “moving technology faster than they can test it.”
