Tag: security

  • Security News This Week: A Creative Trick Makes ChatGPT Spit Out Bomb-Making Instructions

    After Apple’s product launch event this week, WIRED did a deep dive on the company’s new secure server environment, known as Private Cloud Compute, which attempts to replicate in the cloud the security and privacy of processing data locally on users’ individual devices. The goal is to minimize possible exposure of data processed for Apple Intelligence, the company’s new AI platform. In addition to hearing about PCC from Apple’s senior vice president of software engineering, Craig Federighi, WIRED readers also received a first look at content generated by Apple Intelligence’s “Image Playground” feature as part of crucial updates on the recent birthday of Federighi’s dog Bailey.

    Turning to privacy protection of a very different kind in another new AI service, WIRED looked at how users of the social media platform X can keep their data from being slurped up by the “unhinged” generative AI tool from xAI known as Grok AI. And in other news about Apple products, researchers developed a technique for using eye tracking to discern passwords and PINs people typed using 3D Apple Vision Pro avatars—a sort of keylogger for mixed reality. (The flaw that made the technique possible has since been patched.)

    On the national security front, the US this week indicted two people accused of spreading propaganda meant to inspire “lone wolf” terrorist attacks. The case, against alleged members of the far-right network known as the Terrorgram Collective, marks a turn in how the US cracks down on neofascist extremists.

    And there’s more. Each week, we round up the privacy and security news we didn’t cover in depth ourselves. Click the headlines to read the full stories. And stay safe out there.

    OpenAI’s generative AI platform ChatGPT is designed with strict guardrails that keep the service from offering advice on dangerous and illegal topics like tips on laundering money or a how-to guide for disposing of a body. But an artist and hacker who goes by “Amadon” figured out a way to trick or “jailbreak” the chatbot by telling it to “play a game” and then guiding it into a science-fiction fantasy story in which the system’s restrictions didn’t apply. Amadon then got ChatGPT to spit out instructions for making dangerous fertilizer bombs. An OpenAI spokesperson did not respond to TechCrunch’s inquiries about the research.

    “It’s about weaving narratives and crafting contexts that play within the system’s rules, pushing boundaries without crossing them. The goal isn’t to hack in a conventional sense but to engage in a strategic dance with the AI, figuring out how to get the right response by understanding how it ‘thinks,’” Amadon told TechCrunch. “The sci-fi scenario takes the AI out of a context where it’s looking for censored content … There really is no limit to what you can ask it once you get around the guardrails.”

    In the fervent investigations following the September 11, 2001, terrorist attacks in the United States, the FBI and CIA both concluded that it was coincidental that a Saudi Arabian official had helped two of the hijackers in California, and that there had been no high-level Saudi involvement in the attacks. The 9/11 Commission incorporated that determination, but subsequent findings indicated that the conclusions might not be sound. With the 23rd anniversary of the attacks this week, ProPublica published new evidence “suggest[ing] more strongly than ever that at least two Saudi officials deliberately assisted the first Qaida hijackers when they arrived in the United States in January 2000.”

    The evidence comes primarily from a federal lawsuit against the Saudi government brought by survivors of the 9/11 attacks and relatives of victims. A judge in New York will soon make a decision in that case about a Saudi motion to dismiss. But evidence that has already emerged in the case, including videos and documents such as telephone records, points to possible connections between the Saudi government and the hijackers.

    “Why is this information coming out now?” said retired FBI agent Daniel Gonzalez, who pursued the Saudi connections for almost 15 years. “We should have had all of this three or four weeks after 9/11.”

    The United Kingdom’s National Crime Agency said on Thursday that it arrested a teenager on September 5 as part of the investigation into a cyberattack on September 1 on the London transportation agency Transport for London (TfL). The suspect is a 17-year-old male and was not named. He was “detained on suspicion of Computer Misuse Act offenses” and has since been released on bail. In a statement on Thursday, TfL wrote, “Our investigations have identified that certain customer data has been accessed. This includes some customer names and contact details, including email addresses and home addresses where provided.” Some data related to the London transit payment cards known as Oyster cards may have been accessed for about 5,000 customers, including bank account numbers. TfL is reportedly requiring roughly 30,000 users to appear in person to reset their account credentials.

    In a decision on Tuesday, Poland’s Constitutional Tribunal blocked an effort by Poland’s lower house of parliament, known as the Sejm, to launch an investigation into the country’s apparent use of the notorious hacking tool known as Pegasus while the Law and Justice (PiS) party was in power from 2015 to 2023. Three judges who had been appointed by PiS were responsible for blocking the inquiry, and the decision cannot be appealed. It is controversial, with some, like Polish parliament member Magdalena Sroka, saying that it was “dictated by the fear of liability.”

  • Apple Vision Pro’s Eye Tracking Exposed What People Type

    The GAZEploit attack consists of two parts, says Zhan, one of the lead researchers. First, the researchers created a way to identify when someone wearing the Vision Pro is typing by analyzing the 3D avatar they are sharing. For this, they trained a recurrent neural network, a type of deep learning model, with recordings of 30 people’s avatars while they completed a variety of typing tasks.

    When someone is typing using the Vision Pro, their gaze fixates on the key they are likely to press, the researchers say, before quickly moving to the next key. “When we are typing our gaze will show some regular patterns,” Zhan says.

    Wang says these patterns are more common during typing than if someone is browsing a website or watching a video while wearing the headset. “During tasks like gaze typing, the frequency of your eye blinking decreases because you are more focused,” Wang says. In short: Looking at a QWERTY keyboard and moving between the letters is a pretty distinct behavior.

    The second part of the research, Zhan explains, uses geometric calculations to work out where someone has positioned the keyboard and the size they’ve made it. “The only requirement is that as long as we get enough gaze information that can accurately recover the keyboard, then all following keystrokes can be detected.”

    Combining these two elements, they were able to predict the keys someone was likely to be typing. In a series of lab tests, they had no knowledge of the victim’s typing habits or speed, or of where the keyboard was placed. Even so, within a maximum of five guesses, the researchers could predict the correct letters typed with 92.1 percent accuracy for messages, 77 percent for passwords, 73 percent for PINs, and 86.1 percent for emails, URLs, and webpages. (On the first guess, the letters would be right between 35 and 59 percent of the time, depending on what kind of information they were trying to work out.) Duplicate letters and typos add extra challenges.
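
    The two-stage pipeline described above, first spotting typing episodes and then snapping gaze estimates onto the recovered keyboard, can be illustrated with a toy nearest-key decoder. This is a minimal sketch with an assumed, simplified QWERTY geometry, not the researchers' code:

    ```python
    import math

    # Hypothetical key centers on a recovered virtual keyboard plane
    # (arbitrary units). The real attack first estimates the keyboard's
    # position and size geometrically; here the layout is simply assumed.
    KEY_WIDTH = 1.0
    QWERTY_ROWS = ["qwertyuiop", "asdfghjkl", "zxcvbnm"]

    def key_centers():
        centers = {}
        for row_idx, row in enumerate(QWERTY_ROWS):
            for col_idx, ch in enumerate(row):
                # Each row is offset half a key, as on a physical QWERTY layout.
                x = col_idx * KEY_WIDTH + row_idx * (KEY_WIDTH / 2)
                y = row_idx * KEY_WIDTH
                centers[ch] = (x, y)
        return centers

    def decode_fixations(fixations, centers=None):
        """Map each gaze fixation point to the nearest key center."""
        centers = centers or key_centers()
        decoded = []
        for fx, fy in fixations:
            decoded.append(min(centers, key=lambda ch: math.dist(centers[ch], (fx, fy))))
        return "".join(decoded)
    ```

    Noisy fixations still snap to the right keys as long as the gaze estimate lands closer to the intended key's center than to any neighbor, which is why duplicate letters (no movement between fixations) are the harder case.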

    “It’s very powerful to know where someone is looking,” says Alexandra Papoutsaki, an associate professor of computer science at Pomona College who has studied eye tracking for years and reviewed the GAZEploit research for WIRED.

    Papoutsaki says the work stands out as it only relies on the video feed of someone’s Persona, making it a more “realistic” space for an attack to happen when compared to a hacker getting hands-on with someone’s headset and trying to access eye tracking data. “The fact that now someone, just by streaming their Persona, could expose potentially what they’re doing is where the vulnerability becomes a lot more critical,” Papoutsaki says.

    While the attack was created in lab settings and hasn’t been used against anyone using Personas in the real world, the researchers say there are ways hackers could have abused the data leakage. They say, theoretically at least, a criminal could share a file with a victim during a Zoom call, resulting in them logging into, say, a Google or Microsoft account. The attacker could then record the Persona while their target logs in and use the attack method to recover their password and access their account.

    Quick Fixes

    The GAZEploit researchers reported their findings to Apple in April and subsequently sent the company their proof-of-concept code so the attack could be replicated. Apple fixed the flaw in a Vision Pro software update at the end of July, which stops the sharing of a Persona if someone is using the virtual keyboard.

    An Apple spokesperson confirmed the company fixed the vulnerability, saying it was addressed in VisionOS 1.3. The company’s software update notes do not mention the fix. The researchers say Apple assigned CVE-2024-40865 for the vulnerability and recommend people download the latest software updates.

  • Apple Intelligence Promises Better AI Privacy. Here’s How It Actually Works

    Apple is making every production PCC server build publicly available for inspection so people unaffiliated with Apple can verify that PCC is doing (and not doing) what the company claims, and that everything is implemented correctly. All of the PCC server images are recorded in a cryptographic attestation log, essentially an indelible record of signed claims, and each entry includes a URL for where to download that individual build. PCC is designed so Apple can’t put a server into production without logging it. And in addition to offering transparency, the system works as a crucial enforcement mechanism to prevent bad actors from setting up rogue PCC nodes and diverting traffic. If a server build hasn’t been logged, iPhones will not send Apple Intelligence queries or data to it.
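
    The enforcement idea, a client that refuses to talk to any server whose build measurement is absent from the public log, can be sketched in a few lines. This is a toy illustration under stated assumptions: the names and structure here are invented, and Apple's actual attestation protocol is far more involved than a hash-set lookup.

    ```python
    import hashlib

    def build_measurement(image_bytes: bytes) -> str:
        """Hash a server build image into a fixed measurement string."""
        return hashlib.sha256(image_bytes).hexdigest()

    class AttestationLog:
        """Stand-in for an append-only, publicly auditable log of
        released server-build measurements (hypothetical structure)."""
        def __init__(self):
            self._entries = set()

        def publish(self, image_bytes: bytes) -> str:
            measurement = build_measurement(image_bytes)
            self._entries.add(measurement)
            return measurement

        def contains(self, measurement: str) -> bool:
            return measurement in self._entries

    def should_send_query(log: AttestationLog, attested_measurement: str) -> bool:
        """Client-side gate: only builds present in the log receive traffic.
        An unlogged (rogue) node is refused, which is the enforcement
        mechanism described above."""
        return log.contains(attested_measurement)
    ```

    The point of the design is that publication doubles as enforcement: a rogue node would have to appear in the same public log that researchers can inspect.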

    PCC is part of Apple’s bug bounty program, and vulnerabilities or misconfigurations researchers find could be eligible for cash rewards. Apple says, though, that since the iOS 18.1 beta became available in late July, no one has found any flaws in PCC. The company recognizes that it has so far made the tools to evaluate PCC available only to a select group of researchers.

    Multiple security researchers and cryptographers tell WIRED that Private Cloud Compute looks promising, but they haven’t spent significant time digging into it yet.

    “Building Apple silicon servers in the data center when we didn’t have any before, building a custom OS to run in the data center was huge,” Federighi says. He adds that “creating the trust model where your device will refuse to issue a request to a server unless the signature of all the software the server is running has been published to a transparency log was certainly one of the most unique elements of the solution—and totally critical to the trust model.”

    To questions about Apple’s partnership with OpenAI and integration of ChatGPT, the company emphasizes that partnerships are not covered by PCC and operate separately. ChatGPT and other integrations are turned off by default, and users must manually enable them. Then, if Apple Intelligence determines that a request would be better fulfilled by ChatGPT or another partner platform, it notifies the user each time and asks whether to proceed. Additionally, people can use these integrations while logged into their account for a partner service like ChatGPT or can use them through Apple without logging in separately. Apple said in June that another integration with Google’s Gemini is also in the works.

    Apple said this week that beyond launching in United States English, Apple Intelligence is coming to Australia, Canada, New Zealand, South Africa, and the United Kingdom in December. The company also said that additional language support—including for Chinese, French, Japanese, and Spanish—will drop next year. Whether that means that Apple Intelligence will be permitted under the European Union’s AI Act and whether Apple will be able to offer PCC in its current form in China is another question.

    “Our goal is to bring ideally everything we can to provide the best capabilities to our customers everywhere we can,” Federighi says. “But we do have to comply with regulations, and there is uncertainty in certain environments we’re trying to sort out so we can bring these features to our customers as soon as possible. So, we’re trying.”

    He adds that as the company expands its ability to do more Apple Intelligence computation on-device, it may be able to use this as a workaround in some markets.

    Those who do get access to Apple Intelligence will have the ability to do far more than they could with past versions of iOS, from writing tools to photo analysis. Federighi says that his family celebrated their dog’s recent birthday with an Apple Intelligence–generated GenMoji (viewed and confirmed to be very cute by WIRED). But while Apple’s AI is meant to be as helpful and invisible as possible, the stakes are incredibly high for the security of the infrastructure underpinning it. So how are things going so far? Federighi sums it up without hesitation: “The rollout of Private Cloud Compute has been delightfully uneventful.”

  • Hackers Threaten to Leak Planned Parenthood Data

    Even those of you who do everything you can to secure those secrets can find yourself vulnerable—especially if you’re using a YubiKey 5 authentication token. The multifactor authentication devices can be cloned thanks to a cryptographic flaw that can’t be patched. The company has rolled out some mitigation measures—and the attack itself is relatively difficult to pull off. But it may be time to invest in a new dongle.

    That’s not all, folks. Each week, we round up the privacy and security news we didn’t cover in depth ourselves. Click the headlines to read the full stories. And stay safe out there.

    At the end of August, cybercriminals from the ransomware group RansomHub appear to have hacked into the systems of Planned Parenthood’s Montana branch. The organization this week confirmed it had suffered a “cybersecurity incident” on August 28 and said its staff immediately took parts of its network offline, reporting the incident to law enforcement.

    Days after the incident took place, RansomHub claimed to be behind the attack, listing Planned Parenthood on its leak website. The criminal group said it would publish 93 GB of data. It is unclear what, if anything, the ransomware group has obtained, but Planned Parenthood clinics can hold a huge array of highly sensitive data about patients, including information on abortion appointments. (Around 400,000 Planned Parenthood patients in Los Angeles were impacted following a similar ransomware incident in 2021.)

    In recent months, RansomHub has emerged as one of the most active ransomware-as-a-service groups, following the law enforcement disruption of LockBit. According to an FBI and Cybersecurity and Infrastructure Security Agency alert at the end of August, the group is “efficient and successful” and has stolen data from at least 210 victims since it formed in February. “The affiliates leverage a double-extortion model by encrypting systems and exfiltrating data to extort victims,” the alert said.

    The Nigeria-based scammers known as the Yahoo Boys run almost every scam in the playbook, from romance scams to pretending to be FBI agents. Yet there’s little more devious than the increase in sextortion cases linked to the West African scammers. This week, Nigerian brothers Samuel Ogoshi and Samson Ogoshi were sentenced to more than 17 years in US prison for running sextortion scams, following their extradition earlier this year. It is the first time Nigerian scammers have been prosecuted for sextortion in the US, the BBC reported.

    The Ogoshi brothers, who pleaded guilty in April, have been linked to the death of 17-year-old Jordan DeMay, who took his life six hours after he started talking to the scammers, who posed as a girl, on Instagram. The teenager had been duped into sending the brothers explicit images, and after he had done so, they threatened to post the images online unless he paid them hundreds of dollars. US prosecutors said the brothers sexually exploited and extorted more than 100 victims, with at least 11 of them being minors. There has been a huge spike in sextortion cases in recent years.

    In June, the US Commerce Department banned the sale of Kaspersky’s antivirus tools over national security concerns about its links to the Russian government. (Kaspersky has, for years, denied such connections.) The firm later fired its US workers and said it was closing its US business. This week, cybersecurity company Pango Group announced it is purchasing Kaspersky Lab’s US antivirus customers, according to Axios. This equates to around 1 million customers, who will be transitioned to Pango’s antivirus software Ultra AV. Ahead of the Kaspersky deal, parent company Aura also announced it was spinning out Pango Group into its own business. Pango’s president said customers would not need to take any action and that the deal would allow subscribers to continue to receive updates after September 29, when Kaspersky updates will stop.

    For years, the EU has been trying to introduce new child protection laws that would require private chats to be scanned for child sexual abuse material—something that would potentially undermine encrypted messaging apps that provide everyday privacy to billions of people. The plans have been highly controversial and were shelved earlier this year. However, the proposed law, which has been dubbed “chat control,” reappeared in legislators’ in-trays this week. The Council of the EU, which is currently chaired by Hungary, wants to pass legislation by October, but reports say strong resistance to the plans still remains.

  • Therapy Sessions Exposed by Mental Health Care Firm’s Unsecured Database

    Thousands of people’s highly sensitive health details, including audio and video of therapy sessions, were openly accessible on the internet, new research has revealed. The cache of information, associated with a US health care firm, included more than 120,000 files and more than 1.7 million activity logs.

    At the end of August, security researcher Jeremiah Fowler discovered the exposed trove of information in an unsecured database linked to virtual medical provider Confidant Health. The company, which operates across five states including Connecticut, Florida, and Texas, helps provide alcohol- and drug-addiction recovery, alongside mental health treatments and other services.

    Within the 5.3 terabytes of exposed data were extremely personal details about patients that go beyond personal therapy sessions. Files seen by Fowler included multiple-page reports of people’s psychiatry intake notes and details of their medical histories. “At the bottom of some of the documents it said ‘confidential health data,’” Fowler says.

    For instance, one seven-page psychiatry intake file, which appeared to be based on an hourlong session with a patient, details issues with alcohol and other substances, including how the patient claimed to have taken “small amounts” of narcotics from their grandparent’s hospice supply before the family member passed away. In another document, a mother describes the “contentious” relationship between her husband and son, including that while her son was using stimulants he accused her partner of sexual abuse.

    The exposed health documents include some medical notes on people’s appearance, mood, memory, their medications, and overall mental status. One spreadsheet seen by the researcher appears to list Confidant Health members, the number of appointments they’ve had, the types of appointment, and more.

    “There’s some heartbreaking, really painful family trauma, personal trauma,” Fowler says, adding that some of the files were audio and videos of patient sessions. “It’s almost like having your deepest darkest secrets that you’ve told your diary revealed, and it’s things that you never want to get out.”

    Alongside the medical files in the exposed database were administration and verification documents, including copies of driver’s licenses, ID cards, and insurance cards, Fowler says. The logs also contained indications that some data is collected by chatbots or artificial intelligence, making references to prompts and AI responses to questions.

    Confidant Health quickly shut off access to the exposed database after Fowler contacted the company, he says. The researcher, who alerts companies to exposed data and does not download any of it, says a proportion of the 120,000 files that were exposed had some form of password protection in place. Fowler says he reviewed around 1,000 files to verify the exposure and determine the source of the data so he could alert the company. He says it is unusual that an exposed database would include both locked and unlocked files.

    In a statement to WIRED, Confidant Health cofounder Jon Read says the company takes security concerns seriously and “take[s] issue with the sensational nature” of the findings. Read says once the company had been notified of the “improper configuration,” access to the exposed files was “fixed in less than an hour.”

  • YubiKeys Are a Security Gold Standard—but They Can Be Cloned

    The YubiKey 5, the most widely used hardware token for two-factor authentication based on the FIDO standard, contains a cryptographic flaw that makes the finger-sized device vulnerable to cloning when an attacker gains temporary physical access to it, researchers said Tuesday.

    The cryptographic flaw, known as a side channel, resides in a small microcontroller, Infineon’s SLE78, that is used in a large number of other authentication devices, including smartcards used in banking, electronic passports, and secure-area access. While the researchers have confirmed all YubiKey 5 series models can be cloned, they haven’t tested other devices built on the SLE78 or on its successor microcontrollers, the Infineon Optiga Trust M and the Infineon Optiga TPM. The researchers suspect that any device using any of these three microcontrollers and the Infineon cryptographic library contains the same vulnerability.

    Patching Not Possible

    YubiKey maker Yubico issued an advisory in coordination with a detailed disclosure report from NinjaLab, the security firm that reverse engineered the YubiKey 5 series and devised the cloning attack. All YubiKeys running firmware prior to version 5.7—which was released in May and replaces the Infineon cryptolibrary with a custom one—are vulnerable. Updating key firmware on the YubiKey isn’t possible. That leaves all affected YubiKeys permanently vulnerable.

    “An attacker could exploit this issue as part of a sophisticated and targeted attack to recover affected private keys,” the advisory confirmed. “The attacker would need physical possession of the YubiKey, Security Key, or YubiHSM; knowledge of the accounts they want to target; and specialized equipment to perform the necessary attack. Depending on the use case, the attacker may also require additional knowledge, including username, PIN, account password, or authentication key.”

    Side channels are clues left in physical manifestations, such as electromagnetic emanations, data caches, or the time required to complete a task, that leak cryptographic secrets. In this case, the side channel is the amount of time taken during a mathematical calculation known as a modular inversion. The Infineon cryptolibrary failed to implement a common side-channel defense known as constant time in the modular inversion operations it performs for the Elliptic Curve Digital Signature Algorithm (ECDSA). Constant time ensures that time-sensitive cryptographic operations take a uniform amount of time to execute, rather than one that varies depending on the specific keys.

    More precisely, the side channel is located in the Infineon implementation of the Extended Euclidean Algorithm, a method for, among other things, computing the modular inverse. By using an oscilloscope to measure the electromagnetic radiation while the token is authenticating itself, the researchers can detect tiny execution time differences that reveal a token’s ephemeral ECDSA key, also known as a nonce. Further analysis allows the researchers to extract the secret ECDSA key that underpins the entire security of the token.
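
    The leak hinges on the Extended Euclidean Algorithm taking a data-dependent number of steps. The sketch below is illustrative only (real implementations operate on fixed-width big integers in hardware): it contrasts a variable-time EEA inverse, whose loop count depends on the secret operand, with a Fermat-style inverse whose operation count depends only on the public modulus.

    ```python
    def egcd_inverse(k, p):
        """Modular inverse of k mod p via the Extended Euclidean Algorithm.
        The number of loop iterations depends on k, so execution time leaks
        information about the secret operand -- the class of flaw described
        above. Returns (inverse, iteration_count)."""
        old_r, r = k % p, p
        old_s, s = 1, 0
        steps = 0
        while r != 0:
            q = old_r // r
            old_r, r = r, old_r - q * r
            old_s, s = s, old_s - q * s
            steps += 1
        return old_s % p, steps

    def fermat_inverse(k, p):
        """Modular inverse via Fermat's little theorem: k^(p-2) mod p.
        With a fixed-window exponentiation, the work depends only on p,
        not on k -- one common way to get constant-time behavior."""
        return pow(k, p - 2, p)
    ```

    In the attack, the varying time of the inversion of the ephemeral nonce shows up as tiny differences in electromagnetic emissions, which is exactly the signal the oscilloscope measurements recover.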

    In Tuesday’s report, NinjaLab cofounder Thomas Roche wrote:

    In the present work, NinjaLab unveils a new side-channel vulnerability in the ECDSA implementation of Infineon on any security microcontroller family of the manufacturer. This vulnerability lies in the ECDSA ephemeral key (or nonce) modular inversion, and, more precisely, in the Infineon implementation of the Extended Euclidean Algorithm (EEA for short). To our knowledge, this is the first time an implementation of the EEA is shown to be vulnerable to side-channel analysis (contrarily to the EEA binary version). The exploitation of this vulnerability is demonstrated through realistic experiments and we show that an adversary only needs to have access to the device for a few minutes. The offline phase took us about 24 hours; with more engineering work in the attack development, it would take less than one hour.

    After a long phase of understanding Infineon implementation through side-channel analysis on a Feitian open JavaCard smartcard, the attack is tested on a YubiKey 5Ci, a FIDO hardware token from Yubico. All YubiKey 5 Series (before the firmware update 5.7 of May 6th, 2024) are affected by the attack. In fact all products relying on the ECDSA of Infineon cryptographic library running on an Infineon security microcontroller are affected by the attack. We estimate that the vulnerability exists for more than 14 years in Infineon top secure chips. These chips and the vulnerable part of the cryptographic library went through about 80 CC certification evaluations of level AVA VAN 4 (for TPMs) or AVA VAN 5 (for the others) from 2010 to 2024 (and a bit less than 30 certificate maintenances).

  • We Hunted Hidden Police Signals at the DNC

    As thousands took to the streets during August’s Democratic National Convention in Chicago to protest Israel’s deadly assault on Gaza, a massive security operation was already underway. US Capitol Police, Secret Service, the Department of Homeland Security’s Homeland Security Investigations, sheriff’s deputies from nearby counties, and local officers from across the country had descended on Chicago and were all out in force, working to manage the crowds and ensure the event went off without any major disruptions.

    Amid the headlines and the largely peaceful protests, WIRED was looking for something less visible. We were investigating reports of cell site simulators (CSS), also known as IMSI catchers or Stingrays, the name of one of the technology’s earliest devices, developed by Harris Corporation. These controversial surveillance tools mimic cell towers to trick phones into connecting with them. Activists have long worried that the devices, which can capture sensitive data such as location, call metadata, and app traffic, might be used against political activists and protesters.

    Armed with a waist pack stuffed with two rooted Android phones and three Wi-Fi hot spots running CSS-detection software developed by the Electronic Frontier Foundation, a digital-rights nonprofit, we conducted a first-of-its-kind wireless survey of the signals around the DNC.

    WIRED attended protests across the city, events at the United Center (where the DNC took place), and social gatherings with lobbyists, political figures, and influencers. We spent time walking the perimeter along march routes and through planned protest sites before, during, and after these events.

    In the process we captured Bluetooth, Wi-Fi, and cellular signals. We then analyzed those signals looking for specific hardware identifiers and other suspicious signs that could indicate the presence of a cell-site simulator. Ultimately, we did not find any evidence that cell-site simulators were deployed at the DNC. Nevertheless, when taken together, the hundreds of thousands of data points we accumulated in Chicago reveal how the invisible signals from our devices can create vulnerabilities for activists, police, and everyone in between. Our investigation revealed signals from as many as 297,337 devices, including as many as 2,568 associated with a major police body camera manufacturer, five associated with a law enforcement drone maker, and a massive array of consumer electronics like cameras, hearing aids, internet-of-things devices, and headphones.
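
    Surveys like this typically attribute devices by matching the first three octets of each broadcast MAC address, the OUI, against the IEEE vendor registry. A minimal sketch of that tallying step follows; the prefixes and vendor names are hypothetical placeholders, not the identifiers WIRED actually matched:

    ```python
    from collections import Counter

    # Hypothetical OUI (first-three-octet) table. A real survey would load
    # the IEEE OUI registry, which maps assigned prefixes to manufacturers
    # such as body-camera or drone vendors.
    OUI_VENDORS = {
        "00:25:DF": "BodyCamCo (hypothetical)",
        "F4:5C:89": "DroneMaker (hypothetical)",
    }

    def classify_macs(macs):
        """Tally observed MAC addresses by vendor OUI prefix.
        Unrecognized prefixes are counted as 'unknown'."""
        counts = Counter()
        for mac in macs:
            prefix = mac.upper()[:8]  # "AA:BB:CC" is 8 characters
            counts[OUI_VENDORS.get(prefix, "unknown")] += 1
        return counts
    ```

    One caveat worth noting: many modern consumer devices randomize their MAC addresses, so counts produced this way are upper bounds, which is consistent with the "as many as" phrasing above.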

    WIRED often observed the same devices appearing in different locations, revealing their operators’ patterns of movement. For example, a Chevrolet Wi-Fi hotspot, initially located in the law-enforcement-only parking lot of the United Center, was later found parked on a side street next to a downtown Chicago demonstration. A Wi-Fi signal from a Skydio police drone that hovered above a massive anti-war protest was detected again the next day above the Israeli consulate. And Axon police body cameras with identical hardware identifiers were detected at various protests occurring days apart.

    “Surveillance technologies leave traces that can be discovered in real time,” says Cooper Quintin, a senior staff technologist at the EFF. Regardless of the specific technologies WIRED detected, Quintin notes that the ability to identify police technology in real time is significant. “Many of our devices are beaconing in ways that make it possible to track us through wireless signals,” he says. While this makes it possible to track police, Quintin says, “this makes protesters similarly vulnerable to the same types of attacks.”

    The signals we collected are a byproduct of our extremely networked world and demonstrate a pervasive and unsettling reality: Military, law enforcement, and consumer devices constantly emit signals that can be intercepted and tracked by anyone with the right tools. In the context of high-stakes scenarios such as election events, gatherings of high-profile politicians and other officials, and large-scale protests, the findings have implications for law enforcement and protesters alike.

  • Powerful Spyware Exploits Enable a New String of ‘Watering Hole’ Attacks


    In recent years, elite commercial spyware vendors like Intellexa and NSO Group have developed an array of powerful hacking tools that exploit rare and unpatched “zero-day” software vulnerabilities to compromise victim devices. And increasingly, governments around the world have emerged as the prime customers for these tools, compromising the smartphones of opposition leaders, journalists, activists, lawyers, and others. On Thursday, though, Google’s Threat Analysis Group is publishing findings about a series of recent hacking campaigns—seemingly carried out by Russia’s notorious APT29 Cozy Bear gang—that incorporate exploits very similar to ones developed by Intellexa and NSO Group into ongoing espionage activity.

    Between November 2023 and July 2024, the attackers compromised Mongolian government websites and used the access to conduct “watering hole” attacks, in which anyone with a vulnerable device who loads a compromised website gets hacked. The attackers set up the malicious infrastructure to use exploits that “were identical or strikingly similar to exploits previously used by commercial surveillance vendors Intellexa and NSO Group,” Google’s TAG wrote on Thursday. The researchers say they “assess with moderate confidence” that the campaigns were carried out by APT29.

    These spyware-esque hacking tools exploited vulnerabilities in Apple’s iOS and Google’s Android that had largely already been patched. Originally, they were deployed by the spyware vendors as unpatched, zero-day exploits, but in this iteration, the suspected Russian hackers were using them to target devices that hadn’t been updated with these fixes.

    “While we are uncertain how suspected APT29 actors acquired these exploits, our research underscores the extent to which exploits first developed by the commercial surveillance industry are proliferated to dangerous threat actors,” the TAG researchers wrote. “Moreover, watering hole attacks remain a threat where sophisticated exploits can be utilized to target those that visit sites regularly, including on mobile devices. Watering holes can still be an effective avenue for … mass targeting a population that might still run unpatched browsers.”

    The hackers may have purchased and adapted the spyware exploits, stolen them, or acquired them through a leak. It is also possible that they were simply inspired by the commercial exploits and reverse engineered them by examining infected victim devices.

    Between November 2023 and February 2024, the hackers used an iOS and Safari exploit that was technically identical to an offering that Intellexa had first debuted a couple of months earlier as an unpatched zero-day in September 2023. In July 2024, the hackers also used a Chrome exploit adapted from an NSO Group tool that first appeared in May 2024. This latter hacking tool was used in combination with an exploit that had strong similarities to one Intellexa debuted back in September 2021.

    When attackers exploit vulnerabilities that have already been patched, the activity is known as “n-day exploitation,” because the vulnerability can still be abused on unpatched devices as time passes. The suspected Russian hackers incorporated the commercial-spyware-adjacent tools, but constructed their overall campaigns—including malware delivery and activity on compromised devices—differently than the typical commercial spyware customer would. This indicates a level of fluency and technical proficiency characteristic of an established and well-resourced state-backed hacking group.

    “In each iteration of the watering hole campaigns, the attackers used exploits that were identical or strikingly similar to exploits from [commercial surveillance vendors], Intellexa and NSO Group,” TAG wrote. “We do not know how the attackers acquired these exploits. What is clear is that APT actors are using n-day exploits that were originally used as 0-days by CSVs.”


  • The US Navy Has Run Out of Pants


    The United States Defense Department has ideas about a dramatic strategy for defending Taiwan against a Chinese military offensive that would involve deploying an “unmanned hellscape” consisting of thousands of drones buzzing around the island nation. Meanwhile, the US National Institute of Standards and Technology announced a red-team hacking competition this week with the AI ethics nonprofit Humane Intelligence to find flaws and biases in generative AI systems.

    WIRED took a closer look at the Telegram channel and website known as Deep State that uses public data and secret intelligence to power its live-tracker map of Ukraine’s evolving front line. Protesters went to Citi Field in New York on Wednesday to raise awareness about the serious privacy risks of deploying facial recognition systems at sporting venues. The technology has increasingly been implemented at stadiums and arenas across the country with little oversight. And Amazon Web Services updated its instructions for how customers should implement authentication in its Application Load Balancer, after researchers found an implementation issue that they say could expose misconfigured web apps.

    But wait, there’s more! Each week, we round up the security and privacy news we didn’t cover in depth ourselves. Click the headlines to read the full stories. And stay safe out there.

    US Navy officials confirmed to Military.com this week that pants for the standard Navy Working Uniform (NWU) are out of stock at Navy Exchanges and are in perilously low supply across the sea service’s distribution channels. The Navy’s Exchange Service Command is “experiencing severe shortages of NWU trousers” both in stores and online, according to spokesperson Courtney Williams. Sailors have been noticing out-of-stock notifications online, which state that pants are “not available for purchase in any size.” Williams said that current stock around the world is at 13 percent and that the top priority right now is providing pants to new recruits at Recruit Training Command in Illinois, the Naval Academy Preparatory School in Rhode Island, and the officer training schools.

    The shortage seems to have resulted from issues with the Defense Logistics Agency’s pants pipeline. Military.com reports that signs currently inside Navy Exchanges say the shortage is “due to Defense Logistics Agency vendor issues.” Williams said the Command has “been in communication with DLA on a timeline for the uniform’s production and supply chain.”

    Mikia Muhammad, a spokesperson for the Defense Logistics Agency, told Military.com that the first pants restocks are scheduled for October, but these supplies will go to recruits and training programs. She said that Navy exchanges should expect “full support” beginning in January.

    A joint statement on Monday by the FBI, the Office of the Director of National Intelligence, and the Cybersecurity and Infrastructure Security Agency formally accused Iran of conducting a hack-and-leak operation against Donald Trump’s presidential campaign. Trump himself had accused Iran in a social media post on August 10, following a report from Microsoft on August 9 about Iranian hackers targeting US political campaigns. The Iranian government denies the accusation.

    “The [Intelligence Community] is confident that the Iranians have through social engineering and other efforts sought access to individuals with direct access to the presidential campaigns of both political parties,” the US agencies wrote. “Such activity, including thefts and disclosures, are intended to influence the US election process.”

    Politico reported on August 10 that Iran had breached the Trump campaign, and an entity calling itself “Robert” had contacted the publication offering alleged stolen documents. The same entity also contacted The New York Times and The Washington Post hawking similar documents.

    The popular flight-tracking service FlightAware said this week that a “configuration error” in its systems exposed personal customer data, including names, email addresses, and even some Social Security numbers. The company discovered the exposure on July 25 but said in a breach notification to the attorney general of California that the situation may date as far back as January 2021. The company is mandating that all affected users reset their account passwords.

    The company said in its public statement that the exposed data includes “user ID, password, and email address. Depending on the information you provided, the information may also have included your full name, billing address, shipping address, IP address, social media accounts, telephone numbers, year of birth, last four digits of your credit card number, information about aircraft owned, industry, title, pilot status (yes/no), and your account activity (such as flights viewed and comments posted).” It also said in its disclosure to California, “Additionally, our investigation has revealed that your Social Security Number may have been exposed.”

    Since European law enforcement agencies hacked the end-to-end encrypted phone company Sky in 2021, the communications they compromised have been used as evidence in numerous EU investigations and criminal cases. But a review of court records by 404 Media and Court Watch showed this week that US agencies have also been leaning on the trove of roughly half a billion chat messages. US law enforcement has used the data in multiple drug-trafficking prosecutions, particularly to pursue alleged smugglers who transport cocaine with commercial ships and speedboats.


  • An AWS Configuration Issue Could Expose Thousands of Web Apps


    A vulnerability related to Amazon Web Services’ traffic-routing service known as Application Load Balancer could have been exploited by an attacker to bypass access controls and compromise web applications, according to new research. The flaw stems from a customer implementation issue, meaning it isn’t caused by a software bug. Instead, the exposure was introduced by the way AWS users set up authentication with Application Load Balancer.

    Implementation issues are a crucial component of cloud security in the same way that the contents of an armored safe aren’t protected if the door is left ajar. Researchers from the security firm Miggo found that, depending on how Application Load Balancer authentication was set up, an attacker could potentially manipulate its handoff to a third-party corporate authentication service to access the target web application and view or exfiltrate data.

    The researchers say that, in a survey of publicly reachable web applications, they identified more than 15,000 that appear to have vulnerable configurations. AWS disputes this estimate, though, and says that “a small fraction of a percent of AWS customers have applications potentially misconfigured in this way, significantly fewer than the researchers’ estimate.” The company also says that it has contacted each customer on its shorter list to recommend a more secure implementation. AWS does not have access or visibility into its clients’ cloud environments, though, so any exact number is just an estimate.

    The Miggo researchers say they came across the problem while working with a client. This “was discovered in real-life production environments,” Miggo CEO Daniel Shechter says. “We observed a weird behavior in a customer system—the validation process seemed like it was only being done partially, like there was something missing. This really shows how deep the interdependencies go between the customer and the vendor.”

    To exploit the implementation issue, an attacker would set up an AWS account and an Application Load Balancer, and then sign their own authentication token as usual. Next, the attacker would make configuration changes so it would appear their target’s authentication service issued the token. Then the attacker would have AWS sign the token as if it had legitimately originated from the target’s system and use it to access the target application. The attack only works against a misconfigured application that is publicly accessible or that the attacker already has some access to, but it would then allow them to escalate their privileges in the system.

    Amazon Web Services says that the company does not view token forging as a vulnerability in Application Load Balancer because it is essentially an expected outcome of choosing to configure authentication in a particular way. But after the Miggo researchers first disclosed their findings to AWS at the beginning of April, the company made two documentation changes aimed at updating its implementation recommendations for Application Load Balancer authentication. One, from May 1, included guidance to validate tokens before Application Load Balancer will sign them. And on July 19, the company also added an explicit recommendation that users set their systems to receive traffic from only their own Application Load Balancer using a feature called “security groups.”
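    The validation AWS recommends hinges on the JWT header that Application Load Balancer attaches to authenticated requests, which includes a signer field naming the load balancer that signed it. A minimal sketch of that check, using a hypothetical load balancer ARN and omitting the cryptographic signature verification a real deployment also needs:

    ```python
    import base64
    import json

    # Hypothetical ARN of your own load balancer; a real app would read this
    # from configuration.
    EXPECTED_SIGNER = (
        "arn:aws:elasticloadbalancing:us-east-1:123456789012:"
        "loadbalancer/app/my-alb/abc123"
    )

    def signer_is_trusted(jwt_token: str, expected_signer: str) -> bool:
        """Check that the JWT's 'signer' header names our own load balancer
        before trusting any identity claims. Signature verification against
        the load balancer's public key is still required and omitted here."""
        header_b64 = jwt_token.split(".")[0]
        header_b64 += "=" * (-len(header_b64) % 4)  # restore stripped padding
        header = json.loads(base64.urlsafe_b64decode(header_b64))
        return header.get("signer") == expected_signer

    def make_header(signer: str) -> str:
        """Build a toy JWT with only the header populated, for illustration."""
        raw = json.dumps({"alg": "ES256", "signer": signer}).encode()
        return base64.urlsafe_b64encode(raw).decode().rstrip("=") + ".payload.sig"
    ```

    A forged token signed by an attacker-controlled load balancer would carry that load balancer’s ARN in the signer field, so this comparison is what distinguishes it from a legitimately issued one.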
