Tag: hacks

  • Millions of Vehicles Could Be Hacked and Tracked Thanks to a Simple Website Bug

    In January 2023, they published the initial results of their work, an enormous collection of web vulnerabilities affecting Kia, Honda, Infiniti, Nissan, Acura, Mercedes-Benz, Hyundai, Genesis, BMW, Rolls-Royce, and Ferrari—all of which they had reported to the automakers. For at least half a dozen of those companies, the web bugs the group found offered at least some level of control of cars’ connected features, they wrote, just as in their latest Kia hack. Others, they say, allowed unauthorized access to data or the companies’ internal applications. Still others targeted fleet management software for emergency vehicles and could have even prevented those vehicles from starting, they believe—though they didn’t have the means to safely test out that potentially dangerous trick.

    In June of this year, Curry says, he discovered that Toyota appeared to still have a similar flaw in its web portal that, in combination with a leaked dealer credential he found online, would have allowed remote control of Toyota and Lexus vehicles’ features like tracking, unlocking, honking, and ignition. He reported that vulnerability to Toyota and showed WIRED a confirmation email seeming to demonstrate that he’d been able to reassign himself control of a target Toyota’s connected features over the web. Curry didn’t film a video of that Toyota hacking technique before reporting it to Toyota, however, and the company quickly patched the bug he’d disclosed, even temporarily taking its web portal offline to prevent its exploitation.

    “As a result of this investigation, Toyota promptly disabled the compromised credentials and is accelerating security enhancements of the portal, as well as temporarily disabling the portal until enhancements are complete,” a Toyota spokesperson wrote to WIRED in June.

    More Smart Features, More Dumb Bugs

    The extraordinary number of vulnerabilities in carmakers’ websites that allow remote control of vehicles is a direct result of companies’ push to appeal to consumers—particularly young ones—with smartphone-enabled features, says Stefan Savage, a professor of computer science at UC San Diego whose research team was the first to hack a car’s steering and brakes over the internet in 2010. “Once you have these user features tied into the phone, this cloud-connected thing, you create all this attack surface you didn’t have to worry about before,” Savage says.

    Still, he says, even he is surprised at the insecurity of all the web-based code that manages those features. “It’s a little disappointing that it’s as easy to exploit as it has been,” he says.

    Rivera says he’s observed firsthand in his time working in automotive cybersecurity that car companies often put more focus on “embedded” devices—digital components in non-traditional computing environments like cars—rather than web security, in part because updating those embedded devices can be far more difficult and lead to recalls. “It was clear ever since I started that there was a glaring gap between embedded security and web security in the auto industry,” Rivera says. “These two things mix together very often, but people only have experience in one or the other.”

    UCSD’s Savage hopes that the Kia-hacking researchers’ work might help shift that focus. Many of the early, high-profile hacking experiments that affected cars’ embedded systems, like the 2015 Jeep takeover and the 2010 Impala hack pulled off by Savage’s team at UCSD, persuaded automakers that they needed to better prioritize embedded cybersecurity, he says. Now car companies need to focus on web security too—even, he says, if it means making sacrifices or changes to their process.

    “How do you decide, ‘We’re not going to ship the car for six months because we didn’t go through the web code?’ That’s a tough sell,” he says. “I would like to think this kind of event causes people to look at that decision more fully.”

  • Some Mad Genius Put ChatGPT on a TI-84 Graphing Calculator

    On Saturday, a YouTube creator called ChromaLock published a video detailing how he modified a Texas Instruments TI-84 graphing calculator to connect to the internet and access OpenAI’s ChatGPT, potentially enabling students to cheat on tests. The video, titled “I Made the Ultimate Cheating Device,” demonstrates a custom hardware modification that lets users type problems on the calculator’s keypad, send them to ChatGPT, and receive live responses on the screen.

    ChromaLock began by exploring the calculator’s link port, typically used for transferring educational programs between devices. He then designed a custom circuit board he calls “TI-32” that incorporates a tiny Wi-Fi-enabled microcontroller, the Seeed Studio ESP32-C3 (which costs about $5), along with other components to interface with the calculator’s systems.

    It’s worth noting that the TI-32 hack isn’t a commercial project. Replicating ChromaLock’s work would involve purchasing a TI-84 calculator, a Seeed Studio ESP32-C3 microcontroller, and various electronic components, and fabricating a custom PCB based on ChromaLock’s design, which is available online.

    The creator says he encountered several engineering challenges during development, including voltage incompatibilities and signal integrity issues. After developing multiple versions, ChromaLock successfully installed the custom board into the calculator’s housing without any visible signs of modifications from the outside.

    To accompany the hardware, ChromaLock developed custom software for the microcontroller and the calculator, which is available open source on GitHub. The system simulates another TI-84, allowing people to use the calculator’s built-in “send” and “get” commands to transfer files. This allows a user to easily download a launcher program that provides access to various “applets” designed for cheating.

    One of the applets is a ChatGPT interface that might be most useful for answering short questions, but it has a drawback: typing long alphanumeric questions on the limited keypad is slow and cumbersome.
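
    ChromaLock’s own firmware and calculator-side programs are open source on GitHub, but they aren’t reproduced here. As a rough illustration of the server half of such a setup, the sketch below shows a minimal HTTP relay that accepts a plain-text question and forwards it to OpenAI’s Chat Completions API; the /ask route, plain-text framing, and model choice are assumptions made for illustration, not ChromaLock’s actual design.

```python
# Minimal sketch of a relay server a Wi-Fi calculator mod could call.
# The /ask route, plain-text framing, and model are illustrative assumptions.
import os

import requests
from flask import Flask, request

app = Flask(__name__)
OPENAI_URL = "https://api.openai.com/v1/chat/completions"

@app.route("/ask", methods=["POST"])
def ask():
    question = request.get_data(as_text=True).strip()
    resp = requests.post(
        OPENAI_URL,
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={
            "model": "gpt-4o-mini",
            "messages": [{"role": "user", "content": question}],
        },
        timeout=30,
    )
    resp.raise_for_status()
    # Plain text is easy for a microcontroller to chunk onto a small screen.
    answer = resp.json()["choices"][0]["message"]["content"]
    return answer, 200, {"Content-Type": "text/plain"}

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```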

    Beyond the ChatGPT interface, the device offers several other cheating tools. An image browser allows users to access pre-prepared visual aids stored on the central server. The app browser feature enables students to download not only games for post-exam entertainment but also text-based cheat sheets disguised as program source code. ChromaLock even hinted at a future video discussing a camera feature, though details were sparse in the current demo.

    ChromaLock claims his new device can bypass common anti-cheating measures. The launcher program can be downloaded on-demand, avoiding detection if a teacher inspects or clears the calculator’s memory before a test. The modification can also supposedly break calculators out of Test Mode, a locked-down state used to prevent cheating.

    While the video presents the project as a technical achievement, consulting ChatGPT during a test on your calculator almost certainly represents an ethical breach and/or a form of academic dishonesty that could get you in serious trouble at most schools. So tread carefully, study hard, and remember to eat your Wheaties.

    This story originally appeared on Ars Technica.

  • Your Phone Won’t Be the Next Exploding Pager

    Amid ongoing violent conflict with Israel, Hezbollah’s digital communications and activities are also under constant barrage from Israeli hackers. In fact, this constant digital assault reportedly played a role in pushing Hezbollah away from smartphone communication and toward pagers and walkie-talkies in the first place. “Your phone is their agent,” Hezbollah leader Hassan Nasrallah said in February, referring to Israel.

    The commercial spyware industry has shown it is possible to fully compromise target smartphones by exploiting chains of vulnerabilities in their mobile operating systems. Developing spyware and repeatedly finding new operating system vulnerabilities as older ones are patched is a resource-intensive process, but it is still less complicated and risky than conducting a hardware supply chain attack to physically compromise devices during or shortly after manufacturing. And for an attacker, monitoring a target’s entire digital life on a smartphone or laptop is likely more valuable than the device’s potential as a bomb.

    “I’d hazard a guess that the only reason we aren’t hearing about exploding laptops is that they’re collecting too much intelligence from those,” says Jake Williams, vice president of research and development at Hunter Strategy, who formerly worked for the US National Security Agency. “I think there’s also potentially an element of targeting, too. The pagers and personal radios could pretty reliably be expected to stay in the hands of Hezbollah operatives, but more general purpose electronics like laptops could not.”

    There are other more practical reasons, too, that the attacks in Lebanon are unlikely to portend a global wave of exploding consumer electronics anytime soon. Unlike portable devices that were originally designed in the 20th century, the current generation of laptops and particularly smartphones are densely packed with hardware components to offer the most features and the longest battery life in the most efficient package possible.

    University of Surrey’s Woodward, who regularly takes apart consumer devices, points out that within modern smartphones there is very limited space to insert anything extra, and the manufacturing process can involve robots precisely placing components on top of each other. X-rays show how tightly packed modern phones are.

    “When you open up a smartphone, I think the only way to get any sort of meaningful amount of high explosive in there would be to do something like replace one of the components,” he says, such as modifying a battery to be half battery, half explosives. But “replacing a component in a smartphone would compromise its functionality,” he says, which could lead a user to investigate the malfunction.

    In contrast, the model of pager linked to the explosions—a “rugged” device with 85 days of battery life—included multiple replaceable parts. Ang Cui, founder of the embedded device security firm Red Balloon Security, examined the schematics of the pager model apparently used in the attacks and told WIRED that there would be free space inside to plant explosives. The walkie-talkies that exploded, according to the manufacturer, were discontinued a decade ago. Woodward says that when opening up redesigned, current versions of older technologies, such as pagers, many internal electronic components have been “compressed” down as manufacturing methods and processor efficiency have improved.

  • Did a Chinese University Hacking Competition Target a Real Victim?

    Capture the flag hacking contests at security conferences generally serve two purposes: to help participants develop and demonstrate computer hacking and security skills, and to assist employers and government agencies with discovering and recruiting new talent.

    But one security conference in China may have taken its contest a step further—potentially using it as a secret espionage operation to get participants to collect intelligence from an unknown target.

    According to two Western researchers who translated documentation for China’s Zhujian Cup, also known as the National Collegiate Cybersecurity Attack and Defense Competition, one part of the three-part competition, held last year for the first time, had a number of unusual characteristics that suggest its potentially secretive and unorthodox purpose.

    Capture the flag (CTF) and other types of hacking competitions are generally hosted on closed networks or “cyber ranges”—dedicated infrastructure set up for the contest so that participants don’t risk disrupting real networks. These ranges provide a simulated environment that mimics real-world configurations, and participants are tasked with finding vulnerabilities in the systems, obtaining access to specific parts of the network, or capturing data.

    There are two major companies in China that set up cyber ranges for competitions. The majority of the competitions give a shout-out to the company that designed their range. Notably, Zhujian Cup didn’t mention any cyber range or cyber range provider in its documentation, leaving the researchers to wonder whether this is because the contest was held in a real environment rather than a simulated one.

    The competition also required students to sign a document agreeing to several unusual terms. They were prohibited from discussing the nature of the tasks they were asked to do in the competition with anyone; they had to agree not to destroy or disrupt the targeted system; and at the end of the competition, they had to delete any backdoors they planted on the system and any data they acquired from it. And unlike other competitions in China the researchers examined, participants in this portion of the Zhujian Cup were prohibited from publishing social media posts revealing the nature of the competition or the tasks they performed as part of it.

    Participants also were prohibited from copying any data, documents, or printed materials that were part of the competition; disclosing information about vulnerabilities they found; or exploiting those vulnerabilities for personal purposes. If a leak of any of this data or material occurred and caused harm to the contest organizers or to China, according to the pledge that participants signed, they could be held legally responsible.

    “I promise that if any information disclosure incident (or case) occurs due to personal reasons, causing loss or harm to the organizer and the country, I, as an individual, will bear legal responsibility in accordance with the relevant laws and regulations,” the pledge states.

    The contest was hosted last December by Northwestern Polytechnical University, a science and engineering university in Xi’an, Shaanxi, that is affiliated with China’s Ministry of Industry and Information Technology and also holds a top-secret clearance to conduct work for the Chinese government and military. The university is overseen by China’s People’s Liberation Army.

  • An AWS Configuration Issue Could Expose Thousands of Web Apps

    A vulnerability related to Amazon Web Services’ traffic-routing service known as Application Load Balancer could have been exploited by an attacker to bypass access controls and compromise web applications, according to new research. The flaw stems from a customer implementation issue, meaning it isn’t caused by a software bug. Instead, the exposure was introduced by the way AWS users set up authentication with Application Load Balancer.

    Implementation issues are a crucial component of cloud security in the same way that the contents of an armored safe aren’t protected if the door is left ajar. Researchers from the security firm Miggo found that, depending on how Application Load Balancer authentication was set up, an attacker could potentially manipulate its handoff to a third-party corporate authentication service to access the target web application and view or exfiltrate data.

    The researchers say that looking at publicly reachable web applications, they have identified more than 15,000 that appear to have vulnerable configurations. AWS disputes this estimate, though, and says that “a small fraction of a percent of AWS customers have applications potentially misconfigured in this way, significantly fewer than the researchers’ estimate.” The company also says that it has contacted each customer on its shorter list to recommend a more secure implementation. AWS does not have access to or visibility into its clients’ cloud environments, though, so any exact number is just an estimate.

    The Miggo researchers say they came across the problem while working with a client. This “was discovered in real-life production environments,” Miggo CEO Daniel Shechter says. “We observed a weird behavior in a customer system—the validation process seemed like it was only being done partially, like there was something missing. This really shows how deep the interdependencies go between the customer and the vendor.”

    To exploit the implementation issue, an attacker would set up an AWS account and an Application Load Balancer, and then sign their own authentication token as usual. Next, the attacker would make configuration changes so it would appear their target’s authentication service issued the token. Then the attacker would have AWS sign the token as if it had legitimately originated from the target’s system and use it to access the target application. The attack must specifically target a misconfigured application that is publicly accessible or that the attacker already has access to, but would allow them to escalate their privileges in the system.

    Amazon Web Services says that the company does not view token forging as a vulnerability in Application Load Balancer because it is essentially an expected outcome of choosing to configure authentication in a particular way. But after the Miggo researchers first disclosed their findings to AWS at the beginning of April, the company made two documentation changes geared toward updating its implementation recommendations for Application Load Balancer authentication. One, from May 1, included guidance to add validation before Application Load Balancer will sign tokens. And on July 19, the company also added an explicit recommendation that users set their systems to receive traffic from only their own Application Load Balancer using a feature called “security groups.”
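
    In practical terms, the recommended defense is for the application behind the load balancer to check, before trusting any forwarded identity, that the token in the x-amzn-oidc-data header was really signed by its own Application Load Balancer, and to accept traffic only from that load balancer. Below is a minimal sketch of the signer check in Python; the ARN, region, and key-endpoint handling follow how ALB documents its signed claims header, but the values are placeholders, and this is not code from Miggo or AWS.

```python
# Sketch: reject ALB-forwarded identity tokens unless they were signed by
# *your* load balancer. The ARN and region are placeholders for illustration.
import jwt  # PyJWT, with the "cryptography" package installed
import requests

EXPECTED_SIGNER = (
    "arn:aws:elasticloadbalancing:us-east-1:123456789012:"
    "loadbalancer/app/my-alb/abc123"
)
REGION = "us-east-1"

def verify_alb_token(encoded_token: str) -> dict:
    header = jwt.get_unverified_header(encoded_token)
    # The "signer" field names the load balancer that signed the token;
    # a token minted through an attacker-controlled ALB will not match.
    if header.get("signer") != EXPECTED_SIGNER:
        raise ValueError("token was not signed by the expected ALB")
    # Fetch the ALB public key for this key ID and verify the signature.
    key_url = f"https://public-keys.auth.elb.{REGION}.amazonaws.com/{header['kid']}"
    public_key = requests.get(key_url, timeout=5).text
    return jwt.decode(encoded_token, public_key, algorithms=["ES256"])
```

    Pairing a check like this with security groups that admit traffic only from the load balancer closes off the path the researchers describe.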

  • The Slow-Burn Nightmare of the National Public Data Breach

    Data breaches are a seemingly endless scourge with no simple answer, but the breach in recent months of background check service National Public Data illustrates just how dangerous and intractable they have become. And after four months of ambiguity, the situation is only now beginning to come into focus, with National Public Data finally acknowledging the breach on Monday just as a trove of the stolen data leaked publicly online.

    In April, a hacker who goes by USDoD and is known for selling stolen information began hawking a trove of data on cybercriminal forums for $3.5 million that they said included 2.9 billion records and impacted “the entire population of USA, CA and UK.” As the weeks went on, samples of the data started cropping up as other actors and legitimate researchers worked to understand its source and validate the information. By early June, it was clear that at least some of the data was legitimate and contained information like names, emails, and physical addresses in various combinations.

    The data isn’t always accurate, but it seems to involve two troves of information: one that includes more than 100 million legitimate email addresses along with other information, and a second that includes Social Security numbers but no email addresses.

    “There appears to have been a data security incident that may have involved some of your personal information,” National Public Data wrote on Monday. “The incident is believed to have involved a third-party bad actor that was trying to hack into data in late December 2023, with potential leaks of certain data in April 2024 and summer 2024. … The information that was suspected of being breached contained name, email address, phone number, social security number, and mailing address(es).”

    The company says it has been cooperating with “law enforcement and governmental investigators.” NPD is facing potential class action lawsuits over the breach.

    “We have become desensitized to the never-ending leaks of personal data, but I would say there is a serious risk,” says security researcher Jeremiah Fowler, who has been following the situation with National Public Data. “It may not be immediate and it could take years for one of the many criminal actors to successfully figure out how to use this information, but the bottom line is that a storm is coming.”

    When information is stolen from a single source, like Target customer data being stolen from Target, it’s relatively straightforward to establish that source. But when information is stolen from a data broker and the company doesn’t come forward about the incident, it’s much more complicated to determine whether the information is legitimate and where it came from. Typically, people whose data is compromised in a breach—the true victims—aren’t even aware that National Public Data held their information in the first place.

    In a blog post on Wednesday about the contents and provenance of the National Public Data trove, security researcher Troy Hunt wrote, “The only parties that know the truth are the anonymous threat actors passing the data around and the data aggregator. … We’re left with 134M email addresses in public circulation and no clear origin or accountability.”

  • Nearly All Google Pixel Phones Exposed by Unpatched Flaw in Hidden Android App

    Google’s flagship Pixel smartphone line touts security as a centerpiece feature, offering guaranteed software updates for seven years and running stock Android that’s meant to be free of third-party add-ons and bloatware. On Thursday, though, researchers from the mobile device security firm iVerify are publishing findings on an Android vulnerability that seems to have been present in every Android release for Pixel since September 2017 and could expose the devices to manipulation and takeover.

    The issue relates to a software package called “Showcase.apk” that runs at the system level and lurks invisible to users. The application was developed by the enterprise software company Smith Micro for Verizon as a mechanism for putting phones into a retail store demo mode—it is not Google software. Yet for years, it has been in each Android release for Pixel and has deep system privileges, including remote code execution and remote software installation. Even riskier, the application is designed to download a configuration file over an unencrypted HTTP web connection that iVerify researchers say could be hijacked by an attacker to take control of the application and then the entire victim device.
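
    Because the app runs at the system level with no icon or visible interface, the most direct way for a device owner to look for it is to query Android’s package manager. The sketch below does that over adb; the “showcase” and “preload” name patterns are assumptions for illustration, since the article does not give the APK’s package identifier.

```python
# Rough sketch: list installed packages over adb and flag anything whose name
# resembles the Showcase demo-mode app. The name patterns are assumptions;
# the article does not specify the package ID.
import subprocess

SUSPECT_PATTERNS = ("showcase", "preload")

def list_packages() -> list[str]:
    out = subprocess.run(
        ["adb", "shell", "pm", "list", "packages"],
        capture_output=True, text=True, check=True,
    ).stdout
    return [line.removeprefix("package:").strip() for line in out.splitlines() if line]

if __name__ == "__main__":
    hits = [p for p in list_packages() if any(s in p.lower() for s in SUSPECT_PATTERNS)]
    print("possible matches:", hits if hits else "none found")
```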

    iVerify disclosed its findings to Google at the beginning of May, and the tech giant has not yet released a fix for the issue. Google spokesperson Ed Fernandez tells WIRED in a statement that Showcase “is no longer being used” by Verizon, and Android will remove Showcase from all supported Pixel devices with a software update “in the coming weeks.” He added that Google has not seen evidence of active exploitation and that the app is not present in the new Pixel 9 series devices that Google announced this week. Verizon and Smith Micro did not respond to WIRED’s requests for comment ahead of publication.

    “I’ve seen a lot of Android vulnerabilities, and this one is unique in a few ways and quite troubling,” says Rocky Cole, chief operating officer of iVerify and a former US National Security Agency analyst. “When Showcase.apk runs, it has the ability to take over the phone. But the code is, frankly, shoddy. It raises questions about why third-party software that runs with such high privileges so deep in the operating system was not tested more deeply. It seems to me that Google has been pushing bloatware to Pixel devices around the world.”

    iVerify researchers discovered the application after the company’s threat-detection scanner flagged an unusual Google Play Store app validation on a user’s device. The customer, big data analytics company Palantir, worked with iVerify to investigate Showcase.apk and disclose the findings to Google. Palantir chief information security officer Dane Stuckey says that the discovery and what he describes as Google’s slow, opaque response has prompted Palantir to phase out not just Pixel phones, but all Android devices across the company.

    “Google embedding third-party software in Android’s firmware and not disclosing this to vendors or users creates significant security vulnerability to anyone who relies on this ecosystem,” Stuckey tells WIRED. He added that his interactions with Google throughout the standard 90-day disclosure window “severely eroded our trust in the ecosystem. To protect our customers, we have had to make the difficult decision to move away from Android in our enterprise.”

  • The Hacker Who Hunts Video Game Speedrunning Cheaters

    The night before Cecil’s Defcon talk, Maselewski wrote in a final email to WIRED that he believes those alleging that he cheated are using faulty tools with an incomplete picture of Diablo‘s complexities. “Dwango is out to tell a story. Did I cheat? No,” Maselewski writes. “But what is true or not does not matter at this point, because the wonder of exploration has already overstayed its welcome for a small group of people, and the script has already been written.”

    When WIRED reached out to the Guinness Book of World Records to ask if it would take down Maselewski’s record, a spokesperson responded noncommittally that “we value any feedback on our record titles and are committed to maintaining the highest standards of accuracy.” An administrator for Speed Demos Archive, or SDA, another speedrun record-keeping website where Maselewski holds a similar Diablo record, seemed to be more persuaded by Cecil’s evidence. That administrator, who goes by the handle “ktwo” and asked that WIRED not include their real name, says that SDA hasn’t officially reached a verdict and is still waiting to hear Maselewski’s explanation.

    Things are not looking good for groobo, however. “To be clear, we have made a preliminary decision, based on the available information,” ktwo writes. “The staff agrees that the analysis raises questions about the validity of the run that need to be addressed, or else the run will be unpublished from SDA. The admin team is currently discussing these questions with the runner. Once that discussion has concluded, a final decision will be made.”

    Cecil’s involvement in investigating gaming records began in 2017, when the speedrunner Eric “Omnigamer” Koziel, who was writing a book about speedrunning, began re-examining a record set by Todd Rogers for the Atari 2600 racing game Dragster. Rogers’ record time, 5.51 seconds, had persisted for a remarkable 35 years. But when Koziel reverse engineered Dragster’s code to try to understand how Rogers had achieved that time, he found that tricks Rogers said he’d used—such as starting the game in second gear—wouldn’t have provided the advantage Rogers claimed.

    “The goal was never to point to someone and say, ‘Hey, they’re cheating,’” says Koziel. “It was to try to find the truth.”

    Cecil, who knew Koziel from the speedrun community, offered to help develop a tool-assisted speedrun they could replay via TASbot on a real Atari 2600 to show that, even on that original hardware, Rogers’ record was impossible. They found that TASbot’s theoretically perfect performance was 5.57 seconds, slower than Rogers’ alleged time. Despite Rogers’ objections, his three-and-a-half-decade-old record was erased from the annals of the gaming records keeper Twin Galaxies—along with all his other records on the site—and Guinness stripped his world record for “longest-standing video game record.”
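
    The margin matters because it is measured in whole frames of input. A back-of-envelope version of the arithmetic is below, assuming the standard roughly 60 Hz NTSC frame rate (the rate is general knowledge about the Atari 2600, not a figure from the article):

```python
# How far below a frame-perfect run is the claimed time? ~60 Hz NTSC assumed.
FPS = 60
claimed = 5.51       # Rogers' long-standing Dragster record
tas_perfect = 5.57   # TASbot's theoretically perfect run on real hardware

gap = tas_perfect - claimed
print(f"claimed run beats a frame-perfect TAS by {gap:.2f}s (~{gap * FPS:.1f} frames)")
```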

    “Although I disagree with their decision, I must applaud them for their strong stance on the matter of cheating,” Rogers wrote in a lengthy public Facebook post responding to the Twin Galaxies decision.

  • Flaws in Ubiquitous ATM Software Could Have Let Attackers Take Over Cash Machines

    There is a grand tradition at the annual Defcon security conference in Las Vegas of hacking ATMs: unlocking them with safecracking techniques, rigging them to steal users’ personal data and PINs, crafting and refining ATM malware, and, of course, hacking them to spit out all their cash. Many of these projects targeted what are known as retail ATMs, freestanding devices like those you’d find at a gas station or a bar. But on Friday, independent researcher Matt Burch is presenting findings related to the “financial” or “enterprise” ATMs used in banks and other large institutions.

    Burch is demonstrating six vulnerabilities in ATM-maker Diebold Nixdorf’s widely deployed security solution, known as Vynamic Security Suite (VSS). The vulnerabilities, which the company says have all been patched, could be exploited by attackers to bypass an unpatched ATM’s hard drive encryption and take full control of the machine. And while there are fixes available for the bugs, Burch warns that, in practice, the patches may not be widely deployed, potentially leaving some ATMs and cash-out systems exposed.

    “Vynamic Security Suite does a number of things—it has endpoint protection, USB filtering, delegated access, and much more,” Burch tells WIRED. “But the specific attack surface that I’m taking advantage of is the hard drive encryption module. And there are six vulnerabilities because I would identify a path and files to exploit, and then I would report it to Diebold, they would patch that issue, and then I would find another way to achieve the same outcome. They’re relatively simplistic attacks.”

    The vulnerabilities Burch found are all in VSS’s functionality to turn on disk encryption for ATM hard drives. Burch says that most ATM manufacturers rely on Microsoft’s BitLocker Windows encryption for this purpose, but Diebold Nixdorf’s VSS uses a third-party integration to run an integrity check. The system is set up in a dual-boot configuration that has both Linux and Windows partitions. Before the operating system boots, the Linux partition runs a signature integrity check to validate that the ATM hasn’t been compromised, and then boots into Windows for normal operation.

    “The problem is, in order to do all of that, they decrypt the system, which opens up the opportunity,” Burch says. “The core deficiency that I’m exploiting is that the Linux partition was not encrypted.”

    Burch found that he could manipulate the location of critical system validation files to redirect code execution; or, in other words, grant himself control of the ATM.
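
    The failure pattern generalizes: if the partition holding the boot-time validator and its list of protected files is itself unencrypted, an attacker with offline access to the drive can repoint that list, or replace the files it names, and the integrity check will approve attacker-controlled code. The sketch below is a generic illustration of that weakness, not Diebold Nixdorf’s implementation; the manifest path and file names are invented.

```python
# Generic illustration: a path-based integrity check is only as strong as the
# storage holding the manifest. Paths and names are invented; not VSS code.
import hashlib
import json
from pathlib import Path

# Lives on the unencrypted pre-boot partition, so it is writable offline.
MANIFEST = Path("/pre-boot/integrity_manifest.json")

def verify_boot_files() -> bool:
    manifest = json.loads(MANIFEST.read_text())
    for entry in manifest["files"]:
        digest = hashlib.sha256(Path(entry["path"]).read_bytes()).hexdigest()
        if digest != entry["sha256"]:
            return False
    return True

# If this passes, the machine chains into the Windows partition. But because
# the manifest and validator sit on unencrypted storage, an attacker can edit
# the paths or hashes being checked, so a passing result says nothing about
# what actually runs next.
```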

    Diebold Nixdorf spokesperson Michael Jacobsen tells WIRED that Burch first disclosed the findings to them in 2022 and that the company has been in touch with Burch about his Defcon talk. The company says that the vulnerabilities Burch is presenting were all addressed with patches in 2022. Burch notes, though, that as he went back to the company with new versions of the vulnerabilities over the past couple of years, his understanding is that the company continued to address some of the findings with patches in 2023. And Burch adds that he believes Diebold Nixdorf addressed the vulnerabilities on a more fundamental level in April with VSS version 4.4 that encrypts the Linux partition.

  • How Hackers Extracted the ‘Keys to the Kingdom’ to Clone HID Keycards

    Finally, HID says that “to its knowledge,” none of its encoder keys have leaked or been distributed publicly, and “none of these issues have been exploited at customer locations and the security of our customers has not been compromised.”

    Javadi counters that there’s no real way to know who might have secretly extracted HID’s keys, now that their method is known to be possible. “There are a lot of smart people in the world,” Javadi says. “It’s unrealistic to think we’re the only people out there who could do this.”

    Despite HID’s public advisory more than seven months ago and the software updates it released to fix the key-extraction problem, Javadi says most of the clients whose systems he’s tested in his work don’t appear to have implemented those fixes. In fact, the effects of the key extraction technique may persist until HID’s encoders, readers, and hundreds of millions of keycards are reprogrammed or replaced worldwide.

    Time to Change the Locks

    To develop their technique for extracting the HID encoders’ keys, the researchers began by deconstructing the hardware: They used an ultrasonic knife to cut away a layer of epoxy on the back of an HID reader, then heated the reader to desolder and pull off its protected SAM chip. Then they put that chip into their own socket to watch its communications with a reader. The SAMs in HID’s readers and encoders are similar enough that this let them reverse engineer the SAM’s commands.

    Ultimately, that hardware hacking allowed them to develop a much cleaner, wireless attack: They wrote their own program to tell an encoder to send its SAM’s secrets to a configuration card without encrypting that sensitive data—while an RFID “sniffer” device sat between the encoder and the card, reading HID’s keys in transit.

    HID systems and other forms of RFID keycard authentication have, in fact, been cracked repeatedly, in various ways, in recent decades. But vulnerabilities like the ones set to be presented at Defcon may be particularly tough to fully protect against. “We crack it, they fix it. We crack it, they fix it,” says Michael Glasser, a security researcher and the founder of Glasser Security Group, who has discovered vulnerabilities in access control systems since as early as 2003. “But if your fix requires you to replace or reprogram every reader and every card, that’s very different from a normal software patch.”

    On the other hand, Glasser notes that preventing keycard cloning represents just one layer of security among many for any high-security facility—and practically speaking, most low-security facilities offer far easier ways to get in, such as asking an employee to hold a door open for you while you have your hands full. “Nobody says no to the guy holding two boxes of donuts and a box of coffee,” Glasser says.

    Javadi says the goal of their Defcon talk wasn’t to suggest that HID’s systems are particularly vulnerable—in fact, they say they focused their years of research on HID specifically because of the challenge of cracking its relatively secure products—but rather to emphasize that no one should depend on any single technology for their physical security.

    Now that they have made clear that HID’s keys to the kingdom can be extracted, however, the company and its customers may nonetheless face a long and complicated process of securing those keys again. “Now customers and HID have to claw back control—and change the locks, so to speak,” Javadi says. “Changing the locks is possible. But it’s going to be a lot of work.”
