The argument was launched as a blog post on Oracle's site under the name of Mary Ann Davidson, Oracle's Chief Security Officer, on March 28. The source of irritation is a 2010 change to the PCI Payment Application Vendor Release Agreement (VRA).
It's a change, Davidson wrote, that "imposed new obligations on vendors that are extraordinary and extraordinarily bad, short-sighted and unworkable. Specifically, PCI requires vendors to disclose—dare we say 'tell all?'—to PCI any known security vulnerabilities and associated security breaches involving Validated Payment Applications (VPA). Think about the impact of that. PCI is asking a vendor to disclose specific details of security vulnerabilities, including exploit information or technical details of the vulnerability and whether or not there is any mitigation available, as in a patch."
Davidson continued, saying that PCI retains "the right to blab about any and all of the above—specifically, to distribute all the gory details of what is disclosed—to the PCI SSC, qualified security assessors (QSAs), and any affiliate or agent or adviser of those entities, who are in turn permitted to share it with their respective affiliates, agents, employees, contractors, merchants, processors, service providers and other business partners. This assorted crew can't be more than, oh, hundreds of thousands of entities. Does anybody believe that several hundred thousand people can keep a secret? Or that several hundred thousand people are all equally trustworthy? Or that not one of the people getting all that information would blab vulnerability details to a bad guy, even by accident? Or be a bad guy who uses the information to break into systems? Common sense tells us that telling lots of people a secret is guaranteed to unsecret the secret."
She added: "Why would anybody release a bunch of highly technical exploit information to a cast of thousands, whose only 'vetting' is that they are members of a PCI consortium?"
Oracle's case is well articulated, but there is another side to this. Not every hole is patched right away. Don't retailers have the right to know not merely that a hole exists, but enough of the details to make informed decisions? Maybe that app needs to be halted immediately, or maybe avoiding certain features or situations would provide adequate temporary protection. Could the hole's details even solve a lingering tech mystery for a chain, one where a particular problem kept cropping up?
Not only did the Oracle post argue that divulging details widely could help the bad guys, but it also questioned whether those details would even meaningfully help the good guys.
"Notably, being provided details of a vulnerability (without a patch) is of little or no use to companies running the affected application. Few users have the technological sophistication to create a workaround and, even if they do, most workarounds break some other functionality in the application or surrounding environment," Davidson said. "Also, given the differences among corporate implementations of any application, it is highly unlikely that a single workaround is going to work for all corporate users. So until a patch is developed by the vendor, users remain at risk of exploit: even more so if the details of vulnerability have been widely shared. Sharing that information widely before a patch is available, therefore, does not help users, and instead helps only those wanting to exploit known security bugs."

Davidson said that Oracle itself practices a strict "need to know" approach with its security information. "We use our own row level access control to limit access to security bugs in our bug database, and thus less than one percent of development has access to this information," she wrote.
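The "row level access control" Davidson mentions is a standard database technique: each record carries its own access list, and every query is filtered per requesting user. The sketch below is a hypothetical, minimal Python illustration of that idea (it is not Oracle's implementation; all class and field names here are invented for illustration):

```python
# Minimal sketch of row-level access control for a bug database.
# Each bug row carries the set of users cleared to read it; queries
# return only the rows the requesting user is cleared for, so
# vulnerability details stay visible to a small subset of developers.
from dataclasses import dataclass, field


@dataclass
class BugRecord:
    bug_id: int
    summary: str
    # Users explicitly cleared to read this row; everyone else sees nothing.
    cleared_users: set = field(default_factory=set)


class BugDatabase:
    def __init__(self):
        self._rows = {}

    def insert(self, row: BugRecord):
        self._rows[row.bug_id] = row

    def query(self, user: str):
        """Return only the rows this user is cleared to read."""
        return [r for r in self._rows.values() if user in r.cleared_users]


db = BugDatabase()
db.insert(BugRecord(1, "SQL injection in payment module", {"alice"}))
db.insert(BugRecord(2, "Buffer overflow in parser", {"alice", "bob"}))

print(len(db.query("alice")))    # cleared for both rows
print(len(db.query("mallory")))  # uncleared user sees no rows at all
```

In a production system the same effect is usually achieved inside the database itself (for example, per-row security policies enforced by the engine), so that no application code path can bypass the filter.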
She also shared an example from overseas of how Oracle itself was hurt by excessive sharing of security information.
"One of the UK intelligence agencies had information about a non-public security vulnerability in an Oracle product that they circulated among other UK and Commonwealth defense and intelligence entities. Nobody, it should be pointed out, bothered to report the problem to Oracle, even though only Oracle could produce a patch," Davidson wrote. "The vulnerability was finally reported to Oracle by (drum roll) a U.S.-based commercial company, to whom the information had leaked. Note: every time I tell this story, the MI-whatever agency that created the problem gets a bit shirty with us. I know they meant well and have improved their vulnerability handling/sharing processes but, dudes, next time you find an Oracle vulnerability, try reporting it to us first before blabbing to lots of people who can't actually fix the problem."
Oracle also addressed the pragmatic side of security patches: the most dangerous window is after a patch has been publicly released but before everyone has had a chance to implement it. Davidson specifically argued that this PCI Council policy sharply widens that window of vulnerability.
The "current requirement for the widespread distribution of security vulnerability exploit details—at any time, but particularly before a vendor can issue a patch or a workaround—is very poor public policy. It effectively publicizes information of great value to potential attackers while not providing compensating benefits—actually, any benefits—to payment card merchants or consumers. In fact, it magnifies the risk to payment card merchants and consumers," she wrote. "The risk is most prominent in the time before a patch has been released, since customers often have little option but to continue using an application or system despite the risks. However, the risk is not limited to the time before a patch is issued: customers often need days, or weeks, to apply patches to systems, based upon the complexity of the issue and dependence on surrounding programs. Rather than decreasing the available window of exploit, this requirement increases the available window of exploit, both as to time available to exploit a vulnerability and the ease with which it can be exploited. Also, why would hackers focus on finding new vulnerabilities to exploit if they can get 'EZHack' handed to them in such a manner: a) a vulnerability; b) in a payment application; c) with exploit code. It's the Hacking Trifecta. It's fair to say that this is probably the exact opposite of what PCI—or any of us—would want."
Davidson painted the picture of a conscientious vendor (which Oracle often actually is, for whatever that is worth) that tends to security issues quickly and professionally. But that's not representative of the entire vendor community, and given the seriousness of security, more information is often better.
Still, Oracle's argument that the current PCI policy shares such data with far too many people is quite valid. Perhaps the best resolution is not to spike the policy but to narrow the group with access: a small circle of retail IT security people, something akin to the U.S. government's Gang of Eight, the group of congressional leaders who can be briefed on covert actions when wider congressional disclosure would be too dangerous.
Oracle argued in its full post that it has been trying to convince the PCI Council to abandon this policy for an extended period but that the Oracle questions "have gone, to date, unanswered."
One legitimate reason for the silence is the nature of the PCI Council. Given the number of people involved, it's a body that can't move very quickly. More to the point, the people who would need to approve such a change are the very ones who receive that information today and who would be deprived of it if the provision were stricken.
Put another way, this request is saying to the PCI decision-making groups: "We don't trust you. Change the rule that says we have to tell you everything."
The argument that "this is the right thing to do" is a lot more effective when it's not directed at people you've just insulted. Gosh, I can't envision what could possibly be holding things up.