In almost all U.S. state and federal data breach disclosure laws, a loophole lets a retailer avoid disclosure if law enforcement says it would help the investigation to keep the breach secret. The U.S. Securities and Exchange Commission (SEC), however, has no such exemption.
This is huge. It means that if the Secret Service or FBI tells a chain to keep an incident secret or else risk disrupting an active investigation, a company that complies—and keeps word of the breach out of SEC filings—may be guilty of securities fraud. In this federal agency versus federal/state agency tug of war, the retailer victim may be victimized again.
The SEC guidance recognizes that significant cybersecurity incidents should be reported promptly. This suggests that if a regulated company discovers an incident "after the balance sheet date but before the issuance of financial statements," it should consider going to the extraordinary length of issuing a separate disclosure just of the incident itself, along with an estimate of its financial impact.
Thus, companies may be in the unusual position of not disclosing an attack to affected customers, because the FBI or Secret Service has asked them not to, but having to then issue an SEC disclosure of a "material nonrecognized subsequent event" to the investing public.
Moreover, it is unlikely that law enforcement could even lawfully ask a company to refrain from a disclosure: failing to disclose a material fact (such as a major cyber breach) would constitute "fraud in connection with the purchase or sale of securities," and the cops can hardly tell a company, "go ahead, defraud your shareholders and the investing public." Yet another conundrum for regulated entities.
SEC disclosure poses other problems, too: timing and detail. A meaningful disclosure might expose the company to even more attacks. If you say, for example, "we completed an audit of our PCI payment systems, found them completely vulnerable to attack, and there is nothing we can do about it," not only would shareholders rightfully panic, but cyberthieves would rejoice. The more detailed (and meaningful) the disclosure, the more it exposes the company to further attack and, therefore, further diminution of share value. Is that really what investors want?
It doesn't seem to be what companies want. Last week, a report by Reuters indicated people were shocked, shocked to find out that most companies were not reporting cybersecurity incidents. The Reuters report quoted President Obama's former cybersecurity policy advisor, Melissa Hathaway, and Stu Baker, a former top DHS official, as wondering why companies are mostly not reporting these attacks to shareholders. This follows an Oct. 13, 2011, guidance by the SEC that publicly traded or regulated entities should disclose such attacks where they would materially affect the value of the company. The key word here is "materiality."
In the movie Apollo 13, there's a debate about whether or not to tell the astronauts in the crippled craft that their trajectory may be too shallow and that they might just bounce off the Earth's atmosphere into space. The flight director asks if there is anything the astronauts can do about it and is told, "not now." He replies: "Then they don't need to know, do they?" The same is true for cybersecurity events.
Ask yourself the question: "If a company I invested in had a cybersecurity incident, would I want to know?" If the company is a retailer or payment processor, and the incident involved PCI cardmember data, then the odds are very good that a disclosure would already have been made—at least to the affected parties, if not to shareholders. The same is true if the incident involved health data from a provider, payer or business associate. In other cases of data breaches (or even vulnerabilities), the test for public disclosure to shareholders is whether the breach itself would materially affect the share price or the decision of whether to invest. If a breach compromised a key manufacturing plant or a critical trade secret, or if it disrupted manufacturing or production of a key component for a significant period of time, then a disclosure would be warranted.
Similarly, if a company discovered that the cost of responding to a breach (including investigation, response, forensics and remediation) would be significant, then a disclosure probably should be made. If a company finds a serious vulnerability (even if there is no exploit yet) and determines it will have to spend a billion dollars next year to fix the problem, then for almost any company that information would be on the list for possible disclosure. The test is materiality, and the SEC guidance says as much.
For some companies, the problem is reputational risk. Although a breach costing a major bank or brokerage house even $1 million, or even $10 million, may not make a dent in that company's bottom line, the fact that the bank or brokerage had a vulnerability might, in and of itself, be considered significant. Banks aren't supposed to be able to be broken into, right?
The problem in such cases is that the disclosure itself might cause the loss of confidence in the bank. (The truth is that disclosures rarely cause such loss of confidence over time, but that is the fear anyway.) The loss of confidence might lead to loss of sales/investors and, therefore, might lead to a drop in share price. Indeed, the disclosure of the breach might cause a more material drop in share price than the breach itself.
Unfortunately (or fortunately), companies subject to SEC rules have little leeway. If the breach or vulnerability is material, it should be disclosed under SEC guidance, even if the loss of confidence resulting from the breach will cause even greater loss. The poor shareholders who continue to hold the stock after the disclosure may suffer a double whammy of loss from the breach and loss from the disclosure.
There are footnotes galore here. First, the law enforcement exemption lends itself to abuse. If any company executive asks a criminal investigator (be it Secret Service, FBI, local police, etc.) whether keeping the details quiet would make things easier for the probe, the answer will invariably be "sure." The less a suspect knows about what law enforcement knows, the better.
There are no criteria to differentiate between a true need for secrecy (such as a suspect in the middle of a sting operation) and a case where secrecy is merely convenient. In short, if a retailer wants an excuse to keep the breach quiet, all the company needs is for one of the investigators to offer up that "sure." It doesn't cost those folks anything, and they generally have to answer to no one about it.
Without limitations, such law enforcement exemptions make many data breach rules close to meaningless. That's true if the intent is to force a retailer to disclose that which it does not want to disclose. The SEC's rules go to the other extreme.
Materiality specifically refers to likely and foreseeable stock price impact. But the intent of SEC disclosures goes beyond that. Would it be useful for a potential investor in a major retail chain to know that the chain suffered nine significant data breaches this year? Consumers tend to be apathetic about such matters, but investors might feel differently. Would not a more reasonable SEC definition be "information that would likely cause an investor or a potential investor to act differently"?
If you disagree with me, I'll see you in court, buddy. If you agree with me, however, I would love to hear from you.