The Danger Of Assuming Perfection

In last week's lead story, PCI Columnist Walter Conway wrote a hard-hitting column questioning whether--under very limited circumstances--carelessly used encryption might actually weaken a retailer's data security. In security circles, it's heresy to question encryption and, predictably, the emotional reaction to the column was intense.

It's not often that people challenge our technical conclusions while simultaneously questioning the marital status of our mothers. The column suffered from one key technical error, misjudging how easy it would be to extract clues to an encryption key if you encrypted the short payment-card expiration date field. Walt admitted that error--and explained the context--in his column this week. (By the way, if anyone else wants to yell at us, this week has a column from Frank Hayes that questions the very premise of security passwords. Gluttons for punishment we be, a rare breed of journalistic masochists.) But there's a bigger issue at play here, a long-standing technology frustration beneath the emotions.

The fundamental challenge to Walt's column is that the security holes described are irrelevant for any system that is properly protected with professionally managed cryptography. As a practical matter, I think that's a fair point. The attacks Walt described can be thwarted by the proper defenses. But how many retail systems are protected by such properly managed systems?

There are only so many fully trained cryptographers and, by definition, not all of them are the best. If there are only a handful on an IT team, it stands to reason they are being supervised by people who know far less about cryptography than they do. With the workload piling on, isn't it possible that some cryptographers may cut corners, knowing that no one else on the team would notice?

The point is, it's dangerous for senior IT managers to make security--or any other technology--decisions based on the premise that all security implementations are done flawlessly. For such an assumption to be valid, that perfect work must be performed by every team member (and, potentially, their predecessors), along with the coders from every partner in the supply chain, including, of course, payment processors.

Walt's original premise was that encrypting short fields--especially ones that are relatively easy to guess--is unnecessary and is asking for trouble. Why do it? Why not make decisions based on the belief that everything may not be perfectly crafted?
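To see why a short, guessable field is a soft target under imperfect practices, consider a minimal Python sketch. It uses an HMAC as a stand-in for any deterministic cipher (the kind a corner-cutting implementation might use, such as AES-ECB with a fixed key); the key name and date formats here are hypothetical. Because an expiration date has only a tiny number of possible values, an attacker who can feed chosen values through the system can build a complete lookup table without ever learning the key:

```python
import hashlib
import hmac

KEY = b"hypothetical-demo-key"  # stand-in for the merchant's real key

def det_encrypt(plaintext: str) -> str:
    # Stand-in for a deterministic cipher: the same plaintext
    # always produces the same ciphertext.
    return hmac.new(KEY, plaintext.encode(), hashlib.sha256).hexdigest()

# An expiry field has tiny entropy: 12 months x ~10 plausible years.
candidates = [f"{m:02d}/{y:02d}" for m in range(1, 13) for y in range(24, 34)]
print(len(candidates))  # 120 possible plaintexts in total

# An attacker who can run chosen values through the system
# builds a complete ciphertext -> plaintext dictionary...
table = {det_encrypt(p): p for p in candidates}

# ...and instantly reverses any intercepted expiry ciphertext.
stolen = det_encrypt("07/27")
print(table[stolen])  # recovers "07/27" without touching the key
```

A randomized mode (a fresh IV per record) defeats this particular table-building trick--which is exactly the point: the defense only holds if every implementation along the chain actually uses it.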

This issue isn't merely one of encryption. In last month's eight-day-long set of uptime headaches for American Eagle Outfitters, the problem turned out to involve an IBM team, a remarkably unlikely set of concurrent server failures and a disaster recovery site that had not been properly maintained. Do you honestly think that no one at IBM cut a few corners? After all, what are the odds of both servers dying at the same time? "That disaster recovery won't likely be needed, so who will notice if we don't keep on top of it? We'll get to it next week."

Manual server-log reviews are another area where IT staff will often, little by little, cut back on time spent--especially as the holiday rush kicks in and every IT staffer is needed 24x7. Remember this golden oldie: TJX's IT team didn't notice Gonzalez's team moving 80 gigabytes of data out the door. Or this classic from Wal-Mart: An account for a former employee was left active, thus allowing the thieves to use it repeatedly for--wait for it--17 months.
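Both of those breaches would have tripped even a crude automated check. Here is a minimal sketch--the log records, account names, and byte thresholds are all hypothetical--of the two tests a daily review could run: flag any account moving far more data than normal, and flag any activity on an account that HR says should be dead:

```python
# Hypothetical daily log records: (account, bytes transferred out).
daily_logs = [
    ("svc_backup", 2_000_000_000),
    ("jdoe", 80_000_000_000),      # an exfiltration-sized transfer
    ("former_employee", 500_000),  # activity on a supposedly closed account
]

TERMINATED = {"former_employee"}   # HR's list of departed staff
BASELINE_BYTES = 10_000_000_000    # hypothetical per-account daily norm

alerts = []
for account, bytes_out in daily_logs:
    if bytes_out > BASELINE_BYTES:
        alerts.append(f"volume: {account} moved {bytes_out:,} bytes")
    if account in TERMINATED:
        alerts.append(f"stale account: {account} is still active")

for alert in alerts:
    print(alert)
```

The point isn't that twenty lines of Python would have saved TJX or Wal-Mart; it's that the checks are trivial, which makes skipping them during the holiday crunch all the more tempting--and all the more costly.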

What if Wal-Mart had assumed this ploy couldn't work against the company because accounts are always terminated as soon as an employee leaves? "See? It says so right here in the manual." Or, what if TJX felt invulnerable to a Gonzalez-type assault because someone was paid to monitor the logs?

The encryption scenario Walt painted may not have worked in a cryptographically perfect environment. In the real world, though, if it were me, I'd stop encrypting short, easily guessed fields. But that's just me.