Two security stories showed up in my email box this morning.

One was the further revelations of unbelievably lax security handling by the NSA or related military intelligence agencies, who managed to leave all sorts of highly sensitive data on an unsecured Amazon AWS instance. The other was the goatse-sized (no, don’t search) gaping hole in Apple’s latest macOS update that is just begging to be exploited by bad people everywhere.

Both of these are levels of bad that make Madam Hillary’s closet email server look like sensitive data handling best practice.

To start with the Apple one. As explained in some detail by Patrick Wardle, the bug seems to be a classic case of getting one’s if/then/else conditionals wrong. We haven’t seen the actual source code, but in terms of what it says about Apple’s software QA* this looks uncomfortably close to Apple’s GOTO FAIL bug of a few years back. The first problem we seem to have here is that code reviews do not appear to have been used successfully – and code reviews are absolutely best practice, because a second pair of eyes will often find bugs the original coder missed (see ESR’s dictum, “Given enough eyeballs, all bugs are shallow”). This is also why change control for many critical systems requires a reviewer separate from the person proposing the change. Of course code reviews aren’t perfect, for lots and lots of reasons: very few reviews examine every line of code, and the reviewer is often a colleague who is familiar with the code and thus likely to see what he expects rather than what is actually there. So there should be proper QA as well.
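Since we haven’t seen Apple’s source, here is a deliberately schematic Python sketch – every name in it is hypothetical, and this is emphatically not Apple’s code – of the general shape of failure Wardle describes: one misplaced branch turning “this account has no password set” into “accept whatever was just typed, and store it”.

```python
import hashlib

def hash_password(password):
    return hashlib.sha256(password.encode()).hexdigest()

def authenticate(user_db, username, password):
    record = user_db.get(username)
    if record is None:
        return False                      # unknown user: reject
    stored_hash = record.get("shadow_hash")
    if stored_hash is None:
        # BUG: a disabled account (like root) has no stored hash.
        # "Upgrading" the record here silently sets the attacker's
        # chosen password and lets them straight in.
        record["shadow_hash"] = hash_password(password)
        return True
    return stored_hash == hash_password(password)
```

With a disabled root-style account (`{"root": {"shadow_hash": None}}`), the first call with any password at all succeeds – and so does every call after it with that same password, because the bug has helpfully persisted it.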

QA is used to test both that the right things happen when proper inputs are given, and that erroneous inputs are caught by something that reports the error rather than running the code anyway. A good example of what this means is this old joke:

QA Engineer walks into a bar. Orders a beer. Orders 0 beers. Orders 999999999 beers. Orders a lizard. Orders -1 beers. Orders a sfdeljkn;lohasg;lkhwe;rotijasndf;asjq);drop table *;oihrewt.asdklfjbalwkejthkladnf.asdfnhoqwuher;oiuwqyproi3po4rlnclkno;2iy3roiu23p5yp23ou5
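The joke translates almost directly into an automated test suite. A minimal sketch – the order_beers function is hypothetical, standing in for whatever code is actually under test:

```python
def order_beers(n):
    """Accept a sensible integer beer order; reject everything else."""
    if not isinstance(n, int) or isinstance(n, bool):
        raise TypeError("beer count must be an integer")
    if n < 0:
        raise ValueError("cannot order a negative number of beers")
    return f"{n} beer(s) coming up"

def expect_error(fn, exc):
    """True if fn() raises exc -- the 'erroneous input' half of QA."""
    try:
        fn()
    except exc:
        return True
    return False

# Happy path: the things that are supposed to happen, happen.
assert order_beers(1) == "1 beer(s) coming up"
assert order_beers(0) == "0 beer(s) coming up"
assert order_beers(999999999) == "999999999 beer(s) coming up"

# Error path: bad inputs are reported, not acted upon.
assert expect_error(lambda: order_beers(-1), ValueError)
assert expect_error(lambda: order_beers("a lizard"), TypeError)
assert expect_error(lambda: order_beers("'); drop table *;"), TypeError)
```

The point is that every line of the joke is a test case, and the suite costs essentially nothing to re-run on every change.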

Both Apple bugs show that effective QA didn’t happen. There were obvious cases for both (an invalid certificate and a non-existent user, respectively) that should have been tested – and not just tested once, but tested automatically as part of a test suite that runs on every change to the code in question. That’s very concerning because (pace Oscar Wilde’s Lady Bracknell) while having one such mistake is unfortunate, having two looks like carelessness. If that is the case then it suggests that there are likely to be numerous other places in Apple’s code where similar gaping holes lurk, just waiting for someone to order a “sfdeljkn;lohasg;lkhwe;rotijasndf;asjq’);drop table *;oihrewt.asdklfjbalwkejthkladnf.asdfnhoqwuher;oiuwqyproi3po4rlnclkno;2iy3roiu23p5yp23ou5”.

The unsecured data on AWS is similar sloppiness. Not that the NSA/INSCOM people (or their contractors) who did it are unique in being sloppy – a web search for “AWS unsecured data” produces numerous examples of other organizations leaving stuff visible to everyone on AWS – but it seems particularly egregious when it is done on behalf of an organization that spies on the rest of the world in part by taking advantage of exactly this kind of mistake. As the article notes, it isn’t clear who is to blame:

UpGuard did not determine whether INSCOM operators or contractors working for Invertix failed to secure the cloud data cache. Amazon Web Services is not to blame as it requires clients to establish their privacy settings.

But UpGuard said failure by contractors to secure data is a “silent killer” for the defense establishment.

I’d add to the “silent killer” point that this is, yet again, an example of poor process and verification. An obvious test for any repository of non-public data is to check whether some random stranger has access to it. It’s the sort of thing a competent security agency would make a standard part of any go-live effort. The fact that INSCOM apparently didn’t check this for at least four years is pretty pathetic. I can just about accept that they might have missed it in the early days of AWS, but we’ve moved well beyond that now. Any and every organization, but particularly one holding sensitive espionage data, ought by now to require that contractors and internal teams document what public cloud resources they use, and then have a team that verifies, with automated scanning tools, who can actually access them.
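What such an automated check might look like, in sketch form: everything here is hypothetical (no real scanner’s API), and the unauthenticated HTTP fetch is injected as a function so the logic can be demonstrated without network access.

```python
DENIED = {401, 403}   # explicitly refused
HIDDEN = {404}        # resource not visible at all

def is_publicly_readable(url, fetch):
    """fetch(url) -> HTTP status code for an *unauthenticated* request."""
    status = fetch(url)
    # Any response other than "denied" or "not found" means an
    # anonymous stranger got something meaningful back.
    return status not in DENIED | HIDDEN

def audit(resources, fetch):
    """Return the declared cloud resources a random stranger can read."""
    return [url for url in resources if is_publicly_readable(url, fetch)]
```

In real life the fetch would be an anonymous HTTP GET, and the resource list would come from the mandatory contractor inventory described above; a non-empty audit result should block go-live.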

Only “Top Men” could fail to understand this and implement it.

*Note that, like the ZDNet author in the previous link, I’m not complaining about the ‘goto’ statement per se – it’s not wonderful, but in cases like this, where multiple tests can fail and need a common handler when they do, it isn’t stupid, and in some ways it makes the code more readable than some of the constructs used to avoid it, such as try/catch blocks.