It's been an interesting week in information security. It's always an interesting week, but this time the indisputable headline is the Equifax data breach. I won't talk about it specifically in this post, other than to refer to it briefly later on. In any event, people far more knowledgeable than me on the matter are saying plenty.
When I unofficially started out in web application security (circa late 2013), I had some interest in the subject and a basic understanding of it and why it was important. Working with Carl (see Honourable mentions), we decided that what the firm needed was a vulnerability scanning tool. We didn't have someone to own it, run it, decipher the reporting it provided or deliver improvement recommendations to developers or sysadmins, yet we definitely needed a tool.
Somehow, we managed to secure the budget for such a tool and we plumped for Netsparker. We looked at other vendors, PortSwigger being one, but as our predominant technology choice was Microsoft back then, Netsparker seemed to play more nicely with our stack than anything else on offer at the time.
So, we had our license and a machine to run it on. Brilliant. Now what?
Well, you point it at a URL and off it goes. It crawls, it finds, it attacks, it confirms and then it reports. That's what it does. I'll write in more detail about Netsparker in a future post, but the bottom line is this: you give it a web application and it finds everything wrong with it, from a security point of view. It also does other cool things, but again, I'll go into more detail another time.
There have been many debates over the years about automated versus manual testing of software security. All I'll say is this: my money goes on a machine that does the spade work, leaving me to simply interpret the output and go about my day. It would take an army of pen testers days to perform the myriad tests that Netsparker completes in at most a few hours. Netsparker CEO Ferruh Mavituna talked about this on Paul Asadoorian's Security Weekly, and that conversation explains it very well.
Right, we have a tool and we have a variety of web applications. In 2013, our main area of interest or concern was our public facing stuff; customer portals, things with forms and APIs. If you recall my previous post, there were 27 of them at the time. Now we're pushing 100.
By running scans against these original 27 web applications (manually, by the way, as in press 'Scan' and wait until complete), certain patterns began to reveal themselves:
Crap cookie security
Amongst other things, cookies are the passports for your travel around a website, often containing your session or other identity-related data. Should someone who isn't you obtain those cookies, they can hijack your session, i.e. impersonate you. This isn't good.
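To make that hijacking harder, a session cookie should carry a few protective flags. Here's a minimal sketch; the helper name and cookie name are illustrative, not from any particular framework:

```python
# A sketch of a hardened session cookie. The "session" name and the
# helper itself are made up for illustration.

def session_cookie(session_id: str) -> str:
    """Build a Set-Cookie header value with sensible protections."""
    return (
        f"session={session_id}; "
        "Secure; "          # only ever sent over HTTPS
        "HttpOnly; "        # invisible to JavaScript, blunting XSS theft
        "SameSite=Strict"   # not sent on cross-site requests (CSRF help)
    )

print(session_cookie("abc123"))
```

Most frameworks expose these flags as configuration options, so in practice you flip a few settings rather than hand-roll the header.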
Iffy security header implementations
Security headers instruct modern browsers to obey the rules of the website you're visiting, from protecting you against XSS attacks through to ensuring your connection is fully encrypted at all times (provided a proper TLS configuration is in place).
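As a sketch of what a sensible baseline looks like, here are a few of the common headers and values. The exact policy values are assumptions to adapt per application, not universal prescriptions:

```python
# An illustrative baseline of security response headers; the values
# here are assumptions and should be tuned to your application.

SECURITY_HEADERS = {
    # Force HTTPS for a year once the browser has seen this header
    "Strict-Transport-Security": "max-age=31536000; includeSubDomains",
    # Only load resources from our own origin (tighten/loosen per app)
    "Content-Security-Policy": "default-src 'self'",
    # Stop browsers guessing (sniffing) content types
    "X-Content-Type-Options": "nosniff",
    # Refuse to be framed by other sites (clickjacking defence)
    "X-Frame-Options": "DENY",
}

for name, value in SECURITY_HEADERS.items():
    print(f"{name}: {value}")
```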
Dodgy versions of third party components
OK, so not all of the software in your web application is your own. It makes no sense to try to solve a problem that someone else has solved already. So, you consume frameworks, libraries and the like and make them do things that save developers time. The problem is, these components go stale over time and in many cases become vulnerable. With this comes the risk that software you're not directly responsible for becomes a potential attack vector that could compromise you. This is what appears to have profoundly bitten Equifax.
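The core idea behind dependency auditing can be sketched in a few lines: compare the versions you've pinned against versions known to be vulnerable. The advisory data below is entirely made up; real tools consult curated advisory databases rather than a hard-coded dict:

```python
# A toy sketch of dependency auditing. "examplelib" and its versions
# are hypothetical; real tooling pulls from advisory databases.

KNOWN_VULNERABLE = {
    "examplelib": {"1.0.0", "1.0.1"},  # hypothetical advisory entries
}

def vulnerable_pins(pins: dict) -> list:
    """Return the pinned packages that match a known-bad version."""
    return [name for name, version in pins.items()
            if version in KNOWN_VULNERABLE.get(name, set())]

print(vulnerable_pins({"examplelib": "1.0.0", "otherlib": "2.3.4"}))
```

The hard part isn't the check itself; it's making the check routine, and acting on what it finds.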
Weak TLS configurations
If your servers support SSLv3, TLS versions below 1.2, the RC4 cipher or MD5 hashing, and so on and so forth, you're effectively allowing your clients to connect to your web application in an insecure way. You're allowing bad guys to potentially, and easily, decrypt traffic that may very well contain sensitive information. Not supporting HTTP Strict Transport Security (HSTS) is also a terrible place to be if sensitive information is being shared. I'm no expert on cryptography, but these are the most common problems I've seen so far.
Risky information disclosure
Here's a plan: let's advertise to the internet which versions of our web application framework are in use, or web server, or even the underpinning database platform. It's a risky plan, because you're inviting an attacker to Google (other search engines are available) known vulnerabilities in those things and then test for them in your web application. This information is leaked mostly via response headers that aren't properly suppressed.
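The fix amounts to scrubbing those chatty headers before a response leaves your application. A sketch of the idea, using common real-world header names (the `scrub` helper itself is made up for illustration):

```python
# A sketch of scrubbing version-advertising response headers. The
# header names are common real-world offenders; the helper is illustrative.

CHATTY_HEADERS = {"server", "x-powered-by", "x-aspnet-version",
                  "x-aspnetmvc-version"}

def scrub(headers: dict) -> dict:
    """Drop headers that leak server or framework version details."""
    return {name: value for name, value in headers.items()
            if name.lower() not in CHATTY_HEADERS}

print(scrub({"Server": "Microsoft-IIS/8.5", "Content-Type": "text/html"}))
```

In practice you'd do this at the web server or middleware layer, so it applies to every response without developers having to remember it.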
Dishonourable mentions :)
There are plenty of other basic problems I've discovered over my time in this game, including:
- Autocomplete on login or password reset forms, allowing browsers to cache credentials
- Content headers not being set, allowing browser sniffing to happen
- Framing headers not being set, allowing some bad guy to frame your site in theirs :O
- ViewState being used but not encrypted, allowing an attacker to inspect, and potentially tamper with, the inner workings of your software
- Rubbish or absent custom error pages, allowing an attacker to find out stuff about your software you'd rather they didn't
And so on.
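Taking the error-page point from the list above, the principle is simple: swallow the traceback and return something generic, so internals never reach the client. A minimal WSGI-style sketch (the app and its secret are invented for illustration; a real handler would also log the exception):

```python
# A minimal WSGI middleware sketch: return a generic error message
# instead of a stack trace. Real apps would log the exception too.

def generic_errors(app):
    def wrapped(environ, start_response):
        try:
            return app(environ, start_response)
        except Exception:
            start_response("500 Internal Server Error",
                           [("Content-Type", "text/plain")])
            return [b"Something went wrong."]
    return wrapped

def broken_app(environ, start_response):
    # Hypothetical failure that would leak secrets in a raw traceback
    raise RuntimeError("db password is hunter2")

safe_app = generic_errors(broken_app)
```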
It's worth mentioning that these days I also do a lot of third party vendor software security assessment and see all of these problems manifest themselves regularly.
Enough of the problems!
How do you fix these things? Well, happily, most of them are pretty simple to deal with. Headers are reasonably straightforward to implement in your application configuration (if you're a .NET or ASP.NET MVC shop, things like NWebsec even do the security ones for you!). Whatever your technology of choice, properly configuring response headers isn't difficult.
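One way to wire headers in at the application layer is middleware that decorates every response, which is roughly the role NWebsec plays on the .NET side. A WSGI sketch, with header values that are assumptions to adapt per application:

```python
# A sketch of applying security headers via WSGI middleware, so every
# response gets them. Values here are illustrative defaults.

def add_security_headers(app):
    extra = [("X-Content-Type-Options", "nosniff"),
             ("X-Frame-Options", "DENY")]

    def wrapped(environ, start_response):
        def patched_start(status, headers, exc_info=None):
            return start_response(status, headers + extra, exc_info)
        return app(environ, patched_start)
    return wrapped

def hello(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"hello"]

secured = add_security_headers(hello)
```

The same pattern exists in virtually every web stack, which is why "not difficult" is a fair summary.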
With TLS, you'll possibly need an Adrian (see Honourable mentions): someone who can get into your web server and ensure it gets an A+ rating on SSL Labs. It's not that hard at all: turn off support for SSLv3 and TLS versions below 1.2, old ciphers, dated hashing algorithms and so on. As with security headers, Scott Helme knows his TLS onions too and is great at providing advice on both.
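To give a flavour of what "turn off the old stuff" looks like in code, here's how Python's standard `ssl` module pins a minimum protocol version. Your actual fix will usually live in the web server's configuration (IIS, nginx, etc.), which is what SSL Labs grades, but the principle is the same:

```python
# A sketch of enforcing modern TLS with Python's ssl module: refuse
# anything below TLS 1.2, which rules out SSLv3 and TLS 1.0/1.1.
import ssl

ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2

print(ctx.minimum_version)
```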
With out of date and potentially vulnerable third-party components, you need a dialogue between all parties; those architecting solutions, those developing them and those providing the hosting and operation of them, to ensure that everything is kept as up to date as possible. Again, not hard if there's the motivation to, for example, keep individuals' data secure and preserve their privacy.
After all, keeping someone's personal data secure is a responsibility, right? Yes, it is, both morally and, in most countries, legally. I'll discuss PCI-DSS and the EU GDPR another time.
Often, it's hard to get buy in or even a basic appreciation of why security is an important attribute of software quality, because it's historically not been seen as one. Even now, we see an appetite by businesses to deliver features above pretty much everything else. Features, features and more features. "Oh, it's broken, please fix it". That's common. "Oh, it's insecure, please secure it". Nah, you never hear that.
Security isn't (generally) seen as a first-class citizen of the overall package of a good software deliverable, which partly explains why dynamic application security testing tools such as Netsparker exist.
So, you have to raise awareness of the importance of secure software in different ways, with different people:
Developers live to solve problems through software and take much satisfaction in doing so. They mostly don't wilfully introduce problems by doing their jobs, but by not considering security they could be doing just that. My aim is to highlight this to them in practically those terms.
Architects write screenplays by which developers go and act, so by not considering security at design time, they're missing an opportunity to ensure that security is baked in during the development cycle.
Operations provide critical infrastructure and / or networking that allows software to function and be accessible, but often the demand is to provision quickly, rather than safely or responsibly. These people need a clear requirement and the head room to provide the right solution.
People in the business, well, they demand those features and that's OK. They need to also accept that if the emphasis is on expediting the product, then inevitably corners will be cut, compromises will be struck and the risk of their product being the route that an attacker steals information becomes a very real thing.
Managers don't need a deep understanding of these issues, but they need to trust people when they tell them of the importance of doing the right things, even if it introduces justifiable delays into delivery.
No one wants to be Equifax.
Because I know you need sleep.
Web application vulnerabilities don't begin and end with injection, where suddenly your data is leaked and/or gone (although yes, it's the OWASP Top 10 kingpin for a reason).
The message I'm trying to get over in this post is that there are some very common problems other than injection out there that could lead to some pretty disastrous outcomes, and in fact most of them are easy to fix.
The three key obstacles I've found are:
- Gaining appreciation of the importance of web application security from those that design or develop software, or provide supporting infrastructure and / or networking
- Describing the common problems I talk about above and explaining how to solve them in ways that are palatable and make sense
- Getting commitment from the business and managers to make things better and actually implement improvements
I'm happy to report that while there's a way to go, I'm having significant success on all fronts.
Thanks again for your time.