Bug Bounties and Black Swans: How Heroku Expects the Unexpectable

There’s obviously more to security than humans, technology, and vendors, with all of their implementations and expertise. At Heroku, we believe that security is a byproduct of excellence in engineering.

All too often, software is written solely with the happy path in mind, and the security assurances of that software rest on their own dangerous assumptions. A mature security program should challenge the assumptions behind its security controls, move to continuous testing, and prepare for the unexpectable.

This means asking hard questions about the bigger picture: thinking beyond the development lifecycle, and stepping back from a fixation on confirming that corrections and remediations were effective. It means taking the time to imagine, and to discover scenarios that account for the unknown and the unknowable.

Let’s explore a few concepts that prepared Heroku for the unknown last year, and the bug bounty researcher from Bugcrowd who helped us detect an issue and implement a mitigation well before a patch was released. This is ultimately a story of harmony between threat modeling and partnership with the bounty-based research community.

On Threat Modeling

Let’s start with some important threat modeling concepts. Last year I had the privilege of teaching a threat modeling class at Kiwicon with my good friend Mark Piper of Insomnia Security, in which we explored the thought processes necessary to create a threat model that addresses unknown gaps and issues. Three concepts are imperative for future-proofing against vulnerabilities: challenging your assumptions, remembering that attackers have budgets and bosses, and expecting the unexpected.

Challenge Your Assumptions

When designing and engineering software, certain security assumptions are made regardless of what you are building. Whether that’s isolation of processes, segregation of roles, the basics of authentication, or the use of well-known encryption protocols, each of these provides its own form of security.

However, every “safe” security decision can also become an attack vector. To challenge your assumptions about what is safe by default, perform automated testing for misconfigurations, confirm that the design is adequate, seek external third-party assurance through security testing, and plan for failures.
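To make that concrete, here is a minimal sketch of one such automated assumption check; it is not Heroku tooling, and the hostname and port below are placeholders. It verifies that a service we assume is safely configured really does refuse to negotiate legacy TLS versions.

```python
# A hypothetical assumption check: assert that an endpoint only negotiates
# modern TLS versions. Run it on a schedule and alert on failure.
import socket
import ssl

HOST = "example.com"  # placeholder: substitute the service under test
PORT = 443


def negotiated_tls_version(host: str, port: int) -> str:
    """Connect with default client settings and return the negotiated TLS version."""
    context = ssl.create_default_context()
    # The assumption under test: the server never negotiates below TLS 1.2.
    context.minimum_version = ssl.TLSVersion.TLSv1_2
    with socket.create_connection((host, port), timeout=5) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            return tls.version()


if __name__ == "__main__":
    version = negotiated_tls_version(HOST, PORT)
    assert version in ("TLSv1.2", "TLSv1.3"), f"unexpected protocol: {version}"
    print(f"OK: negotiated {version}")
```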

Attackers Have Budgets and Bosses

Gone are the days when most attackers fit the “curious hacker” category. Today, almost all attackers are financially motivated, regardless of the color of their hats. From the malicious (organised crime groups, nation states, corporate espionage) to the helpful (security researchers working with bug bounty programs, or trying to make a name for themselves in the industry), you should assume that they mean business.

Think about it like this: your attackers likely are as well-equipped as you are. They have bosses, budgets, roadmaps, and sometimes even project schedules, code repositories, and documented playbooks.

Expect the Unexpected

You may have performed some form of security review, created a threat model, and had testing performed to gauge the security of the software you are writing and implementing. You may even have found vulnerabilities you had not anticipated, which you have now remediated. There may be a digital pot of gold at the end of that rainbow.

But what about the things you can’t actually plan for? What happens if there is a flaw in the encryption protocol you’ve chosen to use? How do you handle the release of a new kernel vulnerability? How do you handle an issue known only to an attacker who has just discovered it and decided you are a worthy target for its use?

The Black Swan

There is a concept known as a black swan event, in which no amount of planning or preparation can predict the unexpected. The history behind this concept is fascinating. The expression originated in 2nd-century Rome, under the assumption that black swans did not exist, and by 16th-century London it was commonly used to describe an unlikely or impossible occurrence. Then, in 1697, a Dutch captain sailed to Australia and encountered a black swan for the first time, establishing the irony that the phrase now captures.

In short, a black swan event is an event that has low predictability, is perceived as having nearly zero likelihood, and yet has high consequences. In the information security realm, a prime example would be the use of a 0-day exploit. Since only the person or team that discovered it knows about it, there seems to be no way to accurately predict when, or against what, such an exploit may be utilised. As a result, mitigating a black swan event means planning for resiliency, so that you are ready when the unexpected (inevitably) arises.

A Black Swan Story

One morning, an alert fired in the form of a direct page to the Data team here at Heroku. They have built instrumentation into the control plane for the Heroku Postgres servers to detect any form of privilege escalation to a superuser role. (Note: we’ve stated before that security starts with excellence in engineering -- this is a perfect example. This kind of virtuosity in managing the data fleet was built by the engineering team of their own volition.)
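As an illustration of the kind of detection described above, the sketch below compares the superusers on a Postgres server against an expected allowlist and raises an alert when anything unexpected appears. The role allowlist, connection string, and paging behaviour are all hypothetical; this is not Heroku’s actual control-plane instrumentation.

```python
# A hypothetical superuser-escalation detector, not Heroku's implementation:
# query pg_roles for superusers and alert on anything outside the allowlist.
import psycopg2  # assumed to be available in the monitoring environment

EXPECTED_SUPERUSERS = {"postgres"}  # hypothetical allowlist


def unexpected_superusers(dsn: str) -> set:
    """Return every superuser role that is not on the allowlist."""
    with psycopg2.connect(dsn) as conn:
        with conn.cursor() as cur:
            cur.execute("SELECT rolname FROM pg_roles WHERE rolsuper")
            actual = {row[0] for row in cur.fetchall()}
    return actual - EXPECTED_SUPERUSERS


def check_and_page(dsn: str) -> None:
    """Page the on-call team (here, just raise) if an unexpected superuser exists."""
    rogue = unexpected_superusers(dsn)
    if rogue:
        raise RuntimeError(f"possible privilege escalation, unexpected superusers: {rogue}")


if __name__ == "__main__":
    check_and_page("postgresql://monitor@db.internal:5432/postgres")  # placeholder DSN
```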

Following protocol, the Data team looped in our security team, isolated the instance within 17 minutes of the privilege escalation, locked down the user, and aided us in investigating what else the user may have been doing.

In investigating the issue, we determined that the user was very likely a security researcher rather than a malicious actor. Via the email address he had used to sign up for his Heroku account, we were able to contact him and ask whether he had been performing some action that might have resulted in the elevation of privilege we had seen.

The researcher, Andrew Krasichkov, aka buglloc, replied quite quickly. He confirmed that he had indeed been performing research utilising our platform that resulted in this escalation, in the form of a Postgres vulnerability in the dblink and postgres_fdw extensions, and he was pleased, and a little shocked, at the speed with which we had been able to identify and respond to this type of attack. As a participant in our bug bounty program through Bugcrowd, Andrew was of course rewarded, and we continue to work with him.

Bug Bounties

Because we encourage security research against our platform, and because we were able to detect and respond in this manner, we could work directly with Andrew to understand his finding. In that way, we were able to proactively defend against this previously unknown flaw even before a patch was available.
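For illustration only, one common pre-patch hardening pattern for extension-related escalations is to restrict who may call the extension’s connection functions until a proper fix ships. The sketch below shows that general shape; it is not a description of the mitigation Heroku actually deployed, and the statements and connection string are assumptions.

```python
# A hypothetical pre-patch hardening script; the exact statements needed would
# depend on the vulnerability, and these are illustrative examples only.
import psycopg2

HARDENING_STATEMENTS = [
    # Stop ordinary users from opening arbitrary dblink connections.
    "REVOKE EXECUTE ON FUNCTION dblink_connect(text) FROM PUBLIC",
    "REVOKE EXECUTE ON FUNCTION dblink_connect(text, text) FROM PUBLIC",
    # Limit foreign data wrapper usage to roles that genuinely need it.
    "REVOKE USAGE ON FOREIGN DATA WRAPPER postgres_fdw FROM PUBLIC",
]


def apply_hardening(dsn: str) -> None:
    """Apply the temporary restrictions to a single database."""
    with psycopg2.connect(dsn) as conn:
        with conn.cursor() as cur:
            for statement in HARDENING_STATEMENTS:
                cur.execute(statement)


if __name__ == "__main__":
    apply_hardening("postgresql://admin@db.internal:5432/postgres")  # placeholder DSN
```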

Having this kind of relationship with researchers is exactly why we love bug bounties. It’s also a way into our industry for many new researchers: if they show the drive and desire, as well as the basic knowledge required to keep digging, we work with them and reward their discovery of high-quality issues brought to our attention in a responsible way. With a bug bounty, they get to be their own boss, and they can keep doing this work and improving at it. It’s a fantastic supplement to the work we perform ourselves and to our third-party security assessments.

Without this program, we’d have much more work to do in challenging our assumptions safely; in this case, the assumption that a user won’t be able to escalate to a superuser role. Instead, this black swan event was one we were able to deal with quickly and effectively, and it’s a huge part of why we love the research community we interface with through Bugcrowd.

Originally published: March 26, 2019
