Friday, February 2, 2018

Mental Models & Security: Thinking Like a Hacker

In the world of information security, people are often told to “think like a hacker,” which inevitably reminds me of Sylvester Stallone muttering his line in Demolition Man -- “Send a maniac to catch a maniac”.
While such words of wisdom work great in the movies, they aren’t much help to anyone trying to understand what “thinking like a hacker” actually means.
If you define a hacker too narrowly (e.g. someone who only breaks web applications), you end up with a counterproductive way of thinking and of conducting business.
A little knowledge is a dangerous thing, not least because isolated facts don’t stand on their own very well. As legendary investor Charlie Munger once said:
“Well, the first rule is that you can't really know anything if you just remember isolated facts and try and bang 'em back. If the facts don't hang together on a latticework of theory, you don't have them in a usable form.
You've got to have models in your head. And you've got to array your experience both vicarious and direct on this latticework of models. You may have noticed students who just try to remember and pound back what is remembered. Well, they fail in school and in life. You've got to hang experience on a latticework of models in your head.
What are the models? Well, the first rule is that you've got to have multiple models because if you just have one or two that you're using, the nature of human psychology is such that you'll torture reality so that it fits your models, or at least you'll think it does. …”
For security pros, it’s worth bearing this in mind. Multiple mental models from different disciplines are needed to make good and informed decisions.
When we look at the thought process of a (competent) security professional, it encompasses many mental models. These don’t relate exclusively to hacking or wider technology, but instead cover principles that have broader applications.
Let’s look at some general mental models and their security applications:

1. Inversion


Difficult problems are best solved by working them backwards. Researchers are great at inverting systems and technologies to demonstrate exactly what the system’s architect would rather have avoided. In other words, it isn’t enough to think about all the things that can be done to secure a system; you also have to think about all the things that would leave it insecure.
From a defensive point of view, it means not just thinking about how to achieve success, but also how failure would be managed.

2. Confirmation Bias

What someone wishes, they also believe. We see confirmation bias deeply rooted in applications, systems, and even entire businesses.
It means that two people with opposing views on a topic can see the same evidence and come away feeling validated by it. It’s why two auditors can assess the same system and arrive at vastly different conclusions as to its adequacy.
Confirmation bias is extremely dangerous from a defender’s perspective: it clouds judgement, and hackers take advantage of that all the time.
People often fall for phishing emails because they believe they are too clever to fall for one, or too insignificant to be targeted. Reality only sets in once it’s too late.

3. Circle of Competence

Most people have a thing they’re really, truly good at. But if you test them in something outside of this area, you’ll find they’re not particularly well-rounded. Worse, they may be ignorant of their own ignorance -- you probably know this as the Dunning-Kruger effect.
When we examine security as a discipline, we realize it’s not a single monolithic thing. It consists of countless areas of competence. A social engineer has a specific skillset that differs from a researcher with expertise in remotely gaining access to SCADA systems.
The number of tools in a tool belt isn’t what matters. What’s far more important is knowing the boundaries of one’s circle of competence.
Managers building security teams should evaluate each individual’s circle of competence and map out the combined circle for the department. Doing so also helps identify the gaps that need to be filled.

4. Occam’s Razor

Occam’s razor, also known as the ‘law of parsimony,’ can be summarised as:
“Among competing hypotheses, the one with the fewest assumptions should be selected.”
In other words, it’s a principle of simplicity that is relevant to security on many levels. Often hackers will use simple, tried-and-tested methods to compromise a company or its systems. The infected USB drive in the parking lot. The perfectly crafted spear-phishing email that purports to be from the Finance Department.
While there are undoubtedly complex and advanced attack avenues, they are unlikely to be used against the majority of companies.
By using Occam’s razor, attackers can often compromise targets faster and at a lower cost.
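A rough back-of-the-envelope illustration of why fewer assumptions win (the 80% figure is an arbitrary assumption, chosen only to make the arithmetic concrete): if each independent assumption in an attack plan holds with probability 0.8, a plan resting on two assumptions succeeds about 64% of the time (0.8^2 ≈ 0.64), while one resting on six succeeds only about 26% of the time (0.8^6 ≈ 0.26). Each extra assumption is another way to fail.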
The same principle can be applied when securing organisations. It is also worth bearing in mind the maxim often attributed to Einstein: everything “should be made as simple as possible, but no simpler.”

5. Second-Order Thinking

Second-order thinking means to consider that effects have effects. In other words, it forces you to think long-term when considering what action to take.
The question to ask yourself is, “if I do X, what will happen after that?”
It’s easy in the security world to give first-order advice. For example, keeping up to date with security patches is generally good advice. But without second-order thinking, it can lead to poor decisions with knock-on consequences. So, it’s vital that security professionals consider all the implications before executing. For example: what impact will patching or upgrading the OS on machine X have on downstream systems?
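To make that question concrete, here is a minimal sketch in Python. It is purely illustrative: every system name and dependency in it is invented, and a real environment would pull this information from a CMDB or service inventory rather than a hard-coded dictionary.

```python
from collections import deque

# Hypothetical map of "system -> systems that depend on it directly".
# In practice this would come from a CMDB or service inventory.
dependents = {
    "machine-x": ["payroll-db", "auth-service"],
    "auth-service": ["vpn-gateway", "intranet"],
    "payroll-db": ["hr-portal"],
}

def downstream_impact(host):
    """Breadth-first walk: everything transitively affected by changing `host`."""
    seen, queue = set(), deque(dependents.get(host, []))
    while queue:
        system = queue.popleft()
        if system not in seen:
            seen.add(system)
            queue.extend(dependents.get(system, []))
    return sorted(seen)

print(downstream_impact("machine-x"))
# -> ['auth-service', 'hr-portal', 'intranet', 'payroll-db', 'vpn-gateway']
```

The point isn’t the code, it’s the habit: enumerate what sits downstream before deciding whether the first-order action is safe.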

6. Thought Experiments


A technique popularised by Einstein, the thought experiment is a way to logically carry out a test in one’s own head that would be very difficult or impossible to perform in real life.
In security, this usually takes the form of ‘table-top’ exercises or risk-modelling sessions. It can be extremely effective when used in conjunction with other mental models.
The purpose of a thought experiment isn’t necessarily to reach a definitive conclusion, but to encourage challenging ideas, invite speculation, and push people outside their comfort zones.

7. Probabilistic Thinking (Bayesian Updating)

The world is dominated by probabilistic outcomes, as distinguished from deterministic ones. Although we cannot predict the future with great certainty, we subconsciously make decisions based on probabilities all the time.
For example, when crossing the road, we believe there’s a low risk of being hit by a car. Though the risk does exist, you’ve looked for traffic, and are confident you can cross safely.
The Bayesian method is a way of thinking in which one considers all relevant prior probabilities and then incrementally updates them as new information arrives.
This method is especially productive given the fundamentally non-deterministic world we experience: we must combine prior odds with new information to arrive at our best decisions.
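To make the updating step concrete, here is a minimal sketch in Python of a defender revising their belief that a host is compromised as evidence arrives. Everything in it is hypothetical: the 1% prior, the signal names, and the true/false-positive rates are made-up numbers, and the signals are treated as independent purely for simplicity.

```python
def bayes_update(prior, p_evidence_given_true, p_evidence_given_false):
    """Return the posterior probability after observing one piece of evidence.

    prior: current belief that the hypothesis ("host is compromised") is true
    p_evidence_given_true: chance of seeing this evidence if the hypothesis is true
    p_evidence_given_false: chance of seeing it if the hypothesis is false
    """
    numerator = p_evidence_given_true * prior
    denominator = numerator + p_evidence_given_false * (1 - prior)
    return numerator / denominator

# Hypothetical starting point: a 1% base rate of compromise across the fleet.
belief = 0.01

# Each tuple is (P(signal | compromised), P(signal | clean)) -- made-up rates.
signals = [
    (0.90, 0.10),  # antivirus alert fires
    (0.70, 0.05),  # outbound traffic to a known-bad IP
    (0.60, 0.20),  # logins at unusual hours
]

for p_true, p_false in signals:
    belief = bayes_update(belief, p_true, p_false)
    print(f"updated belief that the host is compromised: {belief:.3f}")
```

With these made-up numbers, the belief climbs from 1% to roughly 8%, then 56%, then 79% as each signal arrives: exactly the incremental updating described above, where no single alert is damning but the combination is.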

Conclusion

Thinking is a skill. And it’s one that we develop as technologists. The first thing first-year computer science students are taught is the importance of breaking broad tasks into small steps. Newbie security professionals are taught the importance of curiosity, of paranoia, and of looking for the mistakes made by others.
While there may not be a simple answer to what it means to ‘think like a hacker’, the use of mental models to build frameworks of thought can help to avoid the pitfalls associated with approaching every problem from the same angle.
I’ve listed seven different mental models here, some of which you may already be familiar with, and others you may want to try out. Please share your favourite security and hacking mental models and problem-solving techniques.
