What is zero trust and how can it protect your business
Over the last few years, “what is zero trust, and why does it matter for businesses?” has become a hot topic of discussion. This is largely because zero trust proposes not just small changes, but a complete overhaul of traditional ways of thinking about cyber security.
What is zero trust security?
Zero trust is a concept rather than a single action performed by a business. It is an approach designed to address technology conventions that have dated poorly and are no longer suitable for handling modern threats.
Here are some things that zero trust is and is not.
- Zero trust can be thought of as an architectural posture
- It is a new set of principles that can only be met through modernisation and adaptation
- A zero trust journey typically needs to start with security fundamentals and basic security ‘hygiene’
The aim of zero trust is to weed out anti-practices, such as implicitly trusting an internal network. Simultaneously, it aims to bring granular security controls to networks and other entities that have historically offered pervasive, indefinite access once some initial requirements have been satisfied.
Although zero trust solutions are ubiquitous, no single one will solve all of the challenges faced by today’s businesses. Nevertheless, some of the technologies your business uses, or is considering subscribing to, may have a role to play in moving your organisation towards a zero trust position.
Purchasing new technologies will not make an organisation zero trust. But it is often impossible to reach a desired zero trust state without investing in modern technologies.
The critical role of cyber security basics
It is true that the topic of zero trust has been central to defensive thinking, and every vendor is pushing organisations to undertake zero trust work or procurement. However, the reality is that most organisations have not been ready to seriously consider zero trust adoption, or whether their methods of managing permissions are suitable. And there are several reasons – and solutions – for this.
Many businesses still struggle with basic security hygiene, and most have wide-open internal networks. Addressing these gaps is among the most basic tenets of zero trust thinking. Yet there is typically a chasm between IT operational objectives, or service contracts, and where we need to arrive on a zero trust journey. That said, the new norm of hybrid working does seem to have motivated change towards a zero trust position at a rate that would have been difficult to forecast previously.
How to implement zero trust
Historically, the most common way of managing network security was to invest in perimeter defences. This is the firewall, which controls inbound and outbound network traffic at a fairly coarse level.
It is the reverse proxy, which adds authentication strength when users outside your network access web applications hosted on your network. It is the forward proxy, which blocks explicitly defined sites, or restricts access to known good sites only, helping to prevent drive-by malware attacks. It is VDI, VPNs, and more. Typically, all the eggs went in this basket: we made it hard to get inside, but if you did get in, there probably wasn’t much standing in the way of establishing a foothold.
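To make the contrast concrete, here is a minimal sketch of that perimeter-centric model, in which network location is the main signal. The names, address ranges and port set are illustrative assumptions, not taken from any particular product.

```python
from ipaddress import ip_address, ip_network

# Illustrative assumption: one flat internal range that is implicitly trusted.
INTERNAL_NETWORK = ip_network("10.0.0.0/8")

def perimeter_allows(source_ip: str, port: int) -> bool:
    """Coarse perimeter logic: traffic from inside is implicitly trusted;
    external traffic is only allowed to a few published ports."""
    if ip_address(source_ip) in INTERNAL_NETWORK:
        return True  # implicit trust: no identity or device checks at all
    return port in {443}  # e.g. the reverse proxy's published HTTPS endpoint

print(perimeter_allows("10.1.2.3", 3389))    # True: inside, so anything goes
print(perimeter_allows("203.0.113.7", 443))  # True: published service only
```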
However, a few things have changed that now make these defences inadequate in isolation. They can still be important layers in a defence in depth strategy. But here are some of the reasons we now know they aren’t enough on their own.
Ever-evolving cyber criminal tactics
Phishing attacks expose the fundamental problem with putting too much faith in perimeter defences. Email is so widely used for communication with external parties that it is impractical to consider restricting who is allowed to send messages to employees. Phishing attacks can be very clever and have a high success rate. It only takes one successful attack in a campaign to welcome the attacker inside the organisation.
By allowing the message into the network, we have already destroyed the premise that the perimeter was fully trustworthy. And while phishing defences have certainly improved considerably over time, this is quite far from being a solved problem.
Once an attacker can persuade a user to install something, or sign on to a disguised logon page, we need to start thinking about what that attacker might do from that initial foothold. Where we previously trusted the perimeter, and trusted a device inside the network, we now need to be more suspicious.
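By way of contrast, here is a hedged sketch of how a zero trust decision might weigh signals about the user, the device and the resource on every request, rather than trusting network location. The field names and policy rules are illustrative assumptions only.

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user_authenticated: bool
    mfa_satisfied: bool
    device_managed: bool
    device_compliant: bool     # e.g. patched, encrypted, policies verified
    resource_sensitivity: str  # "low" or "high"

def zero_trust_allows(req: AccessRequest) -> bool:
    """Evaluate every request on its own signals, wherever it originates."""
    if not (req.user_authenticated and req.mfa_satisfied):
        return False
    if req.resource_sensitivity == "high":
        # Sensitive resources additionally require a healthy, managed device.
        return req.device_managed and req.device_compliant
    return True

# A phished session on a non-compliant internal device no longer gets a free pass:
print(zero_trust_allows(AccessRequest(True, True, True, False, "high")))  # False
```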
Resources are often off-premises
Many (if not most) resources have moved outside the perimeter. Although it is possible to require users to sign on via your premises before accessing something outside the perimeter, this approach has significant operational disadvantages. It is also true that there are typically exceptions to this on-premises sign-on requirement which represent an open risk. For instance, it is typically possible to create accounts within cloud services that will bypass at least some of these controls.
Users are a better target than technology
As defences have evolved, attackers have increasingly gravitated towards social engineering as a viable means of stealing credentials and attacking users rather than technology. To illustrate, multi-factor authentication (MFA) is increasingly becoming the norm, but attackers know they can fatigue many users into approving MFA app notifications if they are persistent enough. Users are ultimately in charge of their own privileges, and these are often enough for an attacker to move to a more damaging position.
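One common mitigation for this fatigue tactic is number matching: the sign-in page shows a short code that the user must enter into the authenticator app, so a push cannot be approved blindly. The sketch below is a simplified illustration of the idea, not any vendor’s implementation.

```python
import secrets

def start_push_challenge() -> str:
    # Two-digit code displayed only on the sign-in page that initiated the push.
    return f"{secrets.randbelow(100):02d}"

def verify_push_response(challenge: str, entered: str) -> bool:
    # An attacker spamming prompts cannot supply the code, because it appears
    # only in the sign-in session they do not control.
    return secrets.compare_digest(challenge, entered)

challenge = start_push_challenge()
print(verify_push_response(challenge, challenge))  # True only with the real code
```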
The benefits of re-thinking practices
All of this is to say that trusting the internal network can no longer be seen as a good strategy. We also see lots of other common practices that introduce significant, open risks.
Allowing unmanaged devices (typically BYOD) to access cloud resources or internal networks was once common, and may still be allowed as an open risk. Although mobile application management provides a good compromise on mobile platforms, we don’t yet have good solutions for BYOD on Windows, macOS and Linux. Despite this gap, many organisations allow this access, and those devices could be in any state (infected, vulnerable, outdated or fully compromised).
On fully managed devices we often find that a basic level of control is intended, but there is little in place to verify the state of the device, particularly where users have privileges to install their own applications.
This may result in bad drivers (a particularly timely concern, with some recent WHQL issues coming to light), malicious apps or vulnerable apps. We also know that for many organisations using group policy to manage desktop configuration, there isn’t a mechanism to verify that policies have applied successfully. Or perhaps they were applied successfully, and the user has since removed some of the controls.
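As an illustration, a device health check might compare the policies and drivers actually present against an expected baseline, rather than assuming policy application succeeded. The policy names and driver blocklist below are hypothetical placeholders.

```python
# Illustrative baseline; real deployments would attest far more state.
EXPECTED_POLICIES = {"disk-encryption", "screen-lock", "edr-agent"}
BLOCKED_DRIVERS = {"vulnerable_driver.sys"}  # stand-in for a known-bad driver

def device_is_healthy(applied_policies: set[str], loaded_drivers: set[str]) -> bool:
    """Verify state rather than trusting that configuration was applied."""
    missing = EXPECTED_POLICIES - applied_policies
    bad = BLOCKED_DRIVERS & loaded_drivers
    if missing or bad:
        # Report the drift; access decisions can key off this signal.
        print(f"missing policies: {missing}, blocked drivers: {bad}")
        return False
    return True

print(device_is_healthy({"disk-encryption"}, {"vulnerable_driver.sys"}))  # False
```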
We know that many organisations continue to enforce dated password requirements, such as aggressive rotation policies, despite an abundance of evidence that this encourages password anti-practices: post-it notes, reusing the same password on a personal account, and so on. This still happens regularly, yet organisations remain attached to the outdated approach.
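The modern alternative, reflected in guidance such as NIST SP 800-63B, is to screen passwords against known-breached lists and rotate only on evidence of compromise. A minimal sketch, with an illustrative stand-in for a real breached-password corpus:

```python
# Illustrative stand-in; in practice this would be a large breached-password set.
BREACHED_PASSWORDS = {"P@ssw0rd!", "Summer2024!"}

def password_acceptable(candidate: str) -> bool:
    """Favour length and breach screening over composition rules."""
    return len(candidate) >= 12 and candidate not in BREACHED_PASSWORDS

def must_rotate(known_compromised: bool) -> bool:
    # Rotation is event-driven, not calendar-driven.
    return known_compromised

print(password_acceptable("Summer2024!"))  # False: known-breached
print(must_rotate(False))                  # False: no forced expiry
```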
There are many other known risks that organisations fail to mitigate effectively, such as:
- Hash or token theft: detection technologies or techniques are not in place to spot these abuses (a detection sketch follows this list)
- Direct memory access attacks against FireWire or Thunderbolt ports: mitigations are not implemented
- Untrusted wireless or wired network access, such as open Wi-Fi, untrusted Wi-Fi, captive portal interception and network deception: these risks are not addressed because the cost to business agility is seen to be too high
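As promised above, here is a simplified sketch of one token-theft detection technique: flagging a session token that reappears with a different client fingerprint than the one it was issued to. Real detections would combine many more signals; everything here is an illustrative assumption.

```python
# Map each issued token to the client fingerprint it was bound to at issuance.
issued_tokens: dict[str, str] = {}

def issue(token: str, fingerprint: str) -> None:
    issued_tokens[token] = fingerprint

def looks_stolen(token: str, fingerprint: str) -> bool:
    """Flag a known token presented from an unexpected client context."""
    original = issued_tokens.get(token)
    return original is not None and original != fingerprint

issue("tok123", "deviceA/203.0.113.7")
print(looks_stolen("tok123", "deviceB/198.51.100.9"))  # True: possible replay
```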
Open wounds like these are well understood and frequently exploited by attackers. In many cases the risk is accepted without any visibility of the actual exposure. These and other common practices make the case for a strategic shift to a posture that can address many of these risks at once.