In my previous blog post on zero trust, I reviewed some reasons why common practices leave businesses vulnerable to the tactics of cyber criminals. Much of this reduces to how we place trust today. We:
Trust the network because it’s only accessible after passing through physical or perimeter access controls, even though we know these rarely prevent a determined attacker from finding their way on-premises.
Trust the domain because it authenticates us initially, even though we know that theft of domain authentication hashes and tokens is trivial once a user is compromised.
Trust the MFA provider to save the day by strengthening password-based authentication, while forgetting that after authentication we are nearly always left with a token that can be stolen.
Trust the configuration management technology to deploy up-to-date software and configuration policies, even though we know that, if this infrastructure is compromised, an attacker can deploy anything they like to anything it can reach. We even have an agent installed everywhere, which makes the attack both easier to mount and harder to detect!
Trust the software vendor to vet the security of their product before signing it, and to ensure their code has not been tampered with before the customer uses it.
Trust the security vendor to deploy malware definitions, even though we know many endpoint security technologies have introduced their own weaknesses.
Trust the administrator to perform their duties and not abuse the privileged access they have been granted.
Trust the helpdesk not to relax a hardened security posture in the interest of resolving a ticket.
Trust the developer to follow good practices, to only deploy what they should, to build on trustworthy dependencies, and to use their privileges only within the intended remit.
Trust the power user to grant the right level of access to the right people when they manage their own data, even though we know this power user will rarely revoke access of their own volition.
Trust the user not to install unsanctioned or malicious software, not to walk away with corporate data as they leave the organisation, and not to accidentally lose that data on paper, laptops or phones.
Why do you need zero trust?
The list goes on. In some of these examples, your first thought might be that your business already has controls in place to address those risks. These would be examples of drawing explicit boundaries around what a trusted user is allowed to do, or perhaps of a “least privilege” configuration, where an entity has only the permissions it strictly requires to fulfil its role in a given context.
This is a critical facet of how we contain risk and conceive of trust. But ultimately, we still rely on trust within the remaining, smaller scope of allowed privileges.
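To make “least privilege” concrete, here is a minimal sketch in Python. The role and permission names are invented for illustration rather than taken from any particular product: each entity is granted only the actions it needs, and everything else is denied by default.

```python
# A minimal sketch of least-privilege authorisation: each role is granted
# only the actions it strictly needs, and anything not explicitly granted
# is denied. Role and action names are illustrative only.

ROLE_PERMISSIONS = {
    "helpdesk":  {"reset_password", "read_ticket"},
    "developer": {"deploy_to_test", "read_logs"},
    "admin":     {"deploy_to_production", "manage_users"},
}

def is_allowed(role: str, action: str) -> bool:
    """Deny by default; allow only actions explicitly granted to the role."""
    return action in ROLE_PERMISSIONS.get(role, set())

assert is_allowed("helpdesk", "reset_password")
assert not is_allowed("helpdesk", "deploy_to_production")
```

Even with a model like this in place, we are still trusting the helpdesk role with password resets; the boundary is smaller, but the trust has not disappeared.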
Security has always been founded on trust. You purchase a lock for your door, and you trust that every lock of this type is keyed differently. Or you trust that the locksmith won’t return and let themselves in while you’re away.
You hire a security guard and you trust that your vetting process is strong enough that they won’t be bribed. But even in this world of physical security, we can see that trust is insufficient in isolation.
We need layers of defences, resilience and capabilities that can handle an attacker’s techniques. Consider how many layers of defence and offence play out in a heist film. The same is happening in technology today.
How the zero trust model moves beyond trust
A zero trust model does not invalidate trust as a security concept. It builds upon the concept of a trusted entity (who or what should be allowed to do what) with new verification requirements.
It seeks to weed out implicit trust, such as the promiscuous access that is common inside corporate networks today. Instead, we state explicitly that these users and devices may access these servers, while those other users and devices may not. This is the concept of network segmentation.
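As a rough sketch of what that allow-listing looks like, the Python below models segmentation as a default-deny rule set. The zone names and flows are invented for illustration; a real environment would enforce this in firewall or micro-segmentation policy rather than in application code.

```python
# A minimal sketch of segmentation as an explicit allow-list: traffic between
# zones is denied unless a rule permits it. Zone names, ports and rules are
# illustrative only.

ALLOWED_FLOWS = {
    # (source zone, destination zone, destination port)
    ("finance-workstations", "finance-app-servers", 443),
    ("developer-workstations", "build-servers", 22),
}

def flow_permitted(src_zone: str, dst_zone: str, dst_port: int) -> bool:
    """Default deny: only flows that appear in the allow-list may pass."""
    return (src_zone, dst_zone, dst_port) in ALLOWED_FLOWS

print(flow_permitted("finance-workstations", "finance-app-servers", 443))  # True
print(flow_permitted("guest-wifi", "finance-app-servers", 443))            # False
```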
Many of the approaches put forward by a zero trust architecture are not new. Network segmentation is a fundamental concern rather than a new one. What must change is the degree to which it is taken seriously, and the granularity at which it is applied (what we now call “micro-segmentation”).
A large part of a zero trust architecture re-emphasises fundamentals like segmentation and least-privilege access, which have received less attention as organisations have drifted towards buying security rather than designing for it.
The other major part of zero trust architecture involves technology modernisation, which can allow us to establish control and verification in ways that have been difficult in the past.
For example, most authentication approaches involve some form of proving identity to an authority, which then returns a token that can be presented to services that also trust that authority. This is how Kerberos, SAML and OAuth work.
In all these cases there is a concern about how long that token lives. Although OAuth 2.0 and OpenID Connect have eased some of these concerns, there is still a window during which a stolen token can be used.
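To illustrate why that window exists, here is a simplified Python sketch of how a bearer token’s lifetime is typically checked. The token is constructed locally and signature verification is omitted for brevity; the point is that any bearer of an unexpired token is accepted until the expiry claim passes, which is exactly the window a stolen token exploits.

```python
import base64, json, time

# A simplified sketch of the validity window in bearer-token schemes: until
# the "exp" claim passes, the service accepts the token from whoever presents
# it. The token below is built locally for illustration; signature checks are
# omitted.

def make_demo_token(lifetime_seconds: int) -> str:
    payload = {"sub": "alice", "exp": int(time.time()) + lifetime_seconds}
    return base64.urlsafe_b64encode(json.dumps(payload).encode()).decode()

def token_is_accepted(token: str) -> bool:
    payload = json.loads(base64.urlsafe_b64decode(token))
    return payload["exp"] > time.time()  # the only check: has it expired yet?

token = make_demo_token(lifetime_seconds=3600)
print(token_is_accepted(token))  # True for the next hour, for any bearer
```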
With some new technologies, we can verify the token and the context around the token. For instance, with the forthcoming Continuous Access Evaluation capability for Azure AD Conditional Access, we can adapt in near real time to changes in location, employment status, or the binding between a token and the device where it was issued. This may as well be called “continuous verification”. We now aspire to this level of verification where possible.
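By contrast, continuous verification re-checks the context on every request, not just at the moment the token was issued. The sketch below is hypothetical and does not reflect the actual Azure AD Continuous Access Evaluation API; the signal names are invented to show the idea.

```python
from dataclasses import dataclass

# A hypothetical sketch of continuous verification: instead of trusting a
# token until it expires, every request re-evaluates current signals about
# the user and the device. Signal names and rules are invented for
# illustration and are not any vendor's API.

@dataclass
class RequestContext:
    user_still_employed: bool
    location_is_expected: bool
    device_matches_token_binding: bool

def request_allowed(token_is_valid: bool, ctx: RequestContext) -> bool:
    """Re-check both the token and the live context on every request."""
    return (
        token_is_valid
        and ctx.user_still_employed
        and ctx.location_is_expected
        and ctx.device_matches_token_binding
    )

# A stolen token presented from an unexpected device is rejected immediately,
# even though the token itself has not yet expired.
print(request_allowed(True, RequestContext(True, False, False)))  # False
```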
Join the zero trust conversation
Get your questions about zero trust and other cyber security issues answered by our specialists – get in touch now to start the conversation.