I’ve often noticed that it can be difficult to find information about PKI solutions, and what makes them secure, all in one place.
That’s why I’ve decided to create a PKI resource myself! This ongoing series will outline the elements that make up a secure PKI. This week I’m talking about physical access.
How long is a piece of string? Hmm, difficult if not impossible to answer.
The same can be said about how far we take the physical security of our servers in a datacentre. The simple answer is to go as far as the technical limits allow, without making the management and administration of those servers overly cumbersome when maintenance is required in the future. This becomes more complex when dealing with virtual servers: how do we physically restrict access to those machines?
Let’s take a look at the root CA, for example, which is an important component within the environment – it would be devastating if this were to be compromised!
Microsoft best practice tells us we should be using at least a two- or three-tier PKI infrastructure, which gives us the ability to turn off and isolate the root CA from attacks; because if it’s off you can’t compromise it, right? But what about the deallocated VM and its disks, backups, private keys and so on…?
If we were to deploy the root CA on a physical server, we could switch off the computer, remove the cabling and lock it away in a cage that only authorised personnel have access to. We could keep any backups separate from everything else and in secure physical locations. Oh, and require biometric entry to those resources.
How Much is Too Much?
It would be fair to assume that most of us don’t deploy a modern server application infrastructure on physical computers nowadays, except where absolutely required, and will instead deploy within a virtualised environment. So, if we had the luxury of a dedicated host, we could shut that down and isolate it, then back up the disks and store those securely as well. Hmm… that seems like overkill for most organisations!
As with the two- and three-tier deployment models, it seems more sensible for organisations to apply strict RBAC policies to restrict access to the virtual machine, as well as to the guest operating system. As for backups, it may well be prudent to ensure that only authorised personnel are able to access, back up and restore those sensitive servers and resources. A further step would be to securely remove the VM’s hard disks and store them separately from the VM, so that it cannot be started, either inadvertently or maliciously, without secondary access to the underlying resources it requires.
So, what about the issuing and other servers within the infrastructure? It seems sensible to adopt the same posture: restrict access to the virtual machine itself as tightly as the RBAC permissions that govern access to the operating system, with a focus on who holds local administrator rights on each machine as well as domain-based role rights within the PKI.
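To make that "who has access" question concrete, here’s a minimal sketch (in Python, purely for illustration) of the kind of check you might run against an inventory export: compare the accounts that actually hold local administrator rights on a CA server against an approved list. The group and account names here are hypothetical, and in practice the membership data would come from your own inventory or management tooling.

```python
# Hypothetical audit sketch: flag local admin accounts that are not on
# an approved list. Membership is supplied as plain lists for illustration.

def unexpected_admins(actual_members, approved_members):
    """Return accounts holding local admin rights that are not approved."""
    approved = {m.lower() for m in approved_members}
    return sorted(m for m in actual_members if m.lower() not in approved)

if __name__ == "__main__":
    # Hypothetical names, not real accounts.
    approved = ["CORP\\PKI-Admins", "BUILTIN\\Administrators"]
    actual = ["CORP\\PKI-Admins", "BUILTIN\\Administrators", "CORP\\jbloggs"]
    print(unexpected_admins(actual, approved))  # ['CORP\\jbloggs']
```

Anything this returns is worth investigating – either the approved list is out of date, or someone has rights they shouldn’t.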
Right, that’s the physical side sorted out… but is the network the servers sit on secure? Open only the ports that are required on any security boundaries, make sure the firewalls on the operating system or perimeter networks are fit for purpose, disable unnecessary protocols on the NICs, disable unnecessary services… the list goes on!
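One way to sanity-check the "only required ports" point is a quick allow-list comparison. Here’s a rough Python sketch; the host name and the allowed port set are assumptions for illustration only, not a recommended baseline for a CA – your required ports will depend on your own environment.

```python
import socket

# Hypothetical example values, not a recommended baseline.
ALLOWED_PORTS = {135, 443}

def port_open(host, port, timeout=1.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def unexpected_open_ports(open_ports, allowed=ALLOWED_PORTS):
    """Return open ports that are not on the allow-list."""
    return sorted(set(open_ports) - set(allowed))

if __name__ == "__main__":
    # Probe a small set of well-known ports on the local host as a demo.
    candidates = [80, 135, 443, 445, 3389]
    found = [p for p in candidates if port_open("localhost", p)]
    print("unexpected:", unexpected_open_ports(found))
```

Run regularly, a simple check like this catches drift – a port opened "temporarily" during maintenance and never closed again, for example.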
The More the Merrier!
In summary, the more you do to make things secure, the better. Keeping in line with a few common best practices, and going as far as the organisation is able or willing to support, makes the infrastructure secure and effective to manage when required.
Rest of the Series
Here’s the series in full – I’ll be updating here each week as each part is released:
If you have any questions on what I’ve discussed here, or security in general, feel free to email me at email@example.com and I’ll be happy to answer any queries you have.