Who is hacking your pacemaker? Or your brakes?
Security and safety are important for all embedded systems, not just those that deal with money or national security.
This is why you’ll be seeing more stories like that of the researchers who hacked an implantable cardioverter-defibrillator (ICD) and demonstrated that the wearer could be killed remotely (although only from a short distance).
What’s really behind this is that embedded systems are changing from encapsulated closed devices to networked (and frequently open) systems. And as soon as a system has any form of wireless connectivity, it is subject to new classes of attacks, like it or not. You’ll see the same thing happening with cars, aeroplanes, home entertainment systems, you name it. As soon as they get networked, crackers will get in there.
The problem is that most of these devices aren’t designed with security in mind. In many cases a networked system starts off as the new model of a non-networked one, and, as you can imagine, the internal software architecture doesn’t change much. After all, it’s just a maintenance interface (which over time grows into a convenience feature, and so on). This is apparently what happened with the ICD. And the designers made it easy for the black hats by using a completely insecure communication protocol.
The reality is that most embedded systems these days hold assets that must be protected (just think about it: which of the devices you own does not contain data that would help an identity thief?) or can cause damage if they misbehave. This is just another way of saying what I said above: security and safety are relevant for all embedded systems.
And security and safety aren’t something to add later. If they aren’t designed into the system, they are virtually impossible to achieve. This must be recognised in building embedded systems: they need to be designed for security and safety.
How do you do this? By a defensive structure, where faults in one component are prevented from propagating into other components, and by clearly identifying all security assets and putting in place the means to protect them. First of all, this means applying the principle of least authority (POLA): a component that has no need to access certain data should not be able to access it. The components on which security or safety depends must be minimised and protected: the system needs a minimal trusted computing base (TCB). There must be a security policy that controls where information can and cannot flow (and which cannot be circumvented). And the TCB must actually be trustworthy.
This can only be done by structuring a system into small and mutually protected components, which communicate only via well-defined channels, subject to the system’s security policy. And that policy must be implemented by a minimal TCB.
The way to achieve this is by basing your system on a secure high-performance microkernel. Secure embedded systems need microkernels. And there’s no better one than OKL4.