The Internet of Things is big. No, really big. No, even bigger than that. How big? My colleague Brad Boyer explains it better than I could (and handily defines the thing part of IoT, too). Go read it - I’ll wait.
Security is hard. No, really hard. Ok, maybe not quite as hard as that. But it is easy to get wrong. Easy enough that OWASP publishes the Top 10 Most Critical Web Application Security Risks every few years, along with how to mitigate those risks. Yet those same vulnerabilities still appear in the wild. That is why we have sites like have i been pwned?.
The S in IoT stands for security
Many of the current-day IoT devices are only recently Internet-enabled. In some cases, they’re only recently even digital. This is a radical shift in capabilities and platforms, which results in new attack vectors that we are still trying to understand. We need to explore the implications of new devices with new capabilities in new contexts.
A perfect example is the prevalence of accelerometers. They are in nearly every smartphone out there, as well as other devices people keep on their person, like pedometers. However, many people do not know they are a component in their devices, and many of those devices trust the accelerometer data too much. Who would think of an accelerometer as an attack vector? These researchers did: they came up with a series of acoustic attacks that can inject false data.
Alexa is the voice service that powers the Amazon Echo. This allows customers to use their voice to interact with devices. It is a great capability to have around the house, but its risk profile has to be carefully considered. We all know someone who enjoys pranks (there is a relevant XKCD for everything!). But I bet most customers never considered what could happen if a newscaster accidentally used the magic words on TV.
Speaking of Alexa (see what I did there?), people are now realizing that in order to react to voice commands, Alexa must listen to (but not necessarily record) everything it can hear. Some people are comfortable with that and some are not, but it is something all customers need to be aware of.
Once it detects the wake word, according to Amazon, the Echo starts streaming audio to the cloud, where it is secured until the customer permanently deletes it.
Detectives requested access to the audio data from an Echo in a murder case. What is the “reasonable expectation of privacy” when dealing with a device that is always listening? What happens if police misunderstand how the technology works?
Let us look at a different data point: your heart rate. That data can tell you a lot about someone: how active they are, and when; when they are asleep; whether they are ill or have a chronic medical condition. Pacemakers are an obvious collector of that data. Indeed, in at least one case that data was used to prove arson and insurance fraud. Now there are more and more devices that a user might not expect to record their heart rate (for example, pedometers). One couple found out they were pregnant via their Fitbit data.
Imagine what would happen if the data store were breached. Building profiles of consumers is nothing new, but now there are more ways to gather more data.
Samsung’s new smart fridge can report its contents. What is the risk profile of that data being leaked? If your fridge is empty, maybe you’re on vacation. The type of food you eat affects your health. The cost of the food you eat gives a glimpse into your finances. This information could be valuable to a malicious actor. A smart thermostat could have similar risks.
Children’s toys are now joining the IoT, which brings up new privacy concerns. Germany has already banned one such toy, and security researchers are concerned about another. When sensitive data is held by a third party, consumers need to consider the worst case. What happens when kids’ voice messages are leaked? Won’t somebody please think of the children?
IoT devices are marketed at a very broad audience, which means many consumers will not be technologically adept. That makes secure default settings critically important, yet many devices do not have them. Even worse, many do not make it easy to change those defaults even when the consumer is aware of the need.
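One way a manufacturer could avoid shipping shared defaults at all is to generate a unique credential for each device at provisioning time. A minimal sketch in Python (the function name and password length are illustrative assumptions, not any vendor's actual process):

```python
import secrets
import string

def generate_device_password(length=16):
    """Generate a random per-device password at provisioning time,
    so no two devices ship with the same factory credential."""
    alphabet = string.ascii_letters + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(length))

# Each provisioned device gets its own credential, e.g. printed on its label:
passwords = {f"device-{i}": generate_device_password() for i in range(3)}
```

Using the `secrets` module (rather than `random`) matters here: it draws from a cryptographically secure source, so the per-device passwords cannot be predicted from one another.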
Manufacturers commonly load devices with a default username and password, which is easily looked up. However, some devices take this a step further and have hardcoded credentials which the user cannot change. Typically these hardcoded credentials are not even mentioned to the user, leaving them unaware of a large hole in their security profile. One of the more recent and nefarious malware packages, Mirai, scans for IoT devices that have known credentials.
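Auditing your own devices for these published defaults is the same dictionary lookup Mirai performs in reverse. A hedged sketch (the credential list below is a small illustrative sample I chose for the example, not Mirai's actual table):

```python
# A few widely published factory defaults (illustrative sample only).
KNOWN_DEFAULTS = {
    ("admin", "admin"),
    ("root", "root"),
    ("admin", "password"),
    ("root", "12345"),
}

def uses_known_default(username, password):
    """Return True if this credential pair appears in the published-defaults list."""
    return (username, password) in KNOWN_DEFAULTS
```

A device still accepting a pair from such a list is exactly what Mirai-style scanners are looking for; changing it (when the firmware allows it) removes the device from that class of attack.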
In many cases a single manufacturer produces many devices under different brands (this is known as a white label product). The widespread nature of this business practice leads to what is known as a class break, where one smart person can come up with one clever hack that breaks an entire class of systems. In fact, that is exactly what happened when the Mirai author released their source code. That malware is not even very advanced, but it is nonetheless effective. If a vulnerability is found in one device, many other brands and models may also be affected. Discovering which devices are the same, however, is something the vendors do not make easy.
Windows users have only recently gotten used to regularly updating their computers, and Microsoft has been working on that for many years. How many people think “it’s Tuesday, time to check if my refrigerator needs any software updates”? Many IoT devices are not even designed to be updated. This means once a vulnerability is discovered, it is only a matter of time until that device is compromised. Even if the device is factory reset, without an update it will be breached again. Even for devices that do support updates, applying them is not always easy. Combine that with the fact that some of these devices have an expected lifespan of 10 or more years and you can see how big a problem this is.
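At its core, even the simplest update check is just comparing the installed firmware version against the latest one the vendor publishes. A minimal sketch, assuming dotted numeric version strings (a real device would also need signed downloads and a safe rollback path, which this deliberately omits):

```python
def parse_version(version):
    """Turn a dotted version string like '1.4.2' into a comparable tuple."""
    return tuple(int(part) for part in version.split("."))

def update_available(installed, latest):
    """Return True if the vendor's latest firmware is newer than what is installed."""
    return parse_version(latest) > parse_version(installed)
```

Comparing tuples rather than raw strings avoids the classic trap where `"1.10"` sorts before `"1.9"` lexicographically. The hard part for IoT devices is not this logic; it is that many have no update channel at all.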
I have described above what could happen in the event of a breach: sensitive data leaked, with serious consequences. But data leakage is not the only reason to care.
One easy answer is that a vulnerable IoT device is likely the weakest link in a security strategy. A compromised IoT device has nearly the same threat profile as any other compromised device on a network.
What if “we do not have anything of value on our [network/site/etc.]”? You still have your reputation to think of.
Malicious actors are now using some compromised devices to anonymize their activity. The device is turned into a proxy through which the hacking, spamming, DDoS, or other attacks are launched. This makes it harder to catch the criminals, and introduces the possibility of innocent people being caught up in the investigation.
Unfortunately, with the proliferation of vulnerable devices and malware that can compromise them, criminals are building massive botnets. They use these botnets to extort money, censor opponents, or do whatever else they can make money doing. Sometimes even an empty threat is enough. And these attacks are only growing in scale as time goes on.
Liability is murky in many of these cases, until it is tested in the courts, and there is a lot at stake. Take cars, for example. On the tame end of the vulnerability spectrum is being able to get access to sensitive data on the car and its owner (still a big deal!). On the scarier side is the risk that hackers could exert more control over a big packet of kinetic energy with squishy humans inside.
Thankfully, Bruce Schneier has a great explanation:
Think of all the CCTV cameras and DVRs used in the attack against Brian Krebs. The owners of those devices do not care. Their devices were cheap to buy, they still work, and they do not even know Brian. The sellers of those devices do not care: they are now selling newer and better models, and the original buyers only cared about price and features. Insecurity is what economists call an externality: it’s an effect of the purchasing decision that affects other people. Think of it kind of like invisible pollution.
This is similar to how credit card fraud primarily affects merchants and banks, not the end user.
Criminals sometimes find it more lucrative to keep their presence hidden and rent out the compromised device.
These are the growing pains we have seen countless times before (and will again in the future) across various industries. Devices are being connected in new and novel ways. The “arms race” between attack and defense is changing very rapidly. New techniques are being developed on both sides.
However, now the number of devices involved is much larger, with a correspondingly larger impact if things go wrong. Why is that?
In short, security is not often made a priority or given enough attention (either time or money).
Won’t the market self-correct? Not with the current economic incentives. Neither the manufacturer nor the consumer has a reason to prioritize security, because the cost of failure is externalized. Even if the consumer were security-conscious, there is too much information asymmetry between consumers and manufacturers. Consumers cannot easily find most of the information they would need to truly evaluate competing products.
Probably the most important thing is to carefully separate truth from fearmongering. “Hotel ransomed by hackers as guests locked out of rooms” makes for a sensational headline - it is sure to get a bunch of clicks. But it was not true, and it only added to the fear, uncertainty, and doubt surrounding the issue. This is how we make sure we are solving the right problems, and not protecting against unrealistic movie plot threats.
We need to change the economic incentives described above to align manufacturers and consumers so that security is at least considered. It is even better if consumers are aware of the risks the IoT introduces and deliberate about their use of these devices. However, I am not optimistic: historically, the market does not address externalized problems well.
That is where government regulation comes in. No one likes regulation for its own sake. No one wants the government involved unless it is necessary. But regulation works for pollution and other externalized problems (you’ve probably heard of the tragedy of the commons), and it could work here too.
At the same time, we need to find a way to make security easier to get right. For software and hardware development, industry-supported guidelines would go a long way toward setting a baseline expectation. In addition to the purely technical concerns, we need guidelines for design and user experience. Mika Stahlberg rightly points out that “ease-of-use, especially during set up, is critical for these kinds of products”. Making security features more accessible to the consumer would go a long way toward improving security.