The internet of things (IoT) is an umbrella term for all kinds of smart and connected technologies. It is most often associated with smart thermostats and connected fridges in people’s homes, but it also covers many areas of disruptive innovation across industries and the public sector. We are now seeing real-world deployments and learning practical lessons from those projects. The legal, commercial, and liability questions they give rise to are complex, and there is a lack of settled market practice, but there are pointers from the various use cases around smart cities; factory and supply chain automation and control systems; ehealth, telehealth, and connected medical devices; autonomous vehicles; and the connected home. There are some common threads through IoT projects that can be considered, regardless of the particular industry or application involved.
Firstly, most IoT systems involve sensors of some kind gathering data about an environment. These could be cameras recording the movement of people through a public space, temperature gauges, or the microphone on an Alexa speaker. The questions that arise in this part of the system often concern privacy – compliance with data protection rules, but also the parallel ethical questions about what is appropriate in terms of surveillance. However, there is often a need to embed small hardware components in environments, and this carries both a rollout and a maintenance cost. The sensors need to last a long time – both in terms of ruggedization and battery life. And they are only useful when they are connected, so the means and frequency of connection are important considerations.
Secondly, data inputs are processed either to provide a recommendation or data visualization to a human operator or, increasingly, to generate an automated instruction using artificial intelligence or machine learning. This is creating incredibly rich data pools that are valuable but for which there are limited regulatory frameworks that recognize ownership.
This means “ownership” of data becomes a question of agreeing contractual controls and managing imbalances in bargaining power. This is addressed somewhat in the consumer environment by the General Data Protection Regulation, which applies from May 2018, but a lot of valuable IoT data is not personal data, and operators of IoT systems need to be careful about who can re-use their data and for what purposes.
This processing usually occurs “in the cloud”, as data is aggregated from various remote sensors and compute power and storage are required. Connectivity is again critical. Cloud and connectivity services have made this possible by bringing down prices, but their contracts involve limited warranties, extensive exclusions of liability, and a lack of legal or financial remedies in the event of performance or availability issues. There are developments that will move compute power back to the “edge” of the network – particularly where latency is a big issue (such as driverless cars) – but these will still likely be shared platforms (or be very expensive). Customers need robust contingency plans that provide short-term workarounds and longer-term plans to switch suppliers if cloud or connectivity is frequently unavailable or poorly performing.
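The contingency-planning point can be made concrete with a short sketch. The code below is illustrative only – all of the names (`read_temperature`, `edge_cache`, and so on) are invented for this example, not drawn from any particular platform. It shows a client that prefers the cloud service but falls back to a value cached locally at the edge when the connection fails: the kind of short-term workaround that cloud contracts with limited warranties will rarely guarantee for you.

```python
# Hypothetical sketch: degrade gracefully from cloud to edge cache
# during a connectivity or availability failure.

from typing import Callable, Optional


def read_temperature(
    cloud_fetch: Callable[[], float],
    edge_cache: dict,
    sensor_id: str,
) -> Optional[float]:
    """Return the latest reading, preferring the cloud service.

    Falls back to the last value cached at the edge if the cloud
    service is unavailable; returns None if neither source has data.
    """
    try:
        value = cloud_fetch()
        edge_cache[sensor_id] = value  # refresh the local fallback
        return value
    except ConnectionError:
        # Short-term workaround: serve the (possibly stale) edge value.
        return edge_cache.get(sensor_id)


# Simulated outage: the cloud call raises, so the cached value is used.
def failing_cloud() -> float:
    raise ConnectionError("cloud endpoint unreachable")


cache = {"sensor-1": 21.5}
print(read_temperature(failing_cloud, cache, "sensor-1"))  # → 21.5
```

A real deployment would also log the failure and track how stale the fallback value is, since serving old data indefinitely carries its own liability risk.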
Processing and decision-making by AI systems raises further legal and ethical questions around who is responsible for incorrect decisions or a failure to act. Is the bar higher, in terms of accepted failure rates, than it would be for a human? What does this mean for businesses adopting these technologies? What controls and oversights need to be in place over AI systems in the particular industry in which you operate? Our governing authorities, both nationally and internationally, are starting to wrestle with these questions whilst the market steams ahead in deploying such technology.
Finally, most IoT systems are not just about recording data but result in a real-world effect – a system turning on or off, a vehicle turning left or right, a delivery route being sent to a driver. Some of these effects are still achieved through a human instruction from a smartphone, tablet, or remote control centre. Others are fully automated. It is in these effects that IoT presents the greatest physical risk – an incorrect instruction or a failure to receive an instruction could lead to physical damage to property, injury to people, or significant additional business costs. This gives rise to extensive discussions around liability through both the allocation of contractual risk and the role of insurance. Furthermore, these are by their nature not static systems – who is responsible for providing data or software updates, and what happens if a user fails to install an update? Should updates be mandated so an autonomous car cannot be activated before its software is up-to-date?
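One way to make the last question concrete is a version gate at activation time. The sketch below is purely illustrative – the function name and the convention of comparing (major, minor, patch) version tuples are assumptions for this example, not a description of any actual vehicle system. It shows the simple check a mandated-update policy implies: the vehicle refuses to activate until its installed software meets the minimum mandated version.

```python
# Hypothetical sketch: gate activation on a minimum software version.
# Versions are (major, minor, patch) tuples, which Python compares
# element by element (lexicographically).

def can_activate(installed_version: tuple, mandated_minimum: tuple) -> bool:
    """Return True only if the installed software meets the mandated minimum."""
    return installed_version >= mandated_minimum


print(can_activate((2, 1, 0), (2, 0, 5)))  # → True: up to date, may activate
print(can_activate((1, 9, 3), (2, 0, 5)))  # → False: must update first
```

The check itself is trivial; the hard questions are contractual – who sets the mandated minimum, who is liable if a mandatory update itself introduces a fault, and what happens to a user stranded by a refused activation.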
There are not necessarily straightforward answers to the questions posed by IoT projects, and each project will also have its own unique challenges and industry requirements. However, there is increasingly a framework or approach for working through these questions based on learning from other projects (not necessarily in your own field) as more industries are impacted by the convergence of connectivity in all sorts of areas. This learning will continue to grow, and anyone considering an IoT project should be careful to make use of it.