Once a concept reserved for the realm of science fiction, autonomous driving is fast becoming a reality. Over the last decade, major players in the automotive industry have been quietly plotting their own strategies for self-driving cars, and talk of a new era of highly automated driving has dominated the headlines.

A quick look at predictions for the arrival of autonomous vehicles on our roads reveals promise, with recent forecasts estimating that the autonomous vehicle (AV) industry will be worth a staggering $7 trillion by 2050. Tesla's chief, Elon Musk, has optimistically claimed that self-driving technology will likely be safer than a human driver as early as 2020.

However, a future in which self-driving is as common a feature as cruise control cannot come to pass until autonomous vehicles are regulated, and regulating such game-changing technology is, understandably, proving a challenge. After all, the question of who is at fault in an accident involving a self-driving car has yet to be answered. Without regulation, there are no real guidelines for how such cases might be decided, and, because the technology is so new, there are few existing cases to draw upon.

To build a self-driving car that is both safe and useful to society, carmakers lean on legislation to set the parameters within which their systems should operate. Every time an autonomous vehicle makes a decision, such as how fast to drive or when to change lanes, it has to weigh safety against usefulness. Driving very slowly, for instance, may be deemed highly safe, but society doesn't want vehicles on the road that crawl along at such a pace. In that case, the vehicle must decide how much safety it can trade away to remain useful.
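The safety-versus-usefulness trade-off described above can be pictured as an optimisation over candidate actions. The sketch below is purely illustrative: the scoring functions, weights, and speed candidates are invented assumptions, not any carmaker's actual model.

```python
# Illustrative sketch of a safety/usefulness trade-off when choosing a
# cruising speed. All functions and weights here are hypothetical.

def safety_score(speed_kmh: float, limit_kmh: float) -> float:
    """Assumed: safety degrades quadratically as speed nears the limit."""
    return max(0.0, 1.0 - (speed_kmh / limit_kmh) ** 2)

def usefulness_score(speed_kmh: float, limit_kmh: float) -> float:
    """Assumed: usefulness rises with speed; a crawling car helps no one."""
    return min(1.0, speed_kmh / limit_kmh)

def choose_speed(limit_kmh: float, safety_weight: float = 0.5) -> int:
    """Pick the candidate speed with the best weighted trade-off score."""
    candidates = range(10, int(limit_kmh) + 1, 10)
    return max(
        candidates,
        key=lambda s: safety_weight * safety_score(s, limit_kmh)
        + (1.0 - safety_weight) * usefulness_score(s, limit_kmh),
    )

# With these assumed scores and equal weighting, the best compromise on
# a 100 km/h road is neither crawling nor driving at the limit.
print(choose_speed(100.0))  # → 50
```

Note how neither extreme wins: a speed of 10 km/h is very safe but nearly useless, while 100 km/h is useful but scores zero on safety, so the weighted optimum lands in between. Regulation would, in effect, fix the weights and scoring rules that are arbitrary here.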

It may seem like driverless cars are just around the corner, but the truth is that an autonomous system that makes reasonable decisions can only be built once what constitutes 'reasonable' has been defined and formalised by regulators.

Only once this has been achieved can carmakers create the algorithms that set the boundaries for behaviour and program their vehicles to operate only within those boundaries. This would also establish a legal framework for evaluating fault when an autonomous vehicle is involved in an accident: if the decision-making system did not stay within the boundaries defined by regulation, then the company that programmed the system would be liable.
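The liability logic above amounts to auditing a decision log against regulator-defined bounds. The following is a minimal sketch under stated assumptions: the rule names, limits, and log format are all invented for illustration, since no such regulation yet exists.

```python
# Hypothetical sketch: auditing an autonomous vehicle's decision log
# against regulator-set boundaries after an incident. The rules and
# limits below are invented placeholders, not real regulation.

# Each rule maps to an assumed (lower bound, upper bound) envelope.
REGULATED_BOUNDS = {
    "speed_kmh": (0.0, 120.0),               # assumed legal speed envelope
    "following_gap_s": (2.0, float("inf")),  # assumed minimum time gap
}

def violations(decision_log: list[dict]) -> list[tuple[int, str]]:
    """Return (log index, rule name) pairs where a decision left the bounds.

    An empty result would suggest the system operated within regulation;
    any violation would point liability at the system's programmer.
    """
    found = []
    for i, decision in enumerate(decision_log):
        for rule, (lo, hi) in REGULATED_BOUNDS.items():
            value = decision.get(rule)
            if value is not None and not (lo <= value <= hi):
                found.append((i, rule))
    return found

# Example audit: the second logged decision breaks both assumed rules.
log = [
    {"speed_kmh": 110.0, "following_gap_s": 2.5},
    {"speed_kmh": 130.0, "following_gap_s": 1.0},
]
print(violations(log))  # → [(1, 'speed_kmh'), (1, 'following_gap_s')]
```

The design point is that the boundaries live outside the decision-making code: regulators would define the envelope, carmakers would program within it, and an audit like this one would make fault determinable after the fact.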
