- The Trolley Problem
- Driverless cars will blind us all with laser beams!
- Cars will be easy to hack
- We’ll all lose our jobs…
Turn left and mow down a group of children, or turn right and hit a tree, killing the vehicle’s occupants.
The trolley problem, originally a philosophical thought experiment, is often applied to autonomous vehicles as a warning about the complexity of the decisions they will supposedly have to make.
In a nutshell, this is a daft problem: it applies a philosophical approach to an ethical dilemma without taking into account some very important, and fairly simple, facts.
- Cars are already engineered to protect their occupants far better than they are engineered to protect pedestrians or other road users. The safest car on the road (the Volvo XC90), measured by Euro NCAP, has a 98% safety rating for occupants and a frankly appalling ~75% rating for protecting pedestrians. In fact, not one car goes above 85% for pedestrians.
- Given a complex situation, a driverless car will simply come safely to a stop. Waymo testers are well known for their ingenuity and creativity in trying to defeat their own cars’ ability to perceive their environments, and when faced with the ‘impossible to plan for’ (in one case, a crowd of people dressed as frogs hopping across the road), Waymo had programmed the vehicle to just brake gently until the route was clear.
- Nobody will ever get into a vehicle that favours someone else’s safety over their own. Credit to writer and commentator Alex Roy for that observation, but it’s true… unless you’re boarding a vehicle with the express purpose of never getting off, like the Suicide Roller Coaster. We are simply not generous enough to buy a vehicle that protects the people around us at our own expense. After all, we want to survive the journey to McDonald’s.
Of course, the nature of psychologists is to disagree, and the nature of philosophers is to question the meaning of everything. Am I right, wrong, is that a question or a lump of cheese? What does it all mean?
Wrong, wrong and wrong! Actually, wrong by a factor of 2 million, and I’ll explain why.
For this, we can fall back on technology facts and the laws of physics, which thankfully are not subject to votes, gerrymandering, or sinister algorithmic fake-news bots controlled by scary government forces.
Any device using a laser must be assigned a Class, from 1 at the bottom to 4 at the top. The lasers you find in CD players and laser copiers… and lidar units, are all Class 1. That’s a power and frequency at which there is ‘no possibility of eye damage’, i.e. there have never been any recorded cases of damage, even under extended exposure.
But that doesn’t tell us why… or indeed what could happen if another Class of laser were used.
To be effective, a lidar unit (Light Detection and Ranging) emits a tiny amount of energy in the infrared part of the electromagnetic spectrum. This energy is focused into a tight beam (i.e. turned into a laser) and then steered around a scene by a mirror, a motor, or a tiny device combining the two, called a MEMS. The energy, in the form of photons, travels out from the lidar, hits an object, and some of those photons bounce back. The unit (with its associated processing and software) then measures the timing and location of those bounces and, much like your eyes, builds a rough 3D picture.
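The ranging principle described above comes down to one line of arithmetic: the distance is the round-trip time of the light, multiplied by the speed of light, halved. Here is a minimal sketch (the function name and the 200 ns example pulse are illustrative, not from any real lidar driver):

```python
# Minimal sketch of lidar time-of-flight ranging.
C = 299_792_458  # speed of light in a vacuum, m/s


def distance_m(round_trip_s: float) -> float:
    """Distance to target: the photons travel out and back, so halve the trip."""
    return C * round_trip_s / 2


# A return pulse arriving 200 nanoseconds after emission puts the target ~30 m away.
print(round(distance_m(200e-9), 2))
```

Repeating this calculation tens of thousands of times per second, across many beam angles, is what turns individual bounces into that rough 3D picture.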
While lidar is a fairly mature technology, first theorised in the 1930s and used in the Moon landings of the 1960s, it is going through some big changes. Arguably the two most important are the move to steering the beam from a single mount (most commonly the spinning lidars seen regularly on autonomous R&D vehicles) and, happening right now, the move away from those towards ‘solid state’ lidars, where a wide field of view is covered by multiple smaller, cheaper sensors rather than a few big, complicated, expensive ones.
Either way, the maths is roughly the same, so we’ll use the example of the well-known ‘Velodyne Puck’, which is widely used and for which Velodyne has usefully published detailed specifications.
Each sensor has between 16 and 128 lasers, which rotate at approximately 10 hertz, or 10 full revolutions per second, creating about 30,000 beam points per revolution.
Each laser pulses at a wavelength of 905 nm (just below a thousandth of a millimetre) with a power of 1 or 2 mW, roughly the power needed to lift a single grain of rice.
For comparison, that’s about 1/5,000th, or 0.02%, of the power output of a standard 10-watt LED headlamp bulb on a low-beam setting.
Any single laser beam would sweep across an inadvertently glancing eye in approximately 1 millisecond, and since each individual laser is mounted at a different orientation and angle, multiple lasers cannot strike the eye at the same time.
A thousandth of a second… and a very small amount of power.
That alone gives a factor of a million to one in favour of “it’s going to be fine”.
Going up a notch
Let’s say we swap all this out for a Class II laser: up to 1 mW (a thousandth of a watt), and using visible light. Even then, damage would only start to occur after 1,000 seconds of continuous exposure to a single spot within the eye. That means sitting very still (still enough for a Victorian photograph) for nearly 17 minutes!
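The margins quoted so far can be checked with trivial arithmetic. A sketch, using the article’s own illustrative figures (1 ms sweep, 1,000 s damage threshold, 2 mW laser vs a 10 W headlamp):

```python
# Back-of-envelope check of the exposure and power margins.
sweep_time_s = 1e-3         # a beam crosses a glancing eye in ~1 millisecond
damage_threshold_s = 1000   # visible-light damage begins after ~1,000 s of continuous exposure

# Time margin: how many times longer you'd need to stare than the beam actually lingers.
time_margin = damage_threshold_s / sweep_time_s
print(f"{time_margin:,.0f}")  # the 'factor of 1 million'

# Power comparison: a 2 mW lidar laser against a 10 W LED headlamp bulb.
power_ratio = 0.002 / 10
print(power_ratio)  # 1/5,000th, i.e. 0.02%
```

Both of the article’s headline ratios drop straight out of the division.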
Ramp it up another notch to Class IIIa lasers, at up to 5 mW of power, and finally we start to see some medical evidence of damage to eyes exposed to certain wavelengths for several continuous seconds.
Up another notch to Class IIIb and yes, damage is probably going to occur, because that covers 5 mW up to 500 mW (half a watt) of power; Class IV is where we start burning holes through things and shining beams at other planets.
The length of a wave is directly proportional to how much you want the person to leave
From your school years, you may recall that wavelength and frequency are inversely related: the shorter the wavelength, the higher the frequency, and vice versa.
What most people do not know, however, is that the energy carried by each photon changes too.
That means that visible light, for example, has more energy in its photons than the longer-wavelength, lower-frequency light of the infrared band.
Let’s take the example above of the Velodyne Pucks, which operate at a wavelength of 905 nanometres, compared with visible light, which ranges from 390 to 770 nanometres (nm), or roughly one third to three quarters of a thousandth of a millimetre. Even smaller!
Because photon energy is inversely proportional to wavelength, we can simply divide one wavelength by the other to get the ratio by which the energy changes: halve the wavelength and you double the energy per photon.
905 divided by the range of visible light, 390 to 770, gives us another multiplier… from about 2.3 down to about 1.2.
That means that on top of our time factor of 1 million, we have another factor: the difference in wavelength means each photon carries a different amount of energy. At visible wavelengths, there’s roughly twice as much energy travelling down the laser beam as is found in this infrared unit. Since you’re still persevering, let’s round it off at 2.
Finally, we can multiply our energy factor by the time factor… and we get to 2 million!
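The photon-energy claim can be verified directly from the Planck relation, E = hc/λ. A short sketch (the function name is mine; the wavelengths are the article’s):

```python
# Photon energy is inversely proportional to wavelength: E = h * c / wavelength.
H = 6.626e-34  # Planck constant, J*s
C = 2.998e8    # speed of light, m/s


def photon_energy_j(wavelength_nm: float) -> float:
    """Energy of a single photon at the given wavelength, in joules."""
    return H * C / (wavelength_nm * 1e-9)


lidar = photon_energy_j(905)   # the Puck's infrared wavelength
violet = photon_energy_j(390)  # short end of visible light
red = photon_energy_j(770)     # long end of visible light

# The energy ratio is just the inverted wavelength ratio: 905/390 and 905/770.
print(round(violet / lidar, 1), round(red / lidar, 1))  # ~2.3 and ~1.2
```

Rounding that 1.2 to 2.3 range off at 2, and multiplying by the time factor of a million, gives the 2 million headline figure.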
You’d need to be within a few metres of 2 million lidar units, and even at the rate of growth we’ve seen in the lidar market recently, that would be a challenge. And before we conclude: the human body is self-healing, so even if this weird miracle did occur, the eye would fully recover within a few days.
In summary: no, we’re not all about to go blind from self-driving cars.
This fear is certainly not helped by apocalyptic Hollywood scenes of driverless cars being taken over to hinder our heroes’ progress. Yet the much-publicised ‘Jeep Hack’, an attack on a Jeep Cherokee in 2015 which took control of some functions, actually took six months to plan and implement, time which included a significant amount of direct access to the vehicle in question.
As was succinctly pointed out to me by a senior engineering leader, ‘Engineers don’t make things to fail’. They are manufacturers applying the scientific method: if there is evidence that the existing approach does not work, the approach gets modified.
It might be distasteful to consider the following, but I ask you to spare a moment as we venture into the dark soul of a nefarious criminal, assassin or scruffy-haired ne’er-do-well.
“Yes, Mr Big, you want Bob Jones eliminated, I understand. Well, of all the methods available, I would definitely choose the most fiendishly complex and time-consuming, and definitely not use a blunt object, a readily available firearm or knife, or just set a fire in his basement.”
Evidently many years watching Criminal Minds have served me well.
Anyway, I digress… the problem is that, since these are technology systems, the payoff needs to be technologically motivated. Sure, there are weaknesses in any system designed to communicate data from one place to another, and the many companies making the hardware and software in a sensor-laden vehicle are paying increasing attention to how those processes could be hijacked. But add multi-sensor validation into the mix, and the problem moves from edge case to almost impossible.
Let’s say you can fool a vehicle’s vision system into misreading a road sign (see the genuine example of an algorithm classifying a cat as guacamole, using an image modified specifically to defeat the computer vision system).
The same driverless system also has other data sources: lidar and radar to detect traffic density and enact safe operating distances, mapping data including junction configurations, and in the future, wireless connectivity to traffic management systems.
Over-the-air updates, now a regular feature in new cars from an ever-increasing number of manufacturers, coupled with the ability to alter how the processors around the vehicle actually behave (so-called Field-Programmable Gate Arrays, or FPGAs), mean that any alterations to the system can be fixed easily, any software amendments authorised against a community of other vehicles, and any unauthorised attack detected.
That means that, much like your smartphone or friendly neighbourhood laptop or tablet, security updates will be downloaded without your knowledge and validated against remote, encrypted checking files.
It’s not a great stretch of the imagination to consider that blockchain-like technology could be used to validate updates against the server at your local dealer or even the other cars in your street.
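The core of such validation is simple: recompute a cryptographic digest of the received update locally and compare it with a trusted value published elsewhere. A hypothetical sketch (the function name, the firmware payload, and the distribution details are mine, purely for illustration):

```python
# Hypothetical sketch: validating a downloaded update against a trusted checksum,
# the principle that lets a vehicle reject a tampered software image.
import hashlib


def update_is_authentic(payload: bytes, expected_sha256: str) -> bool:
    """Recompute the digest locally and compare it with the trusted value."""
    return hashlib.sha256(payload).hexdigest() == expected_sha256


firmware = b"steering-module v4.2"
trusted = hashlib.sha256(firmware).hexdigest()  # published out-of-band by the manufacturer

print(update_is_authentic(firmware, trusted))           # accepted
print(update_is_authentic(b"tampered image", trusted))  # rejected: digest mismatch
```

In a blockchain-like scheme, those trusted digests would themselves be cross-checked against copies held by the dealer’s server or neighbouring vehicles, so no single compromised source could slip a bad image through.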
Hacking cars is so fantastically complex, so utterly pointless compared with simpler means of control or attack, and so easily defeated, that it simply isn’t a worthwhile exercise for your unfriendly neighbourhood gangster or your international megalomaniac.
Do you work as a driver?
Autonomous cars will arrive with a whimper. As I have written before, physically manufacturing and shipping a vehicle to a buyer is a complex task; multiply that task across the population of the world and you have a revolution that will take 20 years to get halfway and perhaps 30 to 40 years to conclude.
In the context of technological progress, that’s not particularly fast, which thankfully increases the visibility of this issue across the education and skills supply chain, and gives those professionally employed as vehicle operators plenty of warning of the imminent change as vehicles ‘go driverless’.
Few of the jobs I have enjoyed in the past 15 years existed when I was at school, largely because they have revolved around technologies invented after I left. Does that mean I won’t be able to survive the technologies that have not yet been invented? No, of course not; it just means I shall adapt.
Despite some indications to the contrary, notably in politics, humans are actually quite good at learning and adapting to their environment, i.e. not just surviving, but thriving – despite challenges.
Higher-skilled drivers, and those who offer additional services, will keep their place for a few more years even after driverless vehicles arrive: people with luggage or special access requirements may still need help getting out of vehicles, at least until costs drop to the point where technology can provide a better service for less.
That, you see, becomes the pivot point: not just the direct cost of technology development, but the indirect cost of providing a robust interface to the real world for users of the system, a driverless Uber that can automatically deploy a ramp or help me stand up, having left my luggage at the entrance to the hotel lobby.
How many grocery delivery drivers will lose their jobs if the current convenience of ‘mouse click to kitchen delivery’ turns into ‘mouse click to tedious slog from the roadside driverless van’?
Consumers would be worse off, so drivers are safe – for now.
The technologies which enable autonomy are all advancing apace, between shrinking processors, sharper sensors and smarter software, but the shape and nature of economics will not change.
Customer service and convenience will dominate the evolution, reducing cost and increasing business profitability will pay for it, and neither can be compromised for the sake of replacing humans with robots.
To that end, remember that technology does change, but human nature does not.
It is our habit to live through unwanted change via the five stages of grief (denial, anger, bargaining, depression and acceptance) when it’s easier just to look at the cold hard facts.
Yes, things will change; yes, life will be different; yes, jobs will be lost… but things won’t change all that suddenly, jobs will change rather than simply disappear, society will adapt, and overall life will go on just as it always has, surviving and thriving despite the seismic sociological shifts that new technologies trigger.
Want to find out more about the technology in self driving cars? Come along to our Self Driving Track Days workshop in Milton Keynes on 10 July >>
Please consume responsibly! This article is meant as a call to arms and lightweight introduction to lots of different ideas. If you want to learn more, come to an event and talk to the real experts – but all the same, we hope you enjoyed it and remember kids, check your facts! – Alex Lawrence-Berkeley