Goal! Lessons learnt from robots playing football

Sander van Dijk, Head of Research at Parkopedia. Delivering a workshop on 10 July on “Getting your Deep Neural Network into a car”

The next in our series of interviews with workshop leaders at Self Driving Track days introduces us to Sander van Dijk. Sander is Head of Research at Parkopedia, and will be delivering a workshop on 10 July on “Getting your Deep Neural Network into a car”

Can you tell us a bit more about your work as Head of Research at Parkopedia?

At Parkopedia our goal is to improve the world by delivering innovative parking solutions. This means capturing and providing any information that is useful to a driver looking for a place to park: locations, both off and on-street, opening hours, restrictions, prices, etc., but also, very importantly, the probability of you actually finding a space in any of those locations. The research team at Parkopedia develops advanced machine learning models to predict this probability accurately worldwide, using a wide range and large volume of input data. Furthermore, we build deep learning-based computer vision methods and other recognition techniques to help automate and ramp up our growth in coverage. Besides that I am part of the team at Parkopedia that works to enable autonomous valet parking.

You have also worked on making humanoid robots play football and are impressively a three-time vice champion in the RoboCup world cup. Can you tell us more about how you got into this? Why has this been such a successful pastime for you? What can others do if they want to get involved?

While I was studying Artificial Intelligence at the University of Groningen in The Netherlands, a friend and fellow student mentioned some robotics competition coming up in Bremen, Germany, just over the border, and that we should enter as a team. This was the RoboCup world championship of 2006, and we entered the 3D simulation league, where robots play football autonomously in a physical simulation. We ended up being completely trounced, but we loved the experience and the mix of a friendly, scientific, and competitive community. The next year we participated again, in Atlanta, Georgia, but this time we came more prepared and managed to claim second prize. I have been competing since; I came to the UK in 2009 to start my PhD at the University of Hertfordshire, after meeting my then future supervisor at RoboCup, and became vice world champion again in the simulation league with my new team there, the UH Bold Hearts. A few years later we decided to turn it up a notch, and put together a team of real physical robots to join the Humanoid football league, and in 2014 took the second prize trophy in that league at the world cup in Brazil.

The main aim of RoboCup is to tackle the very hard problem of having robots and AI operate successfully and reliably in a dynamic world, not just on a discrete and structured chess or Go board. This will not only let robots play football, but will help enable robots and AI in daily life, such as in care, rescue, and autonomous driving. I believe the reason for our success in RoboCup, and for that of the other amazing teams that kept us from first place, is robustness. The robots always need to be able to do something, even in the case of failure or unexpected situations, and for the full length of two 10-minute halves. One guideline we follow for this is to only introduce new methods that degrade gracefully: if a fancy new team strategy fails because of broken communication, each robot should still be able to play on its own.

Anybody who would like to get involved in RoboCup can have a look on their website, www.robocup.org, or reach out to our team the Bold Hearts on any of our social media accounts.

Did you learn anything in this hobby that you have applied to your work at Parkopedia?

In RoboCup you often find yourself having to create, update or fix a complex intelligent system under pressure: a lot of magic happens in the 5-minute breaks between halves. In a dynamic company like Parkopedia it can sometimes feel similar, when combining something unconstrained like research with tight business deadlines and expectations. The focus on robustness, and on incremental, ‘agile’ research, helps to keep things under control while making quick progress. The lessons learned from RoboCup will especially apply to our project to create an autonomously parking vehicle. The benefits flow the other way as well: based on our experience at Parkopedia, we have introduced industry-quality processes to our RoboCup team, which is mainly made up of students.

Deep learning is all the hype now, not least in the world of self-driving cars. Do you think this hype will continue? What do we need to watch out for?

Deep learning is here to stay for a while. It has revolutionised a range of research fields, cutting down the time frames in which even experts in those fields thought such achievements were possible. There are still plenty of fields and problems open to applications of deep learning, and the amount of research output on deep learning is massive. What is very special about this research is its open availability: because the field moves so fast, much of the research is published openly on platforms such as arXiv.org. This makes it possible for everybody to access and contribute to it, not just the few who are at an institution that can afford access through overpriced paywalls.

This is amazingly powerful, but it also circumvents the checks of traditional science: although the system of peer review is not without faults, without it, it is now very easy to put out works that look scientific, but may be plagiarised, impossible to reproduce, or just simply bad research. Such works can set false expectations and add negatively to the hype.

Besides this there are still questions around deep learning that haven’t fully been answered: why does it really work so well (it is not uncommon that the original intuitive explanations of some of the smartest authors proved to be wrong later on)? Can we make useful claims about why a deep network makes a decision, especially if something goes wrong? Can we get it to learn useful things from just a small sample of data, like humans do?

In your Self Driving Track Days workshop you will look at the different shapes and forms of deep neural networks out there that you may want to use in a self-driving car. What is the biggest consideration in choosing the right network design?

The most important consideration, as with any machine learning problem, is to choose a model that fits your problem. This involves seemingly simple decisions like choosing between a classification and a regression network. But a counting problem, for instance, can be modelled as either, so you have to think about which is more suitable for your use case. An important consideration in that sense is your cost function: what output is good, and what output is bad for you? If counts close to the actual value can be good enough, you may choose regression. If any count except the actual value is equally bad, you may choose classification. Besides that, it is useful to think about the format of the output too: do you really need the pixel-by-pixel mask of a Mask R-CNN, or are bounding boxes sufficient? Especially with limited hardware and real-time constraints, it is useful to keep things as simple as possible, and maybe even to reframe your problem so that a much cheaper model can run instead.
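To make that trade-off concrete, here is a toy sketch (our own illustration, not code from the interview) of how the two cost functions treat the same counting problem differently:

```python
import math

# Toy counting problem: "how many cars are in this image?"
true_count = 7

def regression_cost(predicted):
    # Squared error: a count of 6 or 8 is only mildly penalised,
    # so "close enough" answers are treated as nearly correct.
    return (predicted - true_count) ** 2

def classification_cost(probs):
    # Cross-entropy over count classes 0..10: probability mass on a
    # wrong count is penalised, however "close" that count is.
    return -math.log(probs[true_count])

# Regression: off by one costs little, off by five costs a lot.
print(regression_cost(8))   # 1
print(regression_cost(12))  # 25

# Classification: confidently predicting "8" is as wrong as "12".
probs = [0.01] * 11
probs[8] = 0.9
print(round(classification_cost(probs), 2))  # 4.61
```

If near-misses are acceptable in your use case, the regression cost reflects that; if only the exact count is useful, the classification framing penalises every miss equally hard.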

What are you most looking forward to about leading this workshop?

I am looking forward to meeting all the people interested in the exciting area of self-driving cars, and deep learning specifically, and to have interesting discussions about the recent developments, opportunities and limitations. I hope the participants will come away with some useful knowledge and ideas for applying these techniques in practice, including myself as there is always more to learn in this fast paced field!

If you want to hear more from Sander, he will be joining Self Driving Track Days to deliver a workshop on “Getting your Deep Neural Network into a car” – View the full agenda here >>

Meet Meena, Functional Safety Consultant at Intel

Meena Rajamani, Functional Safety Consultant at Intel Corporation Ltd, UK. Delivering workshop on “Functional safety considerations for autonomous vehicles.”

We recently caught up with Meena Rajamani, Functional Safety Consultant at Intel Corporation Ltd, UK who is delivering a workshop on “Functional safety considerations for autonomous vehicles” at Self Driving Track Days in Milton Keynes on 10 July 2018.

You have over 17 years’ experience in the automotive industry, in various parts of the world. During this time, have you noticed any differences in geographical approaches to automotive developments?

Each OEM reflects the region’s culture in its strategy, especially in its R&D investments and new market entry strategy. Nowadays, with increasing acquisitions across the globe and the autonomous driving technology revolution, the geographical differences are narrowing.

We run a lot of initiatives that are aimed at supporting Women in Engineering, what tips would you give to a woman starting out in engineering today?

In recent times, the engineering world has become more aware of incorporating diversity into its workforce. Therefore, there are fewer barriers and stereotypes to overcome compared to 20 years ago. Start your career confidently, without any inhibitions; do not curtail your ambitions and, above all, don’t be afraid to showcase your abilities! In many companies, like Intel, there are a number of initiatives targeting diversity and inclusion – make use of them.

You have worked in embedded control applications such as infotainment, powertrain, chassis control, body and ADAS in technical, management and process areas, what is your current focus area at Intel?

At Intel, we have a diverse portfolio of products for different markets. Intel and Mobileye design autonomous driving solutions with a focus on safety and scalability.

We are leading the industry in camera-centric sensing, crowd-sourced mapping, mathematical models for safety, and compute leadership. General purpose processors such as Intel’s Core and Xeon products as well as custom accelerators like Intel’s Neural Network Processor, FPGAs, or Movidius VPU enable AI solutions across the computing spectrum. My current focus area is FPGAs and also platform solutions with CPU and FPGAs for all types of workload including AI.

Could you share some details on your work on microcontroller and FPGA solutions for use in safety applications? Which applications are currently the most interesting?

While the safety considerations of using MCUs and FPGAs in a conventional car – in infotainment, powertrain and security solutions – are as interesting as always, the challenges of safety in an autonomous car are the most intriguing part of my work. Designing and implementing safety as part of the architecture definition, and the varied use-case considerations, including electric cars, pose some interesting challenges at work.

What are you looking forward to and hoping to get out of leading the functional safety workshop at Self Driving Track Days?

I am looking forward to meeting people from the automotive industry and sharing knowledge on functional safety between different players in the automotive value chain. I expect it also to be an interactive platform to discuss some hitherto unanswered questions related to fully autonomous, artificially intelligent future car architectures.

Join Meena on 10 July 2018 at the Daytona Karting venue in Milton Keynes. Book your tickets here >>

Technical Workshop agenda announced for 2018

Six exciting half-day workshops from industry and academia – Cranfield University, WMG, Parkopedia and Horiba Mira present deep dives into important subjects, alongside the industry partners on our popular Introduction to self-driving technologies workshop, which will feature segments from sensor experts, software and algorithm developers, and other suppliers to the autonomous vehicle technology ecosystem.

The one-day training event and exhibition takes place on July 10 at Daytona Milton Keynes, and is the perfect event for technicians, engineers, new entrants or specialists in autonomous vehicle technology fields, with training delivered by the professionals working on autonomous vehicle projects right now, from commercial research and development to scientific endeavour.

View Workshop Programme

Technical training workshops

The in-depth workshops, delivered by leading researchers and scientists from across academia and industry, include:

  • Introduction to self-driving technologies workshop – Led by our industry partners
  • From Robotics to Computer Vision: Self-Localisation in Autonomous Vehicles – Led by Dr Lounis Chermak, Lecturer in Imaging, CV and Autonomous Systems, Cranfield University
  • Getting your Deep Neural Network into a car – Led by Dr Sander van Dijk, Head of Research, Parkopedia
  • Understanding Automotive Cybersecurity – Led by Dr Madeline Cheah, Senior Cybersecurity Analyst, Horiba Mira
  • Sensor hardware, sensor fusion and relevant test methods for use in driverless vehicles – Led by Dr Valentina Donzella, Smart Connected and Autonomous Vehicles MSc leader, Intelligent Vehicle Group, WMG
  • Functional Safety Considerations for Autonomous Vehicles – Led by Meenakshi Rajamani, Functional Safety Consultant, Intel

The Introduction to self-driving technologies workshop, first run in 2016, will feature up to eight companies presenting not just their own technologies, but also explaining their place in the ecosystem, challenges and applications, as well as expected industry developments in their sector.

Presentations from our event partners and sponsors will give attendees a complete overview of what goes into making a self-driving car – Self Driving Track Days is the perfect complement to a theoretical or online training programme, bringing the virtual into reality.

View Workshop Programme

Autonomous technology demos

Along with a mini-exhibition, the venue also features a demo area where attendees will get a chance to see autonomous vehicles up close, talk to the people that work on them, and even take a demonstration drive.

“It’s a great event, with a fantastic, positive atmosphere,” comments Conference Director Hayley Marsden. “I’m really thrilled with the level of interest we’ve already picked up and am delighted to bring this event back to the home of connected and autonomous vehicles, Milton Keynes, so close to the UK’s industrial heart. The agenda is strong and relevant, and it’s satisfying to know we’ll have a motivated technical attendance from the UK and beyond.”

Fully catered, the event will continue into the evening with an informal business networking event, which will include a BBQ dinner and Go-Karting (charged separately).

Book your place – Late Fee begins 29 June


Cybersecurity is a mind-set

Madeline Cheah, Senior Cybersecurity Analyst, HORIBA MIRA, is delivering a workshop on “Understanding Automotive Cybersecurity”

Delivering an in-depth workshop at Self Driving Track Days in July, HORIBA MIRA have taken time out to write an article for us on cybersecurity. Madeline Cheah, Senior Cybersecurity Analyst, will be delivering “Understanding Automotive Cybersecurity”


Cybersecurity is really about the absence of behaviour – that is to say – undesirable behaviour.  It is compounded by the fact that it is hard to test for absence, and many of the vulnerabilities – whether that be design choices or implementation errors – are unintended.  How would you test for an absence of unintended behaviour?

There is no such thing as perfect security.  If we use the simple example of car theft – a criminal who really wanted to take a specific car would just pick it up and put it on a truck.  Instead, the aim is to make a system infeasible to attack, in the hope that attackers will go elsewhere.  And if every single system was infeasible to attack, then, in an ideal world, all would be protected.

However, this is not an ideal world, and what we have instead is a footrace between attacker and defender.  This is exacerbated by the attacker-defender imbalance, whereby the attacker only has to find one vulnerability to exploit, but a defender has to protect as much of the system as possible.  There is always a balance to be struck between what’s usable, what’s cost-effective, what’s reactive and what can be proactive.

Automotive Cybersecurity

There are several drivers in the automotive industry that have led to challenging issues in the cybersecurity arena.

Firstly, there is increased connectivity, both inside and outside the vehicle.  Secondly, there is increased complexity, with the advent of many systems that have been introduced for reasons of safety, security or marketability. Finally, there is convergence of technologies, between those that were designed for vehicles (for example advanced driver assistance systems), and those that were integrated into vehicles (wireless Internet connectivity).

There are no “strong” or “weak” parts of a vehicle; even what might seem like a trivial attack can be chained with other attacks to make the end impact potentially devastating.  Furthermore, with every feature that makes something safer or more convenient, there is potentially an equal amount of convenience for an attacker if sufficient defences are not in place.

Instead we talk about hardening the system, where we close as many security holes as feasibly possible.  There are some fundamental principles we can follow as a guide. We can use defence in depth, where there are multiple layers of security, such that holes in one layer are covered by another layer.  We can use the principle of least privilege, where by default, nothing is allowed, with the necessary functionality enabled one by one.  We can ensure that security is in a system by design, rather than being retro-fitted, such that the holes don’t appear in the first place.
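As a purely illustrative sketch (our own, not from any real automotive stack), the principle of least privilege can be pictured as a default-deny allowlist on a hypothetical message gateway: nothing passes unless it has been explicitly enabled.

```python
# Purely illustrative: least privilege as a default-deny allowlist
# on a hypothetical gateway. All names here are made up.
ALLOWED_FUNCTIONS = set()  # by default, nothing is allowed

def enable(function_id):
    # Necessary functionality is switched on one by one.
    ALLOWED_FUNCTIONS.add(function_id)

def gateway_accepts(function_id):
    # Anything not explicitly enabled is rejected.
    return function_id in ALLOWED_FUNCTIONS

enable("read_speed")  # the one function this unit genuinely needs
print(gateway_accepts("read_speed"))      # True
print(gateway_accepts("flash_firmware"))  # False - never enabled
```

The design choice is that forgetting to configure something fails safe: an omission blocks a feature rather than opening a hole.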

System Boundaries

Currently, we can realistically draw a system boundary around the vehicle for the purposes of testing or analysis.  However, the horizon is full of vehicles that are connected to each other, to the cloud, to infrastructure, to peripheral devices and wearables. The vehicle will not just be the end target, but the means to an end, whether as a backdoor into infrastructure, or for ram-raiding, terrorism, privacy violation, financial crime and all the other criminal activities that now take place through more conventional computing methods.

Cybersecurity in many ways is a mind-set.  Not everyone has to be aware of every technique or attack, but knowing when and where to involve a security engineer is crucial. Even if the actual product itself is purely mechanical, sooner or later it will be attached to or integrated with an electronic system.  And computing and IT are almost certainly used for designing the product, for protecting the intellectual property attached to it, and for data analytics. Away from automotive technology, cybersecurity awareness can take many forms in many disciplines.  In design disciplines, we look at developing security that lay users can use to protect themselves. With psychology and linguistics, it could allow us to distinguish various types of threat actors.  Understanding of the law could help with questions to do with product liability.


Here at HORIBA MIRA, we aim to be a trusted partner to manufacturers. That means working collaboratively with the project team every step of the way. Our cybersecurity related projects cover consultancy, concept development (security by design) and independent security assessment, whether at component, vehicle or lifecycle level.

We are also active in the research sphere, whether through collaborative projects funded by InnovateUK (such as 5Stars), through applied research internally, or through embedded PhD programmes in the business.  HORIBA MIRA is also heavily involved with the development of international standards in the field, with our experts representing the UK in the development of ISO/SAE 21434 (Road Vehicles – Cybersecurity Engineering) and ISO 26262 (Road Vehicles – Functional Safety).

Join HORIBA MIRA at Self Driving Track Days in Milton Keynes this July. Book your tickets to attend >>

Will Driverless Cars Grow up in 2018?

From the driverless vehicle hype seemingly hitting a peak in 2017, this year began with the news that Waymo, a subsidiary of Alphabet (Google’s parent company) and ride-hailing giant Uber had settled their legal wrangle, with the former taking a sizable chunk of the latter.
On many fronts, this is great news – intellectual properties have been successfully protected, and two of the largest investors in trying to find solutions to the autonomous vehicle problem are now aligned.
Waymo’s rapidly developing test and development vehicle fleet is spreading across the US, and despite Uber’s own bruising encounter with the regulators in the UK, the company’s R&D division ‘Uber ATG’ has continued to invest and make strides towards creating robot drivers. The much-publicised fatality involving one of Uber’s autonomous vehicles in March 2018 has sent a much-needed shockwave across the industry, and will – despite the tragedy itself – undoubtedly serve to improve safety practices and both virtual and closed-course testing.
At CES in January, the world’s press started to understand the scale and complexity of autonomous driving, being able to compare not just two, but more than a dozen companies demonstrating their technologies in one place – and get a taste of the differences between those that are starting on the journey, and those that are nearing the end, with viable products entering the marketplace in the not-too-distant future.
In the UK, the Government’s programme of funding groups of companies to embark upon research projects will ‘start to end’, with the last of half a dozen funding rounds opening for bids this year and dozens of the previously funded projects reaching a zenith in 2018, with public trials, demonstrations and public exhibits.
With more than 50 projects already match-funded by the taxpayer, and various consortium members made up of academia, industry players, the public sector and innovative new organisations and startups venturing into this field, it’s reasonable to say that this joint investment by the public and private sectors has enabled the UK – if not to take up the global lead – at the very least to reduce the gulf behind countries such as Germany, Japan and the US.
There are various international ‘leader boards’ which rate countries’ readiness for autonomous vehicles, all of which now agree the UK is Top 10 material… imagine where we were before the £100s of millions of investment!
It’s important to note that government investment won’t ‘fall off a cliff’, because there will continue to be opportunities for funding support through tax incentives for R&D (through Innovate UK), promotional support for international trade shows (through the Tradeshow Access Programme), and networking events for numerous specialisms run by the Knowledge Transfer Network and the various industry Catapults.
For consumers, however, none of that matters – other than perhaps a smattering of national pride. For you and me, as buyers of cars, what do driverless cars actually mean? Will we go out and buy one?
The story of driverless cars, first and foremost, is about safety.
Don’t get hung up on a computer taking away your pleasure in driving… it will merely replace the driving that is not enjoyable, and perhaps help you avoid accidents when driving is difficult: in bad weather, when you’re tired, near a school. No driver, anywhere, could begrudge that, or want to put others in harm’s way.
Improved road safety alone can easily repay these investments, just for the UK… even discarding the commercial opportunities in developing and exporting national expertise. Let’s take the number of annual road fatalities in the UK as 1,700 people. DfT figures tell us that the cost to the taxpayer of each fatal accident is about £1.8m; multiply that up, and that’s about £3 billion lost every year.
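Spelled out as a quick calculation (using the article’s own indicative figures, not official statistics):

```python
# Indicative figures from the paragraph above.
fatalities_per_year = 1700
cost_per_fatal_accident = 1.8e6  # pounds, approximate DfT figure

annual_cost = fatalities_per_year * cost_per_fatal_accident
print(f"about {annual_cost / 1e9:.2f} billion pounds lost per year")
# prints "about 3.06 billion pounds lost per year"
```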
The real figures (both of fatalities, and of the cost to the taxpayer of each one) are higher… but indicatively, you can see the return on investment in CAV technologies is clear. And nobody is talking about the roughly 4,000 fatalities caused by human drivers every single day globally, or the multi-billion-pound negative impact of that.
Aside from that, the people that write, enforce and are governed by various laws covering vehicles (actually more than 100 statutes going back hundreds of years) also need to understand the new technologies, and insurance companies need to understand where liability might end up in the future. Other businesses, such as those that employ drivers on long or short distance deliveries, need to understand how their business or employment models might change, and the education system needs to adapt too.  It’s those ideas which will form the foundation of a new series of web interviews we’re producing on AutoSens TV.
We have been playing around with autonomous vehicles, and helping to teach people about them, for a couple of years now, but 2018 will provide the largest platform to date to talk about autonomous vehicles to the Great British public, as the London Motor Show will have a new feature, a dedicated Autonomous Vehicle Zone.
For many of the 60,000 expected visitors, it will be their first encounter with autonomous vehicles, as they have mostly been confined to test tracks, workshops and secret test facilities.
But at a motor show at ExCeL, in London? CES is the Consumer Electronics Show (with cars), the Detroit Motor Show now has a technology showcase… the worlds of technology and automotive have combined.
When you consider that the Goodwood Festival of Speed’s most popular stand was Tesla, the time is right for the UK’s biggest car events to embrace autonomy and ‘close the loop’, showing that the UK still has ambition to progress on the international stage.
Cars are connected and more automated than they’ve been before, and I’m excited to see how the public reacts to the next generation of technologies that will be seen on our roads.
If you can’t wait until May then there are lots of free newsletters you can sign up to, for example by Robohub, MCAV or Innovative Mobility Research – and if you really want to get your hands dirty, there’s always Formula Pi – scale model driverless cars that you can code and race your friends and even compete against other people remotely – or Self Driving Track Days, where engineers gather to share their technical insights to help accelerate the spread of knowledge to new companies across the supply chain.
In 2018, there’s really no excuse not to get out and learn about this new wave of exciting new technology, because it’s well on its way.

Driverless cars will burn out your eyes and eat your soul

All the nonsense in one place; busting the myths about Self Driving Cars for your easy consumption.  And please remember to fasten your seat belts before take-off.

  1. The Trolley Problem
  2. Driverless cars will blind us all with laser beams!
  3. Cars will be easy to hack
  4. We’ll all lose our jobs…

The Trolley Problem

Turn left and mow down a group of children, or turn right and hit a tree, killing the vehicle’s occupants.

The modern version of the trolley problem, created as a thought experiment, has often been applied to autonomous vehicles as a warning on the complexity of decisions they will supposedly have to make.

In a nutshell, this is a daft problem which provides a philosophical approach to an ethical dilemma without taking into consideration some very important and fairly simple facts.

  1. Cars are already engineered to protect their occupants far better than they are engineered to protect pedestrians or other road users. The safest car on the road (the Volvo XC90), measured by Euro NCAP, has a 98% safety rating for occupants and a frankly appalling ~75% rating for protecting pedestrians. In fact, not one car goes above 85% for pedestrians.
  2. Given a complex situation, a driverless car will just come safely to a stop. Waymo testers are well known for their ingenuity and creativity in trying to defeat their own cars’ ability to perceive their environments, and when faced with the ‘impossible to plan for’ – in one case, a crowd of people dressed as frogs hopping across a road – Waymo had programmed the vehicle to just gently brake until the route was clear.
  3. Nobody will ever get into a vehicle that favours the safety of someone else over their own. Credit to writer and commentator Alex Roy for that, but it’s true… unless you’re getting into a vehicle with the express purpose of never getting out, like the Suicide Roller Coaster. Rather than considering a vehicle that’s safe for everyone, we are simply not generous enough to buy a vehicle that protects only the people around us and not ourselves. After all, we want to survive the journey to McDonald’s.

Of course, the nature of psychologists is to disagree, and the nature of philosophers is to question the meaning of everything. Am I right, wrong, is that a question or a lump of cheese?  What does it all mean?

Driverless cars will blind us all with laser beams!

Wrong, wrong and wrong! Actually, wrong by a factor of 2 million, and I’ll explain why.

For this, we can fall back on technology facts and the laws of physics, which thankfully are not subject to votes, gerrymandering, or sinister algorithmic fake-news bots controlled by scary government forces.

Any device using a laser must be allocated to a Class, with 1 at the bottom and 4 at the top. The lasers you find in CD players and laser copiers… and lidar units, are all Class 1.  That’s the power and frequency at which there is ‘no possibility of eye damage’, i.e. there have never been any recorded cases of damage, even under extended exposure.

But that doesn’t tell us why… or indeed what could happen if another Class of laser were used.

To be effective, a lidar unit (Light Detection and Ranging) emits a tiny bit of energy in the infrared part of the electromagnetic spectrum. This energy is focused into a tight beam (i.e. turned into a laser) and then steered around a scene by a mirror, a motor, or a tiny device that combines the two, called a MEMS.  The energy, in the form of photons, travels out from the lidar, hits an object, and some of those photons bounce back.  The unit (and associated processing and software) then measures the timing and location of those bounces and, similar to your eyes, builds a rough 3D picture.
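The timing principle described above can be sketched in a few lines (an illustration of the general time-of-flight idea, not any vendor’s actual processing code):

```python
SPEED_OF_LIGHT = 299_792_458  # metres per second

def distance_from_echo(round_trip_seconds):
    # The pulse travels out to the object and back again,
    # so halve the round trip to get the one-way distance.
    return SPEED_OF_LIGHT * round_trip_seconds / 2

# An echo arriving 200 nanoseconds after the pulse left
# puts the object roughly 30 metres away.
print(round(distance_from_echo(200e-9), 2))  # 29.98
```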

While lidar is a fairly mature technology – first theorised in the 1930s and used in the Moon landings in the 1960s – it’s going through a few big changes. Arguably the two most important are, first, the move to scanning from a single mount (most commonly spinning lidars, as seen regularly on autonomous R&D vehicles) and, second, happening right now, the move away from this towards ‘solid-state lidars’, where a wide field of view can be taken in by multiple smaller, cheaper sensors rather than several big, complicated, expensive ones.

Either way, the maths is roughly the same, so we’ll use the example of the well-known ‘Velodyne Puck’, which is widely used and for which Velodyne has usefully published detailed specifications.

Each sensor has between 16 and 128 lasers, which rotate rapidly at approximately 10 hertz, or 10 full revolutions per second, creating about 30,000 beam points per revolution.


Each laser pulses at a wavelength of 905nm (that’s just below 1/1000th of a millimetre) with a power of 1 or 2mW. That’s roughly the energy needed to lift a single grain of rice.

For comparison purposes, this is about 1/5,000th or 0.02% of the power output in your standard 10-watt LED headlamp bulb on a low-beam setting.

Any single laser beam would sweep across an inadvertently glancing eye in approximately 1 millisecond. And since each individual laser is mounted at a different orientation and angle, multiple lasers cannot strike the same spot on the eye at once.

1000th of a second… and a very small amount of power.

That alone is a factor of 1 million to 1 in favour of “it’s going to be fine”.
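To put rough numbers on that margin, here is a minimal back-of-envelope sketch in Python, using the figures above (a 2 mW laser, a 1 ms sweep across the eye, and a 10 W headlamp bulb for comparison):

```python
# Back-of-envelope comparison of a lidar laser against a headlamp bulb.
laser_power_w = 0.002      # ~2 mW per laser
headlamp_power_w = 10.0    # typical LED low-beam bulb
exposure_s = 0.001         # ~1 ms for a beam to sweep past an eye

# Power ratio: the laser runs at a tiny fraction of a headlamp's output.
print(f"{laser_power_w / headlamp_power_w:.4%}")  # -> 0.0200%

# Energy actually delivered during one sweep past an eye.
energy_joules = laser_power_w * exposure_s
print(f"{energy_joules * 1e6:.1f} microjoules")   # -> 2.0 microjoules
```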

Going up a notch

Let’s say we swap all this out for a Class II laser – up to 1 mW (a thousandth of a watt) of visible light. Even then, damage would only start to occur after around 1,000 seconds of continuous exposure to a single spot within the eye. That means sitting very still… still enough for a Victorian photograph – nearly 17 minutes!

Driverless cars at a disco…

Ramp it up another notch to Class IIIa lasers, up to 5 mW of power, and finally we start to see some medical evidence of damage to eyes exposed to certain wavelengths for several continuous seconds.

Up another notch to Class IIIb and yes, damage is probably going to occur, because that class covers 5 mW up to 500 mW – half a watt – of power. Class IV is where we start to burn holes through things and shine beams at other planets.

The length of a wave is directly proportional to how much you want the person to leave

From your school years, you may recall that wavelength and frequency are inversely related: if you decrease the wavelength, the frequency of the wave increases. The higher the frequency, the shorter the wavelength, and vice versa.

What most people do not realise, however, is that the energy carried by each photon changes too.

That means that visible light, for example, has more energy in its photons than the longer wavelength, lower frequency light in the infrared band.
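The relationship behind this is E = hc/λ: a photon’s energy is Planck’s constant times the speed of light, divided by the wavelength. A small sketch (the 550 nm comparison wavelength is just an illustrative mid-green value, not from the article):

```python
# Photon energy E = h*c / wavelength: shorter wavelength means more
# energy per photon, which is why visible photons out-punch infrared ones.
H = 6.626e-34    # Planck constant, J*s
C = 2.998e8      # speed of light, m/s
EV = 1.602e-19   # joules per electronvolt

def photon_energy_ev(wavelength_nm):
    """Energy of a single photon at the given wavelength, in eV."""
    return H * C / (wavelength_nm * 1e-9) / EV

print(round(photon_energy_ev(905), 2))  # infrared lidar photon -> 1.37
print(round(photon_energy_ev(550), 2))  # mid-green visible photon -> 2.25
```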

Let’s take the example above of the Velodyne Puck, which operates at a wavelength of 905 nanometres, compared with visible light, which ranges from about 390 to 770 nanometres (nm) – roughly two fifths to three quarters of a thousandth of a millimetre. That’s even smaller!

Because a photon’s energy is inversely proportional to its wavelength, we can simply divide one wavelength by another to get the ratio by which the energy per photon changes: halve the wavelength and you double the energy.

905 divided by each end of the visible range, 390 to 770 nm, gives us another multiplier… from about 2.3 down to about 1.2.

That means that in addition to our time factor of 1 million, we have another factor: the wavelength difference conveys a different amount of energy per photon. So, at visible wavelengths, there’s up to about twice as much energy per photon travelling down the laser beam as in this infrared unit. Since you’re still persevering, let’s round it off at 2.

Finally, we can multiply our energy factor by the time factor… and we get to 2 million!
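As a quick sanity check, here is the same division in Python, using the wavelengths quoted above:

```python
# Photon energy is inversely proportional to wavelength, so dividing the
# lidar wavelength by each end of the visible range gives the factor by
# which energy per photon rises at visible wavelengths.
lidar_nm = 905
visible_nm = (390, 770)   # violet to deep red

ratios = [round(lidar_nm / v, 2) for v in visible_nm]
print(ratios)             # -> [2.32, 1.18]

# Round the energy factor to 2 and combine it with the ~1,000,000x
# time/power margin from earlier.
print(2 * 1_000_000)      # -> 2000000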

You’d need to be within a few metres of 2 million lidar units at once to reach that threshold, and even at the rate of growth we’ve seen in the lidar market recently, that would be a challenge. Before we conclude, remember too that the human body is self-healing, so even if this weird miracle were to occur, minor damage of this kind would typically heal within a few days.

In summary, no, we’re not all about to go blind from Self Driving Cars.

Cars will be easy to hack

Certainly not helped by the apocalyptic scenes of driverless cars being taken over to hinder our heroes’ progress in Hollywood movies, the much publicised ‘Jeep Hack’ – a 2015 attack on a Jeep Cherokee which took control of some functions – actually took 6 months to plan and implement, time which included a significant amount of direct access to the vehicle in question.

As was succinctly pointed out to me by a senior engineering leader, ‘Engineers don’t make things to fail’. Manufacturers apply the scientific method: if there’s evidence that the existing approach does not work, the approach gets modified.

It might be distasteful to consider the following, but I ask you to spare a moment as we venture into the dark soul of a nefarious criminal, assassin or scruffy-haired ne’er-do-well.

Murder most horrid. I suspect the perpetrator had a chip on their shoulder.

Yes Mr Big, you want Bob Jones to be eliminated, I understand. Well, of all the methods available, I would definitely choose the most fiendishly complex and time-consuming, and definitely not use a blunt object, a readily available firearm or knife, or just set a fire in their basement.

Evidently many years watching Criminal Minds have served me well.

Anyway, I digress. The problem is that, given these are technology systems, the pay-off needs to be technologically motivated. Sure, there are weaknesses in any system designed to communicate data from one place to another, and the many companies making the hardware and software in a sensor-laden vehicle are increasingly paying attention to how those processes could be hijacked. But add multi-sensor validation into the mix, and the problem moves from edge case to almost impossible.

Let’s say you can fool a vehicle’s vision system into misreading a road sign (see the genuine example here of one algorithm thinking the cat is actually guacamole – the image has been modified specifically to defeat the computer vision system).

The same driverless system also has other data sources: lidar and radar to detect traffic density and enact safe operating distances, mapping data including junction configurations, and in the future, wireless connectivity to traffic management systems.

The future of ‘over the air’ updates, now a regular feature in new cars from an ever increasing number of manufacturers, coupled with the ability to alter how the processors around the vehicle actually behave (so-called Field Programmable Gate Arrays, or FPGAs), means that any alterations to the system can be fixed easily, any software amendments authorised against a community of other vehicles, and any unauthorised attack detected.

That means that, much like your smartphone or friendly neighbourhood laptop or tablet, security updates will be downloaded without your knowledge with validations carried out against remote encrypted checking files.

It’s not a great stretch of the imagination to consider that blockchain-like technology could be used to validate updates against the server at your local dealer or even the other cars in your street.

Hacking cars is just so fantastically complex, so utterly pointless in comparison with simpler means of control or attack, and so easily defeated, that the realms of practicality simply don’t extend to include it as a worthwhile exercise for your unfriendly neighbourhood gangster or international megalomaniac.

We’ll all lose our jobs…

Do you work as a driver?

Autonomous cars will arrive with a whimper. As I have written before, physically manufacturing and shipping a vehicle to a buyer is a complex task; multiply that task across the population of the world and you have a revolution that will take 20 years to get halfway, and perhaps 30–40 years to conclude.

In the context of technological progress, that’s not particularly fast, and thankfully it increases the visibility of the issue across the education and skills supply chain, as well as giving those professionally employed as vehicle operators plenty of warning of the imminent change of vehicles ‘going driverless’.

Few of the jobs I have enjoyed in the past 15 years existed when I was at school, largely because they have revolved around technologies invented after I left. Does that mean I won’t be able to survive the technologies that have not yet been invented? No, of course not; it just means I shall adapt.

Denise started wearing a wig to get Gerald’s attention because he started to prefer blondes. She adapted.

Despite some indications to the contrary, notably in politics, humans are actually quite good at learning and adapting to their environment, i.e. not just surviving, but thriving – despite challenges.

Higher-skilled drivers, and those who offer additional services, will keep their place for some years even after driverless vehicles arrive, as people with luggage or special access requirements may still need help getting in and out of vehicles – at least until technology can provide a better service at lower cost.

That, you see, becomes the pivot point: not just the direct cost of technology development, but the indirect cost of providing a robust interface to the real world for users of the system – a driverless Uber that can automatically deploy a ramp, help me stand up, or leave my luggage at the entrance to the hotel lobby.

How many grocery delivery drivers will lose their jobs if the current convenience of ‘mouse click to kitchen delivery’ turns into ‘mouse click to tedious slog from the roadside driverless van’?

Consumers would be worse off, so drivers are safe – for now.

The technologies which enable autonomy are all advancing apace, between shrinking processors, sharper sensors and smarter software, but the shape and nature of economics will not change.

Customer service and convenience will dominate the evolution, reducing cost and increasing business profitability will pay for it, and neither can be compromised for the sake of replacing humans with robots.

To that end, remember that technology does change, but human nature does not.

It is our habit to live unwanted change through the five stages of grief – denial, anger, bargaining, depression and acceptance – when it’s easier just to look at the cold hard facts.

Yes, things will change, yes, life will be different, yes, jobs will be lost… but actually, things won’t change all that suddenly, jobs will actually change and society will adapt and overall, life will go on just as it always has, surviving and thriving despite the seismic sociological shifts that new technologies trigger.

Want to find out more about the technology in self driving cars? Come along to our Self Driving Track Days workshop in Milton Keynes on 10 July >>

Please consume responsibly!  This article is meant as a call to arms and lightweight introduction to lots of different ideas. If you want to learn more, come to an event and talk to the real experts – but all the same, we hope you enjoyed it and remember kids, check your facts! – Alex Lawrence-Berkeley

When and why your innovation could be funded

We’ve been to a few events recently, including two that particularly stood out, organised by the brilliant team at a small and occasionally overlooked organisation called the Knowledge Transfer Network – a unified body (formed from a previously multi-disciplined 15 organisations) tasked with bringing businesses together at events to share information, learn about new technologies, network, and find out about Innovate UK funding calls.

These two events, Quantum Technology in Transport and Embedded AI, showed not just the admirable variety of technology developers in the UK, from small scale to multinational, but also highlighted the pivotal role played by funding organisations, notably Innovate UK, as a central facilitator and aggregator of investment in the development of new technologies over the short, medium and long term.

But how do you define where technology can be used, or trickier still, when an immature technology might make an impact?

This is where a tried and tested measure called the ‘Technology Readiness Level’ can be applied. This multi-step scale, which has slight variants in the space, energy, defence and other sectors across the UK, EU and US, defines how mature a technology is, and thus whether it would benefit from further investment to help it mature into a commercially successful product – eventually returning that investment to the investor.

In the case of Innovate UK, the investor is also the taxpayer (that’s you and me), so the commercial success of a product means both improved tax revenue for the government at home and the potential for greater export volumes from the country, improving the balance of economic advantage.

Technology Readiness Levels

TRL 1. basic principles observed
TRL 2. technology concept formulated
TRL 3. experimental proof of concept
TRL 4. technology validated in lab
TRL 5. technology validated in relevant environment (industrially relevant environment in the case of key enabling technologies)
TRL 6. technology demonstrated in relevant environment (industrially relevant environment in the case of key enabling technologies)
TRL 7. system prototype demonstration in operational environment
TRL 8. system complete and qualified
TRL 9. actual system proven in operational environment (competitive manufacturing in the case of key enabling technologies; or in space)

Innovate UK operates mainly between TRL 3 and 7, where the difficulty is greatest, taking on the mantle from the national research councils, which fund lower levels of research, before the commercial sector (also known as the glory seekers!) takes on the significantly reduced risk of a close-to-perfected product.
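As a toy illustration, the scale and the funding bands described above can be written as a simple lookup (the thresholds here are simplified from this paragraph, not an official rule):

```python
# The TRL scale above as a lookup table (descriptions abbreviated).
TRL = {
    1: "basic principles observed",
    2: "technology concept formulated",
    3: "experimental proof of concept",
    4: "technology validated in lab",
    5: "technology validated in relevant environment",
    6: "technology demonstrated in relevant environment",
    7: "system prototype demonstration in operational environment",
    8: "system complete and qualified",
    9: "actual system proven in operational environment",
}

def typical_funder(trl):
    """Simplified bands: research councils below TRL 3, Innovate UK
    for TRL 3-7, the commercial sector for a near-finished product."""
    if trl < 3:
        return "research councils"
    if trl <= 7:
        return "Innovate UK"
    return "commercial sector"

print(typical_funder(5))  # -> Innovate UK
```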

One step further back from Innovate UK lies the strategy it is required to fulfil, often set a few years beforehand. So the identification of CAVs (Connected and Autonomous Vehicles) as of real importance in the early 2010s helped to focus attention on the many different areas that needed to be looked at – not just legislative (road law in the UK is very complex) but also commercial, educational, industrial and societal.

Two of the larger government departments, BEIS and DfT, had a ‘special cuddle’ and out shot CCAV (the Centre for Connected and Autonomous Vehicles), sibling to the Office for Low Emission Vehicles, to define a more detailed strategy and make some intelligent investments to start resolving these problem areas.

The large research projects, of which there have been more than 50, have investigated many of these areas. Since 2014, there have been three CAV funding rounds and, so far, one separate CAV testbed round. Aligned to those, there have been funding calls for Emerging and Enabling Technologies and AR/VR, plus the open innovation calls. In short, a lot of money has been spent to stimulate further private investment (almost all of it match-funded 50/50 by industry).

Many of these will be coming along to The London Motor Show, and we expect that a few will also be coming along to Self Driving Track Days in July.

There are three more relevant funding rounds this year, including another CAV testbed round and a fourth (and probably final) CAV research round – so there’s still plenty of room for more innovation and research for bold new ideas if you want to find out more and start working in the sector.

Our top tips

  • Sign up to the KTN newsletters (as many as are relevant)
  • Attend some events
  • Read, watch and learn as much as you can!

We have lots of articles, videos and will always try to point you in the right direction, but there’s little better than getting yourself to a real event where engineers are on-hand to give you genuinely exciting and hands-on experience.


Prize draw results – Did you win?

From November to February, we have invited attendees to enter a prize draw at events that we’ve attended.

And we have now held the draw to find some lucky winners, all of whom get free entry to the event – a full day of workshops, driverless technology demonstrations and of course refreshments throughout the day – taking place at Daytona Milton Keynes on 10 July 2018.

At four events in the past four months, the winners are: Guilhermina Torrao, a PhD researcher at the University of Bradford; Tim Biggs, Sensor Technologist at AVX Electronics; Pedro Machado, a Design Engineer at Sundance Multiprocessor; and Anna Relton, who found us at the Transport for London Museum’s event, “Transport and the Future”.

We’ll be talking to all four lucky winners in the next few weeks to find out more about their interests and experience, and why they are drawn to autonomous vehicle technologies.

Bookings are already open for Self Driving Track Days, and we expect the agenda to be completed and announced in the spring. In the meantime, we are also supporting work on the brand new Autonomous Vehicle Zone at The Confused.com London Motor Show, which will include our trademark mix of exciting technology displays and fascinating educational insights.

As we march toward the event date, we have another prize draw: anyone who signs up to our mailing list for the first time in March could be in with a chance to win a free place!

It’s simple, all you have to do is sign up with your email address to be in with a chance of winning 😊


Save the date – 2018 event date announced

Coming to the year’s end gives us a chance to review what we’ve achieved, and sometimes that means doing things differently – so we’ll do fewer but bigger events, including working on The London Motor Show 2018, where we are developing an exciting brand new feature.

It would be easy to carry on as before but as an event organiser, it’s important to take on feedback so we are moving to a UK venue in the autonomous vehicle heartland of Milton Keynes and shortening the event to one day.

We trialed this event format in Austria in July 2017 and it worked very well for attendees and sponsors (who typically have quite large overhead costs to consider in their commitment to taking part).

The event will include a number of half-day workshops, a small exhibition and of course, our driverless vehicle demonstrations, with space for 150 attendees to come and take part.  Lunch will be included, and there will be a BBQ and karting in the evening as an optional extra.

Please add the date to your diary… Tuesday 10th July 2018 at Daytona Karting, Milton Keynes.

We’ll be confirming more details in early 2018, so if you are not already on the mailing list, please sign up now.

Engineers and technologists come together to take part in unique event combining autonomous vehicle demonstrations and learning workshop

Self Driving Track Days comes to OAMTC Teesdorf, Austria on Friday 28 July 2017 for a 1-day track-side event incorporating autonomous vehicle demonstrations and a full day workshop for anyone interested in learning more about driverless vehicles and related technologies. The event will lead attendees through the basic principles of building an autonomous vehicle; design and safety implications; plus a detailed overview of the sensor, processing and software technology suppliers in this rapidly growing sector.

“Excellent, inspirational” Sanyaade Adekoya, Pat-Eta Electronics Ltd

“The event provides an opportunity for learning, networking, research and development, and collaboration, with live demonstrations of vehicles on-track, guided by the experts who built them. Sense Media has run a series of well-received demo events in the UK, France and USA – this event at OAMTC will be the first demonstration of autonomous vehicles in Austria that members of the public can sign up to,” comments Robert Stead, Managing Director of Sense Media.

He continues “Wherever you are based, whatever autonomous technology you focus on, whatever your resources – Self Driving Track Days is on your side to educate and provide hands-on experience for those with an interest in autonomous vehicle technologies.”

Introductory and intermediate level workshop sessions

Attendees will gain an understanding of the technologies that enable autonomous driving in our technology showcase, with discussions around AI, Sensor Fusion, Open Standards, Functional Safety and more. The workshop has been developed in close coordination with experienced technologists from across the autonomous driving industry. 3-hour sessions will be delivered by specialists from TU Graz, Codeplay Software, AIMotive and Horiba Mira, with further contributions from AutonomouStuff, NovAtel and Quantum, and a joint demonstration from NXP and Intempora.

“A really useful and informative day. The sponsors were very helpful and thought provoking. Simply too much to take in on just one day and I thought of so much more to ask about while travelling home” James Hardy, University of Derby

View the full agenda >>

Autonomous Vehicle Demonstrations

In addition to the workshop, participants will have the opportunity to ride in an autonomous vehicle demonstrating a range of sensors and communication technologies. This unique opportunity will give attendees a real insight into the operating principles and practical considerations of building autonomous vehicles.

Visitors and press are invited to see first-hand what the latest technology can do, how the vehicle interacts with other vehicles and environment, and talk to the engineers developing the full vehicle systems. In this intimate venue dedicated to hosting Self Driving Track Days, attendees have time to get up close and hands-on with the vehicles to learn about how they operate.

Live demonstration of ADAS sensors in a fully equipped test car on the test track, by TU Graz. The series-production BMW 640i has a full ADAS package, including:

  • ADAS sensors:
    • Continental ARS 308 combined short/long range radar in target and object mode
    • Mobileye mono camera
    • Cohda MK4 Car2X sensors
  • Reference Measurement system:
    • RTK-GPS
    • Genesys ADMA (Inertial Measurement Unit)

Find out more about the autonomous vehicle demonstrations >>

“Friendly atmosphere combined with leading technology and specialists.” Joseph Griggs, Brunel University

This full day including demonstrations and workshop is priced at only €99 for the Early Bird price until 7 July, €159 thereafter. BOOK HERE >

Top reasons to attend Self Driving Track Days

  • Mini-exhibition – free to all attendees
  • Full day of introductory to intermediate training workshops
  • Ride in an autonomous vehicle, experience the technology in action, first hand
  • Gain an understanding of the technologies that enable autonomous driving in our technology showcase, with discussions around AI, Sensor Fusion, Open Standards, Functional Safety and more
  • See demonstrations of these technologies in action in the onsite exhibition
  • Create your own customised agenda, choosing from our range of workshops for beginner to expert level
  • Put your questions to the experts to expand your knowledge of autonomous vehicle engineering
  • Meet engineers working in the sector who are keen to develop new relationships to progress driverless vehicle development
  • Take advantage of extensive networking refreshment breaks to make new connections

Find out more at www.selfdrivingtrackdays.com and book your place for just €99 before 7 July