We caught up with Intrepid ahead of the Self Driving Track Day series on 10 July. Intrepid is sponsoring the event, and the team told us why ADAS and autonomous vehicle development represents such a good opportunity, and what we can expect to see in the automotive Ethernet space in the next five years.
How do you describe your company to a new connection?
Intrepid’s main role is providing professional development tools for OEMs and suppliers working on automotive electronics. These tools make engineers and technicians more productive, meaning they do their jobs faster: finding and solving problems faster, verifying that everything works faster, and so on. Intrepid has always been able to come up with great software, hardware and direct support that has made a big difference for our customers.
Why does ADAS and autonomous vehicle development represent such a good opportunity?
ADAS and autonomy are as big an opportunity as the creation of the Internet or the smartphone. They will change life as we know it. That means there will be a huge amount of investment in the industry, which presents a great opportunity for companies to grow and for people to learn, do amazing things and change the world. Level 5 autonomy is a great challenge, but the good news is that even if Level 5 never happens, or takes a long time, cars will become a lot safer and more convenient along the way.
What challenges need to be overcome in handling data for the cars of tomorrow, and how can Intrepid help?
With our data loggers we make collecting data almost too easy. The amount of data is tremendous. One problem we have already solved is using the cloud to store data, with a web app called DataSpy to search, analyse and view it. This avoids the need to move big data sets around (“Floppy Net”). Another challenge will be deciding what data to keep and what to throw away. Finally, I think AI training data that requires the cameras to be in the same calibrated position is a problem that has to be solved to make collected data reusable.
What can we expect to see in the automotive Ethernet space in the next 5 years?
The exciting future is an IP-everywhere strategy. This means replacing lower-speed CAN links with bus-topology Ethernet (10BASE-T1S). If that happens, everything on the vehicle can run an IP stack, allowing simple network architectures and reuse of internet protocols, including things like security.
Multi-gigabit links are coming to absorb much of the bandwidth needed by high-resolution sensors like LiDAR and cameras. These will compete with SerDes technologies like GMSL and FPD-Link III because of their relative simplicity. A lot of dedicated hardware will be needed to handle multi-gigabit traffic, because a CPU cannot keep up at these speeds.
How do you feel the industry can help resolve the shortage in training and recruitment?
There is no shortage of interest in this field, but there is a shortage of qualified candidates. A great way to increase the number of qualified candidates is to make the technology more accessible. Almost no company is talking about what it is doing, because the field is very competitive right now. Udacity is doing some great things. Dave Robins, Intrepid’s CEO, is running a meetup in Detroit. Student competitions are great as well, and Comma.ai’s open-source approach to self-driving cars is really interesting.
How does your organisation fit into the testing and validation ecosystem, and how do you stand out?
Intrepid’s role is to build professional productivity tools. Most teams working on autonomous programs are using a lot of off-the-shelf consumer PCs in the back seat or trunk. Much of this equipment draws a lot of power, doesn’t handle temperature well, doesn’t power down automatically or quickly, or is simply too big or too expensive for a fleet of hundreds of cars. Intrepid is solving all of these issues with our RADGigalog.
What will attendees at Self Driving Track Days learn about when they meet you?
Intrepid is already a big supplier in the UK; attendees will learn that Intrepid also has big plans for supporting autonomous vehicle development.
What are you hoping to achieve as a business this year?
We are working closely with a number of autonomous groups, and we are hoping to become an indispensable part of their teams. That means Intrepid makes things go faster for them, and it’s great value to work with us. We are also developing some awesome new products for autonomy that we will be announcing at CES in January 2019.
Is there anything you’d like to achieve personally this year?
My daughter and I are learning to play the ukulele together. It’s really fun. Technically, I would like to learn more about neural networks.
We caught up with the Hexagon team ahead of the Self Driving Track Day series on 10 July. Hexagon is sponsoring the event, and the team told us how the industry can help resolve the shortage in training and recruitment, how they are staying ahead of the competition, and the limits to GPS/GNSS performance in today’s most advanced technologies.
Tell us about NovAtel’s relationship with Hexagon and what advantages that has brought to the business.
NovAtel is a part of Hexagon’s Positioning Intelligence division. Hexagon Positioning Intelligence leverages technology and products from its brands NovAtel and Veripos to deliver end-to-end assured positioning solutions. Being a part of the Hexagon family has allowed us to work closer with other divisions within Hexagon to pioneer solutions for emerging markets, specifically autonomous automotive applications.
What advances have taken place in positioning technology in the last 5 years?
Precise point positioning (PPP) is a positioning technique that uses GNSS satellite clock and orbit corrections to model and remove GNSS errors, resulting in a decimetre-level or better position solution. This technique is attractive in many markets because it is globally available and does not require additional hardware or infrastructure for the user. Although PPP has many advantages, the major technical challenge is the time it takes for the solution accuracy to converge. There has been significant investment in both industry and academia to resolve the technical limitations of convergence time with PPP solutions to the point where highly accurate, instantaneous PPP positioning is a reality today. Hexagon PI has launched TerraStar X technology to address the need for instantaneous convergence in high accuracy PPP solutions. When combined with automotive-grade GNSS receivers available through Hexagon Positioning Intelligence, this technology allows automotive customers to evaluate positioning performance in real-time using data delivered over the cellular network or the L-band frequency.
Sensor fusion is another topic that is becoming vital to the advancement of positioning technologies. There are many cases where GNSS alone cannot provide an accurate positioning solution, so other sensors (LiDAR, cameras, inertial measurement units, etc.) are being used to overcome the limitations of GNSS. Sensor fusion is the concept of these sensors working together on a platform to contribute to the overall solution. Hexagon PI is already integrating IMUs into our positioning products to deliver GNSS+INS solutions through our SPAN product line. Advancements in the sensors themselves have made them more practical to integrate, which has led to significant improvements in the availability of a GNSS solution.
GNSS chipset manufacturers are producing automotive-grade multi-frequency GNSS chipsets in anticipation of autonomous driving applications. Dual-frequency GNSS with safety and integrity is a critical component to achieving the accuracy and reliability required for autonomy. Hexagon PI is accelerating development in this area with our positioning solutions. By combining the GNSS measurements from these chipsets with inertial measurement unit (IMU) data and PPP correction services, our positioning engine delivers centimetre-level PPP positioning solutions in real time.
The availability of GNSS satellites has drastically increased over the last five years to become a truly global system. In addition to GPS, GLONASS, and BeiDou, Galileo became fully operational adding another 18 satellites for navigation. When a receiver utilizes signals from a variety of constellations, redundancy is built into the solution. More satellites mean more signal availability, which allows for reduced signal acquisition time and can reduce the impact of obstructions on the position solution.
How do you feel the industry can help resolve the shortage in training and recruitment?
Hexagon PI supports many university programs related to encouraging youth in their pursuit of GNSS knowledge. We also have a very successful intern program with an 85% post graduate rehire rate. Outside of these official programs we also connect with youth on a regular basis via school visits and social media to promote the knowledge of the industry.
Regarding intermediate to senior level responsibilities, we encourage knowledge sharing within our company through a robust set of programs that encourage creativity and innovation specific to geomatics, such as an internal mentoring program, lunch-and-learn opportunities and our annual Innovation Week, where everyone in the company can take time away from their regular responsibilities to work on innovative ideas of their own.
As Hexagon PI is a global organization with offices all over the world, recruiting is focused on both local and international candidates. Flexible and remote arrangements are common and encouraged to maintain our international presence. Building on flexible immigration programs within Canada, we have been able to attract senior level experienced people who can continue to build our internal knowledge share and elevate the skills of our existing employees.
Does positioning technology augment the vision system in an autonomous vehicle, or the other way around?
The simple answer is both. From our perspective, as a GNSS company, we rely primarily on positioning technology and use external sensors to augment the position solution. Vision systems and GNSS based position solutions are complementary technologies in that when vision systems fail (i.e. weather, poor visibility) the GNSS solution can provide an absolute position, which is crucial to maintaining lane-level accuracy of the autonomous vehicle.
What are the limits to GPS/GNSS performance in today’s most advanced technologies?
One of the major limitations of GNSS is that it does not work everywhere. GNSS positioning relies on the availability of satellite signals and performance is highly dependent on environmental factors. Buildings and trees obstruct GNSS signals, while tunnels and overpasses can completely block the signal. Most GNSS receivers can manage challenging environments through a combination of multipath mitigation algorithms, dead reckoning, and using data from inertial measurement units.
How is your company staying ahead of your competition?
Our range of products and knowledge in GNSS, specifically in safety critical applications, is what differentiates us from the competition. Hexagon PI has products that are uniquely positioned to deliver high precision GNSS solutions through every stage of development in autonomy. Our SPAN products, industry leading GNSS+INS solutions used as truth systems to enable automotive fleet integration, are the foundation for our sensor fusion algorithms. We recently announced TerraStar X, our technology platform to eliminate the convergence time of high accuracy PPP solutions. We have also worked with STMicroelectronics integrating our positioning engine and correction services on the world’s first automotive-grade multi-frequency GNSS chipsets. Hexagon PI continues to innovate to provide assured positioning anywhere.
Where do ADAS and autonomous vehicles sit in terms of importance among the wide range of applications for your technology?
ADAS and the progression toward autonomous vehicles sit very high in terms of importance for GNSS applications. An absolute position solution that is safe and reliable will be an integral part of an autonomous automotive application. There are still many challenges with ADAS that an accurate position solution can help to solve, especially as we transition to full autonomy.
What are you hoping to achieve as a business this year?
We will continue supporting autonomous driving programs worldwide with our industry-leading SPAN GNSS+INS positioning systems, and make steady progress towards delivering a functionally safe, ISO 26262 ASIL-B qualified mass-production automotive positioning solution based on our software positioning engine and TerraStar-X PPP services in the years ahead.
What would you tell a prospective employee about your company?
At Hexagon PI, we know that the success of our business is a direct result of our highly motivated and collaborative staff. We value our people as much as we value our business. We pride ourselves on providing a stimulating work experience and cultivating teams that encourage learning, so that you can hone your expertise and grow in your career.
We are not afraid to try new things, take calculated risks and find new opportunities. We value performance over procedure, setting measurable goals and working collaboratively to achieve the results we seek.
Some of the perks to working with Hexagon PI include: Expert teams, strong customer focus, flexible work hours, casual dress any day of the week, state-of-the-art work stations and employee led social and environmental committees. Not to mention comprehensive benefit packages, company paid professional development and market competitive salaries.
We caught up with Dr Lounis Chermak, Lecturer in Computer Vision and Autonomous Systems at Cranfield University. Dr Chermak is leading a workshop at Self Driving Track Days on 10 July on “From Robotics to Computer Vision: Self-Localisation in Autonomous Vehicles.” In this one-to-one interview we uncovered his involvement in designing a smart autonomous vision-based navigation sensor, the most important technology he believes automakers need to get right, and how he aims to “make the images talk.”
Which project has been the most interesting and rewarding one that you have worked on in relation to autonomy of systems for space exploration, autonomous platforms and human machine interaction?
Over the past few years, I have had the opportunity to work on or supervise several interesting projects related to the autonomy of systems. That being said, the one I am most attached to is my PhD project, where I had to design a smart autonomous vision-based navigation sensor that adapts to different space robotic platforms while providing trajectory generation without prior knowledge of the environment. This was challenging in many respects, since you need to take into account plenty of environmental and technical factors specific to the space context, such as limited hardware and extreme illumination and temperature. However, it was an exciting three-year journey in which I designed and developed everything, from the algorithms and software to selecting the sensors and implementing the hardware, right through to trials and validation. It was an intense process, but a really enriching and rewarding experience.
On the road to autonomy, what is the most important technology that automakers need to get right?
It is difficult to point to a specific technology, since autonomous systems are multidisciplinary, bringing together computer vision, robotics, control and guidance, sensor fusion, navigation and, more recently, machine and deep learning. The latter has been quite disruptive, leveraging the former disciplines and enabling a level of performance that had not been achieved previously. This could not have happened without the emergence of affordable and efficient parallel computing hardware. That being said, even if we are advancing faster than ever, AI-based solutions still need to reach a higher level of reliability. Most importantly, the main subject that needs attention right now is being able to sense and model any environment, or at least its critical elements, with maximum reliability, and this requires innovations in all the disciplines cited above.
You won the Selwyn Award from the Royal Photographic Society in 2017. Could you explain more about the work that won you this award?
The Royal Photographic Society is one of the world’s oldest photographic societies. Since 1994, the Selwyn Award has been given annually to someone aged 35 or under who has conducted science-based research connected with imaging. Rather than recognising a specific piece of work, it is a distinction for the impact my work has had on the field of imaging early in my career.
I believe that when you receive such an award, it is quite surprising, and the first thing that comes to mind is: me? Really? That was certainly the case. Then you start looking back at all your achievements and try to find the rationale behind it. I realised that I had achieved quite a lot over a short period of time, whether on the academic side or in delivering on industrial contracts. In fact, my academic life is fast-paced, contrary to what one might think about academia. There is always something happening, and I always look at what needs to be done or what’s next rather than what was done. Obviously I feel proud and honoured to receive this distinction, but it’s not an end. I see it as a sign that you are on a good track: keep going.
You like to say, you love to “make the images talk”, can you elaborate on this and what your goals are?
“A picture is worth a thousand words.” For the human brain that is obvious, but not for a computer. A human can instantaneously interpret what is in a scene by identifying the objects, their functionality and states, the people, their actions, emotions, and even their intentions to a certain extent. For a computer, on the other hand, an image is hundreds of thousands to millions of pixels. Each pixel is a number representing the captured intensity of luminosity, so an image is a sequence of numbers that is completely meaningless to the computer. My job as a computer vision scientist is to make these numbers meaningful to the computer by applying or creating algorithms that use them to perform some of the operations our brains do naturally, such as detection, recognition and labelling of objects and people, stereo vision, depth estimation and so on. In that sense I am making the images talk, revealing the information within, which helps me to achieve different objectives.
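To make the “sequence of numbers” idea concrete, here is a minimal illustrative sketch (not from the interview, with made-up values): a tiny greyscale image represented as a grid of intensity numbers, and one of the simplest possible “vision” operations, thresholding, to separate bright pixels from dark ones.

```python
# A tiny 4x4 greyscale "image": each pixel is just a number,
# 0 (black) up to 255 (white). The computer sees only these numbers.
image = [
    [12, 10, 200, 210],
    [15, 11, 205, 220],
    [14, 13, 198, 215],
    [10, 12, 201, 212],
]

# A minimal computer-vision operation: threshold the intensities to
# build a mask separating a bright region (1) from dark background (0).
THRESHOLD = 128
mask = [[1 if pixel > THRESHOLD else 0 for pixel in row] for row in image]

for row in mask:
    print(row)  # the bright right-hand half shows up as 1s
```

Real algorithms for detection, recognition or depth estimation are vastly more sophisticated, but they start from exactly this kind of raw numeric grid.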
You have a great interest in education of youth, could you share more information on following the “Teach Your Own” methodology?
“Teach Your Own” is the name of a book by the American educator John Holt. His main ideas are that children need to be provided with a stimulating learning environment and, most importantly, to be taught how to learn rather than what to learn. Learning driven by motivation and enjoyment is much more efficient in terms of recall. Learning is also an ongoing process that should continue into adulthood. This is even more true in our time, when skills need to be constantly updated to follow the fast pace of technological advancement. The good news is that nowadays knowledge is everywhere and has never been so accessible. This is a trend that is set to grow, offering people all around the world the chance to learn various subjects at their own pace, especially with the emergence of MOOCs (Massive Open Online Courses). If John Holt’s ideas were disruptive in the 20th century, it seems that today a lot is in place to allow them to be implemented, offering an individualised, fun and efficient learning experience.
What are you most looking forward to about leading a session at the Self Driving Track Day next week?
I am extremely happy to talk about my field and share some of my experiences, but learning is not a unilateral process; it is a multilateral one. So I am looking forward to people’s feedback, and I am keen to hear about their experiences and motivations, whatever their level of expertise. I think we are living in an exciting time where autonomous systems are really about to change society, so I can only be pleased to meet other people who share this enthusiasm.
From the driverless vehicle hype seemingly hitting a peak in 2017, this year began with the news that Waymo, a subsidiary of Alphabet (Google’s parent company) and ride-hailing giant Uber had settled their legal wrangle, with the former taking a sizable chunk of the latter.
On many fronts, this is great news – intellectual property has been successfully protected, and two of the largest investors in trying to find solutions to the autonomous vehicle problem are now aligned.
Waymo’s rapidly developing test and development vehicle fleet is spreading across the US, and despite Uber’s own bruising encounter with the regulators in the UK, the company’s R&D division ‘Uber ATG’ has continued to invest and make strides towards creating robot drivers. The much-publicised fatality involving one of Uber’s autonomous vehicles in March 2018 sent a much-needed shockwave across the industry, and will, despite the tragedy itself, undoubtedly serve to improve safety practices and virtual and closed-course testing.
At CES in January, the world’s press started to understand the scale and complexity of autonomous driving, being able to compare not just two, but more than a dozen companies demonstrating their technologies in one place – and get a taste of the differences between those that are starting on the journey, and those that are nearing the end, with viable products entering the marketplace in the not-too-distant future.
In the UK, the Government’s programme of funding groups of companies to embark upon research projects will ‘start to end’, with the last of half a dozen funding rounds opening for bids this year and dozens of previously funded projects reaching their zenith in 2018, with public trials, demonstrations and exhibits.
With more than 50 projects already match-funded by the taxpayer, and consortium members drawn from academia, industry, the public sector and innovative new organisations and startups venturing into this field, it’s reasonable to say that this joint investment by the public and private sectors has enabled the UK, if not to take the global lead, at the very least to close the gap on countries such as Germany, Japan and the US.
There are various international ‘leader boards’ which rate countries’ readiness for autonomous vehicles, all of which now agree the UK is top-10 material… imagine where we were before hundreds of millions of pounds of investment!
It’s important to note that government investment won’t ‘fall off a cliff’: there will continue to be opportunities for funding support through tax incentives for R&D (through Innovate UK), promotional support at international trade shows (through the Tradeshow Access Programme), and networking events for numerous specialisms run by the Knowledge Transfer Network and the various industry Catapults.
For consumers, however, none of that matters – other than perhaps a smattering of national pride. For you and I, as buyers of cars, what do driverless cars actually mean? Will we go out and buy one?
The story of driverless cars, first and foremost, is about safety.
Don’t get hung up on a computer taking away your pleasure in driving… it will merely replace the driving that is not enjoyable, and perhaps help you avoid accidents when driving is difficult: in bad weather, when you’re tired, near a school. No driver, anywhere, could begrudge that, or want to put others in harm’s way.
Improved road safety alone can easily repay these investments for the UK, even discarding the commercial opportunities in developing and exporting national expertise. Take the number of annual road fatalities in the UK as 1,700 people. DfT figures tell us that the cost to the taxpayer of each fatal accident is about £1.8m; multiply that up, and roughly £3bn is lost every year.
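The back-of-envelope sum above can be checked in a couple of lines (the figures are the article’s illustrative approximations, not official statistics):

```python
# Rough annual cost of UK road fatalities, using the figures quoted above.
fatalities_per_year = 1_700          # approximate annual UK road deaths
cost_per_fatal_accident = 1.8e6      # approximate DfT cost estimate, GBP

annual_cost = fatalities_per_year * cost_per_fatal_accident
print(f"~£{annual_cost / 1e9:.2f}bn lost every year")  # ~£3.06bn
```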
The real figures (both the number of fatalities and the cost to the taxpayer of each one) are higher, but even indicatively, you can see the return on investment in CAV technologies is clear. Nobody is talking about the roughly 4,000 fatalities caused by human drivers on that same day globally, or the multi-billion-pound negative impact of that.
Aside from that, the people that write, enforce and are governed by various laws covering vehicles (actually more than 100 statutes going back hundreds of years) also need to understand the new technologies, and insurance companies need to understand where liability might end up in the future. Other businesses, such as those that employ drivers on long or short distance deliveries, need to understand how their business or employment models might change, and the education system needs to adapt too. It’s those ideas which will form the foundation of a new series of web interviews we’re producing on AutoSens TV.
We have been playing around with autonomous vehicles, and helping to teach people about them, for a couple of years now, but 2018 will provide the largest platform to date to talk about autonomous vehicles to the Great British public, as the London Motor Show will have a new feature, a dedicated Autonomous Vehicle Zone.
For many of the 60,000 expected visitors, it will be their first encounter with autonomous vehicles, as they have mostly been confined to test tracks, workshops and secret test facilities.
But at a motor show at ExCeL, in London? CES is the Consumer Electronics Show (with cars), and the Detroit Motor Show now has a technology showcase… the worlds of technology and automotive have combined.
When you consider that the Goodwood Festival of Speed’s most popular stand was Tesla, the time is right for the UK’s biggest car events to embrace autonomy and ‘close the loop’, showing that the UK still has ambition to progress on the international stage.
Cars are connected and more automated than they’ve been before, and I’m excited to see how the public reacts to the next generation of technologies that will be seen on our roads.
If you can’t wait until May then there are lots of free newsletters you can sign up to, for example by Robohub, MCAV or Innovative Mobility Research – and if you really want to get your hands dirty, there’s always Formula Pi – scale model driverless cars that you can code and race your friends and even compete against other people remotely – or Self Driving Track Days, where engineers gather to share their technical insights to help accelerate the spread of knowledge to new companies across the supply chain.
In 2018, there’s really no excuse not to get out and learn about this new wave of exciting new technology, because it’s well on its way.
We gave away a small handful of prize-draw tickets to our upcoming Self Driving Track Days event, taking place on July 10th 2018 in Milton Keynes, so we wanted to find out a little more about two of our winners, Anna Relton and Dr Tim Biggs…
Tell us about yourself – where are you from and what do you do?
AR: I’m from Cambridgeshire, and I’m a student at Loughborough University. Although Loughborough Uni is sport/engineering central, I study psychology, and I’m currently on a year-long placement, working part time for Vectare, a transport consultancy, as their Marketing Strategy Manager.
Vectare provides innovative transport solutions, developing new technologies with our in-house team and using data analysis to advise consultancy clients. One speciality is coordinating home-to-school transport for independent schools: redesigning their school bus networks to maximise profit and efficiency, while also providing online booking systems, real-time vehicle tracking, and postcode search functions to help parents find their local bus stop and route. We have been able to provide similar services for municipal bus companies in major cities too.
TB: I am a physicist working for AVX Electronics Technology Ltd, which is part of the AVX group of companies. We primarily develop, design and sell automotive sensors under the AB brand, in a range of global markets to Tier 1s and OEMs.
Our product range includes position, temperature, pressure, liquid level, liquid quality, speed, electric motor encoders, pedals and ride height monitoring across a range of technologies. My role involves design and development of new sensor technologies and systems, maintaining up to date knowledge of new technologies and trends, and sharing knowledge across our group of companies.
Why do you want to learn more about the technology in this field?
AR: Driverless technology is going to revolutionise transport in general, but it will also change the game for transport companies such as ours when planning for future investments and expansion. Vectare will eventually have a wider fleet of vehicles, and some may well end up being driverless to maximise efficiency and customer experience and minimise overhead costs.
It’s always exciting to see what other companies are doing that may integrate with our work. It’s possible that we would want to invest in hybrid solutions or simple technologies that complement our existing services in the future, and so it’s events like Self Driving Track Days that allow us to scope out what solutions and projects are out there.
We are in a really fortunate position to have links with Universities such as Loughborough (renowned for engineering), Exeter and Aston, and have been able to hand-pick the best of the best for different projects and teams we have created. It’s pretty exciting to us that we could enable the next generation of engineers and computer scientists to get involved in transport as a career path and work on new technologies that will revolutionise public transport as we know it.
TB: I want to know how to leverage our current portfolio into the emerging autonomous market, what areas we should aim to be working in the future, and understand more about the current sensing problems.
What sort of companies and professionals are you hoping to meet?
AR: Really anyone we can collaborate with, either now or in the future! This field is something we are keen to investigate and experience first hand. We have various partners that will be so excited that we are looking into and experiencing driverless technology. We’re also hoping to come across companies and solutions that we haven’t heard about before, particularly new innovations, as these are the products and systems that will help us to think outside the box and push the boundaries in our own work.
TB: I am interested in meeting makers of autonomous systems/vehicles to learn about their challenges, as well as academics to learn about future developments.
Turn left and mow down a group of children, or turn right and hit a tree, killing the vehicle’s occupants.
The modern version of the trolley problem, created as a thought experiment, has often been applied to autonomous vehicles as a warning on the complexity of decisions they will supposedly have to make.
In a nutshell, this is a daft problem: a philosophical approach to an ethical dilemma that fails to take into consideration some very important and fairly simple facts.
Cars are already engineered to protect their occupants far better than they are engineered to protect pedestrians or other road users. The safest car on the road (the Volvo XC90), as measured by Euro NCAP, has a 98% safety rating for occupants and a frankly appalling ~75% rating for protecting pedestrians. In fact, not one car goes above 85% for pedestrians.
Given a complex situation, a driverless car will simply come safely to a stop. Waymo’s testers are well known for their ingenuity and creativity in trying to defeat their own cars’ ability to perceive their environments, and when faced with the ‘impossible to plan for’ (in one case, a crowd of people dressed as frogs hopping across a road), the vehicle was programmed to just brake gently until the route was clear.
Nobody will ever get into a vehicle that favours the safety of someone else over their own. Credit to writer and commentator Alex Roy for that observation, but it’s true… unless you’re getting into a vehicle with the express purpose of never getting off, like the Suicide Roller Coaster. We are simply not generous enough to buy a vehicle that is safe only for the people around it and not for us. After all, we want to survive the journey to McDonald’s.
Of course, the nature of psychologists is to disagree, and the nature of philosophers is to question the meaning of everything. Am I right, wrong, is that a question or a lump of cheese? What does it all mean?
Driverless cars will blind us all with laser beams!
Wrong, wrong and wrong! Actually, wrong by a factor of 2 million, and I’ll explain why.
For this, we can fall back on technology facts and the laws of physics, which thankfully are not subject to votes, gerrymandering, or sinister algorithmic fake-news bots controlled by scary government forces.
Any device using a laser must be assigned a Class, with 1 at the bottom and 4 at the top. The lasers you find in CD players and laser copiers… and lidar units… are all Class 1. That’s the power or frequency at which there is ‘no possibility of eye damage’, i.e. there have never been any recorded cases of damage, even under extended exposure.
But that doesn’t tell us why… or indeed what could happen if another Class of laser were used.
To be effective, a lidar unit (Light Detection and Ranging) emits a tiny amount of energy in the infrared part of the electromagnetic spectrum. This energy is focused into a tight beam (i.e. turned into a laser) and then steered around a scene by a mirror, a motor, or a tiny device that combines the two, called a MEMS. The energy, in the form of photons, travels out from the lidar, hits an object, and some of those photons bounce back. The unit (and its associated processing and software) then measures the timing and location of those bounces and, much like your eyes, builds a rough 3D picture.
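The range behind each of those bounces is simple time-of-flight arithmetic: photons travel at the speed of light, so halving the round-trip time gives the distance. A minimal sketch (the 200-nanosecond pulse timing is an illustrative value, not from any real sensor's datasheet):

```python
# Time-of-flight ranging: distance = (speed of light x round-trip time) / 2
C = 299_792_458  # speed of light, metres per second

def tof_distance_m(round_trip_s: float) -> float:
    """Convert a photon round-trip time (seconds) into range (metres)."""
    return C * round_trip_s / 2

# A return pulse arriving 200 nanoseconds after emission
# corresponds to an object roughly 30 metres away.
print(tof_distance_m(200e-9))  # ≈ 29.98
```

The factor of two is simply because the light travels out to the object and back again before it is detected.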
While lidar is a fairly mature technology, first theorised in the 1930s and used in the Moon landings in the 1960s, it’s going through a few big changes. Arguably the two most important are the move to scanning units on a single mount (most commonly spinning lidars, as seen regularly on autonomous R&D vehicles) and, happening right now, the move away from those towards ‘solid state’ lidars, where a wide field of view is covered by multiple smaller, cheaper sensors rather than a few big, complicated, expensive ones.
Either way, the maths is roughly the same, so we’ll use the example of the well-known ‘Velodyne Puck’, which is widely used and for which Velodyne has usefully published detailed specifications.
Sensors in this family have between 16 and 128 lasers, which rotate rapidly at approximately 10 hertz, or 10 full revolutions per second, creating about 30,000 measurement points per revolution.
Each laser pulses at a wavelength of 905nm (just below 1/1000th of a millimetre) with a power of 1 or 2mW. That’s roughly the power needed to slowly lift a single grain of rice.
For comparison purposes, this is about 1/5,000th or 0.02% of the power output in your standard 10-watt LED headlamp bulb on a low-beam setting.
Any single laser beam would sweep across an inadvertently glancing eye in approximately 1 millisecond. And since each individual laser is mounted at a different orientation and angle, multiple lasers cannot strike the same spot on the eye at once.
1000th of a second… and a very small amount of power.
There alone is a factor of 1 million-to-1 in favour of “It’s going to be fine”.
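That millisecond figure is easy to sanity-check. For a spinning unit, the beam’s dwell time on your pupil is roughly the fraction of the rotation circumference the pupil occupies, multiplied by the rotation period. A rough sketch under stated assumptions (a fully dilated ~7mm pupil, the 10Hz rotation rate above, and illustrative distances):

```python
import math

ROTATION_HZ = 10   # 10 full revolutions per second
PUPIL_M = 0.007    # assumed fully dilated pupil, ~7 mm

def dwell_time_s(distance_m: float) -> float:
    """Rough beam dwell time on the pupil for a spinning lidar:
    the fraction of the circumference the pupil occupies,
    times the rotation period."""
    circumference = 2 * math.pi * distance_m
    return (PUPIL_M / circumference) * (1 / ROTATION_HZ)

# Even standing half a metre from the unit, the beam crosses
# the eye in a fraction of a millisecond.
print(dwell_time_s(0.5))
```

The further away you stand, the shorter the exposure, so the 1 millisecond used in the text is if anything a generous upper bound.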
Going up a notch
Let’s say we swap all this out for a Class II laser: up to 1mW (a thousandth of a Watt), using visible light. Even then, damage would only start to occur after 1,000 seconds of continuous exposure to a single spot within the eye. That means sitting very still – still enough for a Victorian photograph – for nearly 17 minutes!
Ramp it up another notch to Class IIIa lasers, at up to 5mW of power, and finally we start to see some medical evidence of damage to eyes exposed to certain wavelengths for several continuous seconds.
Up another notch to Class IIIb and yes, damage is probably going to occur, because that covers 5mW up to 500mW (half a Watt of power), and Class IV is where we start to burn holes through things and shine beams at other planets.
The length of a wave is directly proportional to how much you want the person to leave
From your school years, you may recall that wavelength and frequency are inversely related: the higher the frequency, the shorter the wavelength, and vice versa.
What most people do not know, however, is that the energy level also changes.
That means that visible light, for example, has more energy in its photons than the longer-wavelength, lower-frequency light in the infrared band.
Let’s take the example above of the Velodyne Pucks, which operate at a wavelength of 905 nanometres, compared with visible light, which ranges from 390 to 770 nanometres (nm), or roughly one third to three quarters of one thousandth of a millimetre. That’s even smaller!
Because a photon’s energy is inversely proportional to its wavelength, we can simply divide one wavelength by the other to get the ratio by which the energy changes: halve the wavelength and each photon carries twice the energy (fewer cats, more cat-food per cat, if you like).
905 divided by the range of visible light, 390 to 770, gives us another multiplier… from roughly 2.3 down to 1.2.
That means that, on top of our time factor of 1 million, we have another factor from the wavelength difference conveying a different amount of energy. At visible wavelengths, there’s up to about twice as much energy travelling down the laser beam as is found in this infrared unit. Since you’re still persevering, let’s round it off at 2.
Finally, we can multiply our energy factor by the time factor… and we get to 2 million!
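The whole back-of-envelope argument can be reproduced in a few lines. The figures are the article’s own: the 1 millisecond beam sweep against the 1,000-second Class II exposure threshold, and the photon-energy ratio taken straight from the wavelengths via the Planck relation E = hc/λ:

```python
# Planck relation: photon energy E = h*c / wavelength, so the energy
# ratio between two beams is the inverse ratio of their wavelengths.
H = 6.626e-34    # Planck constant, joule-seconds
C = 299_792_458  # speed of light, metres per second

def photon_energy_j(wavelength_m: float) -> float:
    return H * C / wavelength_m

# 905 nm infrared lidar vs 390 nm visible (blue/violet) light
energy_factor = photon_energy_j(390e-9) / photon_energy_j(905e-9)

# Safe-exposure time (Class II, ~1000 s) vs beam sweep time (~1 ms)
time_factor = 1000 / 0.001

print(round(energy_factor, 1))             # ≈ 2.3, i.e. 905/390
print(time_factor * round(energy_factor))  # ≈ 2 million
```

Note that the Planck constant cancels out entirely: the energy ratio really is just 905/390, which is why the article can get away with dividing wavelengths.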
You’d need to be within a few metres of 2 million lidar units, and even at the rate of growth we’ve seen in the lidar market recently, that would be a challenge. And before we conclude: the eye is also self-healing, so even if this weird miracle were to occur, it would take only a few days to fully recover.
In summary, no, we’re not all about to go blind from Self Driving Cars.
Cars will be easy to hack
Certainly not helped by the apocalyptic Hollywood scenes of driverless cars being taken over to hinder our heroes’ progress, the much-publicised ‘Jeep hack’ – an attack on a Jeep Cherokee in 2015 which took control of some of its functions – actually took 6 months to plan and implement, time which included a significant amount of direct access to the vehicle in question.
As was succinctly pointed out to me by a senior engineering leader, ‘Engineers don’t make things to fail’. They are manufacturers using scientific method. If there’s evidence to say that the existing approach does not work, the approach needs to be modified.
It might be distasteful to consider the following, but I ask you to spare a moment as we venture into the dark soul of a nefarious criminal, assassin or scruffy-haired ne’er-do-well.
“Yes Mr Big, you want Bob Jones to be eliminated, I understand. Well, of all the methods I would choose, I would definitely use the most fiendishly complex and time-consuming, and definitely not use a blunt object, readily available firearm or knife, or just set a fire in their basement.”
Evidently many years watching Criminal Minds have served me well.
Anyway, I digress… the problem is that, since these are technology systems, the pay-off needs to be technologically motivated. Sure, there are weaknesses in any system designed to move data from one place to another, and the many companies making the hardware and software inside a sensor-laden vehicle are paying ever more attention to how those processes could be hijacked. But add multi-sensor validation into the mix, and the problem moves from edge case to almost impossible.
Let’s say you can fool a vehicle’s vision system into misreading a road sign (see the genuine example here of one algorithm classifying a cat as guacamole – the image has been modified specifically to defeat the computer vision system).
The same driverless system also has other data sources: lidar and radar to detect traffic density and enact safe operating distances, mapping data including junction configurations, and in the future, wireless connectivity to traffic management systems.
The future of ‘over the air’ updates, now a regular feature in new cars from an ever-increasing number of manufacturers, coupled with the ability to alter how the processors around the vehicle actually behave (so-called Field Programmable Gate Arrays, or FPGAs), means that any alterations to the system can be fixed easily, any software amendments authorised against a community of other vehicles, and any unauthorised attack detected.
That means that, much like your smartphone or friendly neighbourhood laptop or tablet, security updates will be downloaded without your knowledge with validations carried out against remote encrypted checking files.
It’s not a great stretch of the imagination to consider that blockchain-like technology could be used to validate updates against the server at your local dealer or even the other cars in your street.
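The validation step described above is, at heart, ordinary cryptographic authentication. A minimal sketch of the idea using Python’s standard library – the key and firmware payload are invented for illustration, and a real vehicle would use asymmetric signatures with hardware key storage rather than a shared secret:

```python
import hashlib
import hmac

# Hypothetical shared secret provisioned at the factory (illustrative only).
VEHICLE_KEY = b"example-key-not-a-real-one"

def sign_update(payload: bytes) -> str:
    """Manufacturer side: tag the update with an HMAC over its contents."""
    return hmac.new(VEHICLE_KEY, payload, hashlib.sha256).hexdigest()

def verify_update(payload: bytes, tag: str) -> bool:
    """Vehicle side: refuse any update whose tag doesn't match."""
    expected = sign_update(payload)
    return hmac.compare_digest(expected, tag)

update = b"new lane-keeping firmware v2.1"
tag = sign_update(update)
print(verify_update(update, tag))                # True  - genuine update
print(verify_update(b"tampered firmware", tag))  # False - rejected
```

Any tampering with the payload changes the tag, so an attacker without the key cannot produce an update the vehicle will accept.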
Hacking cars is just so fantastically complex, so utterly pointless in comparison to simpler means of control or attack, and so easily defeated, that the realms of practicality simply don’t extend to making it a worthwhile exercise for your unfriendly neighbourhood gangster or international megalomaniac.
We’ll all lose our jobs…
Do you work as a driver?
Autonomous cars will arrive with a whimper. As I have written before, physically manufacturing and shipping a vehicle to a buyer is a complex task; multiply that task across the population of the world and you have a revolution that will take 20 years to get halfway and perhaps 30-40 years to conclude.
In the context of technological progress, that’s not particularly fast, which thankfully increases the visibility of this issue across the skills supply chain (i.e. schools and education), as well as giving those professionally employed as vehicle operators plenty of warning of the coming shift to ‘going driverless’.
Few of the jobs I have enjoyed in the past 15 years existed when I was at school, largely because they have revolved around technologies invented after I left. Does that mean I won’t be able to survive the technologies that have not yet been invented? No, of course not; it just means I shall adapt.
Despite some indications to the contrary, notably in politics, humans are actually quite good at learning and adapting to their environment, i.e. not just surviving, but thriving – despite challenges.
Higher-skilled drivers, and those who offer additional services, will keep their place for some years even after driverless vehicles arrive, as people with luggage or special access requirements may still need help getting in and out of vehicles – at least until technology can provide a better service at lower cost.
That, you see, becomes the pivot point: not just the direct cost of technology development, but the indirect cost of providing a robust interface to the real world for users of the system – a driverless Uber that can automatically deploy a ramp or help me stand up, leaving my luggage at the entrance to the hotel lobby.
How many grocery delivery drivers will lose their jobs if the current convenience of ‘mouse click to kitchen delivery’ turns into ‘mouse click to tedious slog from the roadside driverless van’?
Consumers would be worse off, so drivers are safe – for now.
The technologies which enable autonomy are all advancing apace, between shrinking processors, sharper sensors and smarter software, but the shape and nature of economics will not change.
Customer service and convenience will dominate the evolution, reducing cost and increasing business profitability will pay for it, and neither can be compromised for the sake of replacing humans with robots.
To that end, remember that technology does change, but human nature does not.
It is in our habit to live unwanted change through the five stages of grief – denial, anger, bargaining, depression and acceptance – when it’s easier just to look at the cold hard facts.
Yes, things will change, yes, life will be different, yes, jobs will be lost… but actually, things won’t change all that suddenly, jobs will actually change and society will adapt and overall, life will go on just as it always has, surviving and thriving despite the seismic sociological shifts that new technologies trigger.
Please consume responsibly! This article is meant as a call to arms and lightweight introduction to lots of different ideas. If you want to learn more, come to an event and talk to the real experts – but all the same, we hope you enjoyed it and remember kids, check your facts! – Alex Lawrence-Berkeley
We’ve been to a few events recently, including two that particularly stood out, organised by the brilliant team at a small and occasionally overlooked organisation called the Knowledge Transfer Network: a unified body (formed from a previously multi-disciplined 15 organisations) tasked with bringing businesses together at events to share information, learn about new technologies, network, and find out about Innovate UK funding calls.
These two events, Quantum Technology in Transport and Embedded AI, showed not just the admirable variety of technology developers in the UK, from small-scale to multinational, but also highlighted the pivotal role of funding organisations, notably Innovate UK, in acting as a central facilitator and aggregator of investment into the development of new technologies over the short, medium and long term.
But how do you define where technology can be used, or trickier still, when an immature technology might make an impact?
This is where a tried and tested measure called the ‘Technology Readiness Level’ (TRL) can be applied. This multi-step scale, which has slight variants across the space, energy, defence and other sectors in the UK, EU and US, defines how mature a technology is, and thus whether it would benefit from further investment to help it mature into a commercially successful product and eventually return that investment to the investor.
In the case of Innovate UK, the investor is also the taxpayer (that’s you and me), so the commercial success of a product is both an improved tax return to the government within the country, and the potential for an improved export volume from the country, improving the balance of economic advantage.
Technology Readiness Levels
1. basic principles observed
2. technology concept formulated
3. experimental proof of concept
4. technology validated in lab
5. technology validated in relevant environment (industrially relevant environment in the case of key enabling technologies)
6. technology demonstrated in relevant environment (industrially relevant environment in the case of key enabling technologies)
7. system prototype demonstration in operational environment
8. system complete and qualified
9. actual system proven in operational environment (competitive manufacturing in the case of key enabling technologies; or in space)
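The nine levels map naturally onto a simple lookup table. A toy sketch of how a funding body might classify projects – the `innovate_uk_sweet_spot` helper is a hypothetical name, reflecting the TRL 3-7 banding described in this article:

```python
# The nine EU-definition Technology Readiness Levels (short forms).
TRL = {
    1: "basic principles observed",
    2: "technology concept formulated",
    3: "experimental proof of concept",
    4: "technology validated in lab",
    5: "technology validated in relevant environment",
    6: "technology demonstrated in relevant environment",
    7: "system prototype demonstration in operational environment",
    8: "system complete and qualified",
    9: "actual system proven in operational environment",
}

def innovate_uk_sweet_spot(level: int) -> bool:
    """Innovate UK operates mainly between TRL 3 and 7."""
    return 3 <= level <= 7

print(TRL[4], "-", innovate_uk_sweet_spot(4))
```

A project at TRL 4 (validated in the lab) falls squarely in that band; a TRL 9 product, already proven operationally, is left to the commercial sector.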
Innovate UK operates mainly between TRL 3 and 7, where the difficulty is greatest: taking on the mantle from the national research councils, which fund the lower levels of research, while the commercial sector (also known as the glory seekers!) takes on the significantly reduced risk of a close-to-perfected product.
One step further back from Innovate UK lies the strategy it is required to fulfil, often set a few years beforehand. So the identification of CAVs (Connected and Autonomous Vehicles) as being of real importance in the early 2010s helped to focus attention on the many different areas that needed to be looked at: not just legislative (road law in the UK is very complex) but also commercial, educational, industrial and societal.
Two of the larger government departments, BEIS and DfT, had a ‘special cuddle’ and out shot CCAV (the Centre for Connected and Autonomous Vehicles), sibling to the Office for Low Emission Vehicles, to define a more detailed strategy and make some intelligent investments to start resolving these problem areas.
The large research projects, of which there have been more than 50, have investigated many of these areas. Since 2014, there have been three CAV funding rounds and, so far, one separate CAV testbed round. Alongside those, there have been funding calls for Emerging and Enabling Technologies and AR/VR, plus the open innovation calls. In short, a lot of money has been spent to stimulate further private investment (almost all of it match-funded 50/50 by industry).
There are three more relevant funding rounds this year, including another CAV testbed round and a fourth (and probably final) CAV research round – so there’s still plenty of room for more innovation and research for bold new ideas if you want to find out more and start working in the sector.
We have lots of articles, videos and will always try to point you in the right direction, but there’s little better than getting yourself to a real event where engineers are on-hand to give you genuinely exciting and hands-on experience.
From November to February, we have invited attendees to enter a prize draw at events that we’ve attended.
And we have now drawn some lucky winners, all of whom get free entry to the event – a full day of workshops, driverless technology demonstrations and, of course, refreshments throughout the day – taking place at Daytona Milton Keynes on July 10th 2018.
At four events in the past four months, the winners are: Guilhermina Torrao, a PhD researcher at the University of Bradford; Tim Biggs, Sensor Technologist at AVX Electronics; Pedro Machado, a Design Engineer at Sundance Multiprocessor; and Anna Relton, who found us at the Transport for London Museum’s event, “Transport and the Future”.
We’ll be talking to all four lucky winners in the next few weeks to find out more about their interests and experience, and to ask why they are drawn to autonomous vehicle technologies.
Happy New Year! After a very exciting 2017, we’re back with our bigger event format for 2018. With a change of venue to the focal point of the UK’s autonomous industry, Milton Keynes (just around the corner from the fantastic Transport Systems Catapult), we’ll be opening the doors to around 150 people and a wide variety of technology companies, demonstration vehicles and guest speakers – all gathering to support your growth and development in this exciting field.
Tickets explained… The standard ticket includes your choice of workshops, autonomous vehicle demonstrations and access to the exhibition, along with refreshments all day and lunch. The ‘networking’ ticket includes access to the exclusive evening session, where you can relax, continue conversations, and enjoy complimentary go-karting and BBQ dinner.
YoE is a government backed programme of events spanning across the UK, encouraging people of all ages (but especially younger people) to engage with science, technology, engineering and maths – and consider working in fields using these topics.
As an event which celebrates and connects engineers, we are proud to support this initiative and are pleased to encourage people from all backgrounds to consider engineering, in all forms, as a potential career.
There is no single industry which does not need engineers, and the ecosystem of technologies used in autonomous vehicles is as diverse as it is exciting, with ideas and applications of STEM skills included in mechanical, electrical, digital and even civil engineering. Outside our regular topic of autonomous road vehicles, these technologies are used in the air, under water, even in space – as well as in logistics, industrial manufacturing processes in factories, and agriculture.
The engineers we see at our events come from and go towards many different industries and we are keen to fly the flag on their behalf and tell everyone that being an engineer can take you anywhere.