Lively discussion and in-depth learning in an informal setting

Sense Media took over yet another unique venue this week with their Self Driving Track Days event. The event combined a programme of introductory and intermediate workshops with a technology exhibition and a self-driving car demonstration, and attendees enjoyed some lively discussion and in-depth learning in an informal setting.

Sponsors Dataspeed, Hexagon Positioning Intelligence and Intrepid kicked off the introductory workshop session with presentations on how to build a self-driving car, GNSS, navigation and vehicle networking technology. This gave attendees the background to the technology they would see in action later in the day, when they took in some laps of the karting track in Dataspeed’s demo vehicle.

“The technical expertise was great, and riding in a driverless car first hand really made the event special.” – James Underwood, SFA

Alongside this introductory session, attendees were getting to grips with functional safety for autonomous vehicles with Intel, and with robotics and localisation technologies with Lounis Chermak from Cranfield University.

Attendees also took plenty of time out from the workshops for networking, and Event Director Hayley Marsden noted that “there was already a great buzz about the event by the first networking break; the workshop leaders were providing a lot of food for thought for attendees, and these discussions continued outside of the seminar rooms into the networking area.”

Following on from a lunchtime discussion session on driver monitoring with Loughborough University, interactivity was a strong focus of the afternoon sessions. Valentina Donzella from the Intelligent Vehicles Group at WMG had attendees standing up in small working groups, and there was a lot of energy amongst these participants.

“Attending as a Mechanical Design Engineer, I wasn’t too sure how much I’d get out of the day; however, the workshops and team activities got me thinking and learning about subjects I’d not previously experienced, such as sensors and GNSS.” – Tom Clark, The Seat Design Company

In the adjacent room, attendees were getting hands on with Deep Neural Networks in a web-based session led by Sander van Dijk of Parkopedia.

The third workshop leader of the afternoon, Madelina Cheah of Horiba Mira, joined the morning sessions as a participant before leading a session on Automotive Cybersecurity, and commented that “it was definitely a really informative and interesting day”.

To round off a day of learning about how to let cars do all the driving, attendees were able to put their own driving skills to the test in a go-karting competition. With 16 karts on the track and a strong competitive dynamic forming amongst participants, the two-legged competition had a clear winner, and our congratulations go out to Arnout Koelewijn of Xsens.

Watch a video from the Autonomous Vehicle Demonstrations taken by Daytona Karting

If we cannot find the solution, we will create it

After Self Driving Track Days took place, we took the opportunity to catch up with Dataspeed, providers of the autonomous vehicle demonstrations at the event. Dataspeed told us all about their four-year growth journey, their collaboration with VSI Labs and what people can learn from experiencing a demo in one of their vehicles.

Dataspeed has grown significantly over the last 3–4 years. Tell us a little about that journey and the drivers behind it.

Dataspeed provides its customers with safe, reliable, and cost-effective test platforms. Initially, Dataspeed started out as a mobile robotics firm and has since used that as a “stepping stone” to combine robotics with mobility. Dataspeed sees an Autonomous Vehicle (AV) as a robot, specifically a mobile robot, with tasks it must perform autonomously, foremost, to transport passengers safely.

As the automotive market and people started to quickly adopt autonomy, companies had a need to control the entire vehicle. For Dataspeed, it was a matter of combining a lot of technology from previous programs, which has evolved into a great product – the Dataspeed ADAS (Advanced Driver Assistance Systems) Kit.

The ADAS Kit allows computer control of the throttle, brake, steering, and shifting to assist companies in the testing of sensors, software and other elements, enabling AV applications. Furthermore, to produce this safe and cost-effective solution, Dataspeed doesn’t employ additional mechanical actuators, but instead leverages the electromechanical capability of the vehicle. The Kit allows an engineer to assert control over the components, without worrying about additional steps, with a time-tested system in place. This allows clients to focus on testing different algorithms, from emergency braking to throttle and steering.

Overall, the driver behind this significant growth is simple – focusing on how to provide researchers and developers with the most efficient, beneficial, and technically supportive hardware and software tools they need to successfully execute their task or project.

What are the key elements of a self-driving vehicle?

The key elements of a self-driving vehicle are:

  • Detect / Recognize (hardware – sensors)
  • Analyze Data (software – algorithms)
  • Control (hardware – actuators)
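These three elements form the classic sense–analyse–act loop. A minimal sketch in Python illustrates how they connect (all function names and the obstacle scenario are illustrative, not part of any Dataspeed product):

```python
# Minimal sense -> analyse -> act loop, mirroring the three key elements above.
# A real AV stack runs these stages concurrently at fixed rates with
# hard real-time and safety guarantees; this is only a conceptual sketch.

def sense(world):
    """Detect/Recognize: read raw sensor values (here, distance to an obstacle)."""
    return {"obstacle_distance_m": world["obstacle_distance_m"]}

def analyse(measurements, safe_gap_m=10.0):
    """Analyze Data: decide a target action from the measurements."""
    if measurements["obstacle_distance_m"] < safe_gap_m:
        return {"brake": 1.0, "throttle": 0.0}
    return {"brake": 0.0, "throttle": 0.3}

def act(command):
    """Control: forward the command to the actuators (stubbed out here)."""
    return command

def step(world):
    """One pass through the loop."""
    return act(analyse(sense(world)))
```

Each stage maps directly to one bullet above: sensors feed detection, algorithms analyse, and actuators carry out the resulting command.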

Who are your customers, and what do you provide for them?

Our customers include automakers, OEMs, technology providers, service providers such as ride-sharing firms, academic institutions and funded start-ups.

At Dataspeed, we invest both time and effort in determining solutions for AV research. Dataspeed is proud to offer highly engineered hardware and software tools to researchers and developers working on mobility and robotics solutions. Our ADAS Kit provides a unique and compatible research and development platform. Through the implementation of the Dataspeed ADAS Kit, our customers are able to save time and conduct much more efficient testing of what they are developing.

What’s the best approach when deciding which technology stack to build into a vehicle?

Our technology gives developers full control over the throttle, brake, steering, and shift-by-wire controller modules. It is always up to the engineer to decide what their specific application is, and from there, we accelerate that testing for them.

What would you tell a prospective employee about your company?

At Dataspeed, we believe that if we cannot find a solution, we will certainly create it.

Tell us about your collaboration with VSI Labs.

Our partnership with VSI Labs is related to the testing and development of automated vehicle technologies. Through this partnership, VSI will be building out a full stack of hardware and software components on a Dataspeed enabled vehicle. The vehicle itself becomes a showcase for component makers wanting to see their componentry within the context of an automated vehicle.

How important is the UK for you, and do you have plans to grow in this region?

As of today, Dataspeed is supporting transportation as a service and looks to expand further into the fast-moving European market. We know how strong the interest in adaptive safety features and similar automation is in the UK, and that is a region we definitely plan to develop a stronger presence in.

What can people learn from experiencing a demo in one of your vehicles?

When someone experiences a demonstration in one of our autonomous test vehicles, we want them to learn how the technology is being properly used. The purpose of the demonstration is to display the functionality of full by-wire control of the vehicle, without adding any actuators and with very little modification to the vehicle. To demonstrate these capabilities, the by-wire controlled vehicle follows precise trajectory paths using accurate, high data-rate output. The algorithm behind the path-following system uses adaptive control concepts to regulate the throttle and brake inputs in reaction to hills, air resistance and other disturbances, and nonlinear state feedback control to generate steering inputs that minimize the lateral error from the programmed route.
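To give a flavour of this kind of path-following control, here is a generic, textbook-style sketch (not Dataspeed’s proprietary algorithm; all gains and names are illustrative): steering is generated from nonlinear feedback on the lateral and heading errors, while a PI speed loop adapts to sustained disturbances such as hills.

```python
import math

def steering_command(lateral_error_m, heading_error_rad, speed_mps,
                     k_lat=0.8, max_steer_rad=0.5):
    """Stanley-style nonlinear state feedback: combine the heading error with
    a term that drives the lateral offset from the programmed route to zero."""
    steer = heading_error_rad + math.atan2(k_lat * lateral_error_m,
                                           max(speed_mps, 0.1))
    return max(-max_steer_rad, min(max_steer_rad, steer))  # actuator limit

def throttle_brake_command(target_speed_mps, speed_mps, integral, dt=0.02,
                           kp=0.4, ki=0.05):
    """PI speed regulation: the integral term adapts to sustained disturbances
    such as hills and air resistance. Returns (throttle, brake, integral)."""
    error = target_speed_mps - speed_mps
    integral += error * dt
    u = kp * error + ki * integral
    throttle = max(0.0, min(1.0, u))   # positive effort -> throttle
    brake = max(0.0, min(1.0, -u))     # negative effort -> brake
    return throttle, brake, integral
```

Run at a fixed rate, the two controllers together keep the vehicle on the programmed route at the programmed speed; the clamping models actuator saturation.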

Additionally, when demonstrating our autonomous vehicle, we want to highlight that the vehicle follows a programmed route using GPS, together with integration and data fusion of multiple LiDAR and radar sensors.

Is there anything you’d like to achieve this year?

Everything we are working on or have provided is a test-vehicle, but we aim to evolve as the fleet and passenger-based companies define the next phase of the industry. With over 350 Dataspeed equipped autonomous test vehicles on the road today, we do not plan on stopping any time soon. We are working on expanding our safe, reliable and cost-effective test platforms, in order to meet the needs of our growing customer base. All in all, we are ramping up the Dataspeed team and our capabilities to support these growing markets.

Dataspeed provided the autonomous vehicles at Self Driving Track Days to run demonstrations around the venue track for attendees. Find out more here >>

There is no shortage of interest in the industry but a shortage of qualified candidates

We caught up with Intrepid ahead of the Self Driving Track Day series on 10 July. Intrepid is sponsoring the event, and they told us all about why ADAS and autonomous vehicle development represents such a good opportunity, and what we can expect to see in the automotive Ethernet space in the next five years.

How do you describe your company to a new connection?

Intrepid’s main role is providing professional development tools for OEMs and suppliers in automotive electronics development. These tools make engineers and technicians more productive, meaning that they do their jobs faster: they find and solve problems faster, verify that everything works faster, and so on. Intrepid has always been able to come up with great software, hardware and direct support that has made a big difference for our customers.

Why does ADAS and autonomous vehicle development represent such a good opportunity?

ADAS/autonomy is as big an opportunity as the creation of the Internet or the mobile smartphone. It will change life as we know it. This means there will be a huge amount of investment in the industry, which presents a great opportunity for companies to grow and for people to learn and do amazing things – change the world. Level 5 autonomy is a great challenge, but the good news is that even if Level 5 autonomy never happens, or not for a long time, cars will get a lot safer and more convenient.

What challenges need to be overcome in handling data for the cars of tomorrow, and how can Intrepid help?

With our data loggers we make collecting data almost too easy. The amount of data is tremendous. One problem we have already solved is using the cloud to store data, together with a web app called DataSpy to search, analyse, and view it. This avoids the need to move around big data sets (“Floppy Net”). Another challenge will be deciding what data to keep and what to throw away. Finally, I think the problem of AI training data that requires the cameras to be in the same spot (calibrated) has to be solved to make reuse of collected data possible.

What can we expect to see in the automotive Ethernet space in the next 5 years?

The exciting future is an “IP everywhere” strategy. This means replacing lower-speed CAN links with bus-topology Ethernet (10BASE-T1S). If they can do this, everything on the vehicle can run IP stacks, allowing simple network architectures and reuse of internet protocols, including things like security.

Multi-gigabit links are coming to absorb a lot of the bandwidth needed by high-resolution sensors like LiDAR and cameras. These will compete with SerDes technologies like GMSL and FPD-Link III because of their relative simplicity. A lot of dedicated hardware will be needed to handle multi-gigabit traffic, because a CPU is not useful at these speeds.

How do you feel the industry can help resolve the shortage in training and recruitment?

There is no shortage of interest in this field, but there is a shortage of qualified candidates. A great way to increase qualified candidates is accessibility to the technology. Almost no company is talking about what they are doing, because it’s very competitive right now. Udacity is doing some great things. Dave Robins, Intrepid’s CEO, is running a meetup in Detroit. Student competitions are great as well.’s open source approach to self driving cars is really interesting.

How does your organisation fit into the testing and validation ecosystem, and how do you stand out?

Intrepid’s role is to build professional productivity tools. Most teams working on autonomous programs are using a lot of off-the-shelf consumer PCs in the back seat or trunk. A lot of this equipment needs a lot of power, doesn’t handle temperature well, doesn’t power down automatically or quickly, or is just plain too big or expensive for a fleet of hundreds of cars. Intrepid is solving all these issues with our RADGigalog.

What will attendees at Self Driving Track Days learn about when they meet you?

Intrepid is already a big supplier in the UK; attendees will learn that Intrepid also has big plans for supporting autonomous vehicle development.

What are you hoping to achieve as a business this year?

We are working closely with a number of autonomous groups, and we hope to be an indispensable part of their teams. This means that Intrepid makes things go faster for them, and it’s great value to work with us. We are also developing some awesome other products for autonomy that we will be announcing at CES in January 2019.

Is there anything you’d like to achieve personally this year?

My daughter and I are learning to play the ukulele together. It’s really fun. Technically, I would like to learn more about neural networks.

Come and meet Intrepid in Milton Keynes, UK on 10 July at Self Driving Track Days. Book your tickets here >>

We are not afraid to try new things, take calculated risks and find new opportunities

We caught up with the Hexagon team ahead of the Self Driving Track Day series on 10 July. Hexagon is sponsoring the event, and they told us all about how the industry can help resolve the shortage in training and recruitment, how they are staying ahead of the competition, and the limits to GPS/GNSS performance in today’s most advanced technologies.

Tell us about NovAtel’s relationship with Hexagon and what advantages that has brought to the business.

NovAtel is a part of Hexagon’s Positioning Intelligence division. Hexagon Positioning Intelligence leverages technology and products from its brands NovAtel and Veripos to deliver end-to-end assured positioning solutions. Being a part of the Hexagon family has allowed us to work closer with other divisions within Hexagon to pioneer solutions for emerging markets, specifically autonomous automotive applications.

What advances have taken place in positioning technology in the last 5 years?

Precise point positioning (PPP) is a positioning technique that uses GNSS satellite clock and orbit corrections to model and remove GNSS errors, resulting in a decimetre-level or better position solution. This technique is attractive in many markets because it is globally available and does not require additional hardware or infrastructure for the user.  Although PPP has many advantages, the major technical challenge is the time it takes for the solution accuracy to converge. There has been significant investment in both industry and academia to resolve the technical limitations of convergence time with PPP solutions to the point where highly accurate, instantaneous PPP positioning is a reality today. Hexagon PI has launched TerraStar X technology to address the need for instantaneous convergence in high accuracy PPP solutions. When combined with automotive-grade GNSS receivers available through Hexagon Positioning Intelligence, this technology allows automotive customers to evaluate positioning performance in real-time using data delivered over the cellular network or the L-band frequency.
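Conceptually, PPP works by subtracting modelled error terms from each raw pseudorange before the position is solved. A highly simplified sketch of that correction step follows (real PPP also models tropospheric and ionospheric delays, carrier-phase ambiguities and more, and sign conventions vary by correction format; all numbers and names here are illustrative):

```python
C = 299_792_458.0  # speed of light, m/s

def corrected_pseudorange(raw_pseudorange_m, sat_clock_err_s,
                          orbit_err_along_los_m):
    """Apply PPP-style satellite clock and orbit corrections to one measurement.

    raw_pseudorange_m:      measured range, contaminated by satellite errors
    sat_clock_err_s:        satellite clock error from the correction stream
    orbit_err_along_los_m:  broadcast-orbit error projected on the line of sight
    """
    # A satellite clock running fast shortens the apparent travel time, so the
    # clock error (in seconds) is scaled by c and added back; the orbit error
    # along the line of sight is subtracted directly.
    return raw_pseudorange_m + C * sat_clock_err_s - orbit_err_along_los_m
```

With such corrections applied to every satellite in view, the remaining errors are small enough for a decimetre-level (or better) position solution to converge.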

Sensor fusion is another topic that is becoming vital to the advancement of positioning technologies. There are many cases where GNSS alone cannot provide an accurate positioning solution, so other sensors (LiDAR, cameras, inertial measurement units, etc.) are being used to overcome the limitations of GNSS. Sensor fusion is the concept of these sensors working together on a platform to contribute to the overall solution. Hexagon PI is already integrating IMUs into our positioning products to deliver GNSS+INS solutions through our SPAN product line. Advancements in the sensors themselves have made them more practical to integrate, which has led to significant improvements in the availability of a GNSS solution.

GNSS chipset manufacturers are producing automotive-grade multi-frequency GNSS chipsets in anticipation of autonomous driving applications. Dual-frequency GNSS with safety and integrity is a critical component in achieving the accuracy and reliability required for autonomy. Hexagon PI is accelerating development in this area with our positioning solutions: using our positioning engine and combining the GNSS measurements from these chipsets with Inertial Measurement Unit (IMU) data and PPP correction services delivers centimetre-level PPP positioning solutions in real time.

The availability of GNSS satellites has drastically increased over the last five years to become a truly global system. In addition to GPS, GLONASS, and BeiDou, Galileo became fully operational adding another 18 satellites for navigation. When a receiver utilizes signals from a variety of constellations, redundancy is built into the solution. More satellites mean more signal availability, which allows for reduced signal acquisition time and can reduce the impact of obstructions on the position solution.

How do you feel the industry can help resolve the shortage in training and recruitment?

Hexagon PI supports many university programs aimed at encouraging youth in their pursuit of GNSS knowledge. We also have a very successful intern program with an 85% post-graduation rehire rate. Outside of these official programs, we also connect with young people on a regular basis via school visits and social media to promote knowledge of the industry.

Regarding intermediate to senior-level responsibilities, we encourage knowledge sharing within our company through a robust set of programs that encourage creativity and innovation specific to geomatics, such as an internal mentoring program, lunch-and-learn opportunities and our annual Innovation Week, where everyone in the company can take time away from their regular responsibilities to work on innovative ideas of their own.

As Hexagon PI is a global organization with offices all over the world, recruiting is focused on both local and international candidates. Flexible and remote arrangements are common and encouraged to maintain our international presence. Building on flexible immigration programs within Canada, we have been able to attract senior level experienced people who can continue to build our internal knowledge share and elevate the skills of our existing employees.

Does positioning technology augment the vision system in an autonomous vehicle, or the other way around?

The simple answer is both. From our perspective, as a GNSS company, we rely primarily on positioning technology and use external sensors to augment the position solution. Vision systems and GNSS based position solutions are complementary technologies in that when vision systems fail (i.e. weather, poor visibility) the GNSS solution can provide an absolute position, which is crucial to maintaining lane-level accuracy of the autonomous vehicle.

What are the limits to GPS/GNSS performance in today’s most advanced technologies?

One of the major limitations of GNSS is that it does not work everywhere. GNSS positioning relies on the availability of satellite signals and performance is highly dependent on environmental factors. Buildings and trees obstruct GNSS signals, while tunnels and overpasses can completely block the signal. Most GNSS receivers can manage challenging environments through a combination of multipath mitigation algorithms, dead reckoning, and using data from inertial measurement units.
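When signals are blocked, for example in a tunnel, dead reckoning propagates the last known position from speed and heading until GNSS returns. A minimal 2-D sketch of the idea (a real GNSS+INS filter fuses accelerometer and gyro data with full error models; this toy version only illustrates the propagation step):

```python
import math

def dead_reckon(x_m, y_m, heading_rad, speed_mps, dt_s):
    """Propagate a 2-D position one time step from speed and heading."""
    return (x_m + speed_mps * math.cos(heading_rad) * dt_s,
            y_m + speed_mps * math.sin(heading_rad) * dt_s)

# Bridge a 5-second GNSS outage at 20 m/s, heading due east (heading 0):
x, y = 0.0, 0.0
for _ in range(50):            # 50 steps of 0.1 s
    x, y = dead_reckon(x, y, 0.0, 20.0, 0.1)
# x is now ~100 m east of the last fix. Sensor errors make the drift grow
# with outage length, which is why dead reckoning only bridges short gaps.
```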

How is your company staying ahead of your competition?

Our range of products and knowledge in GNSS, specifically in safety critical applications, is what differentiates us from the competition. Hexagon PI has products that are uniquely positioned to deliver high precision GNSS solutions through every stage of development in autonomy. Our SPAN products, industry leading GNSS+INS solutions used as truth systems to enable automotive fleet integration, are the foundation for our sensor fusion algorithms.  We recently announced TerraStar X, our technology platform to eliminate the convergence time of high accuracy PPP solutions. We have also worked with STMicroelectronics integrating our positioning engine and correction services on the world’s first automotive-grade multi-frequency GNSS chipsets. Hexagon PI continues to innovate to provide assured positioning anywhere.

Where do ADAS and autonomous vehicles sit in terms of importance among the wide range of applications for your technology?

ADAS and the progression toward autonomous vehicles sit very high in terms of importance for GNSS applications. An absolute position solution that is safe and reliable will be an integral part of an autonomous automotive application. There are still many challenges with ADAS that an accurate position solution can help to solve, especially as we transition to full autonomy.

What are you hoping to achieve as a business this year?

We will continue supporting autonomous driving programs worldwide with our industry-leading SPAN GNSS+INS positioning systems, and will make steady progress towards delivering a functionally safe, ISO 26262 ASIL-B qualified mass-production automotive positioning solution based on our software positioning engine and TerraStar-X PPP services in the years ahead.

What would you tell a prospective employee about your company?

At Hexagon PI, we know that the success of our business is a direct result of our highly motivated and collaborative staff. We value our people as much as we value our business. We pride ourselves on providing a stimulating work experience and cultivating teams that encourage learning, so that you can hone your expertise and grow in your career.

We are not afraid to try new things, take calculated risks and find new opportunities. We value performance over procedure, setting measurable goals and working collaboratively to achieve the results we seek.

Some of the perks of working with Hexagon PI include expert teams, a strong customer focus, flexible work hours, casual dress any day of the week, state-of-the-art workstations and employee-led social and environmental committees – not to mention comprehensive benefits packages, company-paid professional development and market-competitive salaries.

Come and meet Hexagon in Milton Keynes, UK on 10 July at Self Driving Track Days. Book your tickets here >>

Autonomous systems are about to change society

We caught up with Dr Lounis Chermak, Lecturer in Computer Vision and Autonomous Systems at Cranfield University. Dr Chermak is leading a workshop at Self Driving Track Days on 10 July on “From Robotics to Computer Vision: Self-Localisation in Autonomous Vehicles.” In this one-to-one interview we uncovered his involvement in designing a smart autonomous visual based navigation sensor, the most important technology he believes automakers need to get right and how he aims to “make the images talk.”

Which project has been the most interesting and rewarding one that you have worked on in relation to autonomy of systems for space exploration, autonomous platforms and human machine interaction?

Over the past few years, I have had the opportunity to work on or supervise several interesting projects related to the autonomy of systems. That being said, the one I am most attached to is my PhD project, where I had to design a smart autonomous vision-based navigation sensor that adapts to different space robotic platforms while providing trajectory generation without pre-knowledge of the environment. This was challenging in many aspects, since you need to take into account plenty of environmental and technical factors specific to the space context, such as limited hardware and extreme illumination and temperature. However, this was an exciting three-year journey where I designed and developed everything, from the algorithms and software to selecting the sensors and implementing the hardware, through to trials and validation. It was an intense process but a really enriching and rewarding experience.

On the road to autonomy, what is the most important technology that automakers need to get right?

It is difficult to point to a specific technology, since autonomous systems are multidisciplinary, bringing together computer vision, robotics, control and guidance, sensor fusion, navigation and, more recently, machine/deep learning. The latter has been quite disruptive, leveraging the former disciplines and enabling them to reach a level of performance that had not been achieved previously. This could not have happened without the emergence of affordable and efficient parallel computing hardware. That being said, even if we are advancing faster than ever, AI-based solutions still need to reach a higher level of reliability. Most importantly, the main subject that needs attention right now is being able to sense and model any environment, or at least its critical elements, with maximum reliability, and this requires innovations in all the disciplines cited above.

You won the Selwyn Award from the Royal Photographic Society in 2017. Could you tell us more about the work that won you this award?

The Royal Photographic Society is one of the world’s oldest photographic societies. Since 1994, the Selwyn Award has been given annually to someone aged 35 or under who has conducted science-based research connected with imaging. Rather than recognising a specific piece of work, the distinction relates to the impact my work has had on the field of imaging early in my career.

I believe that when you get such an award, it is quite surprising, and the first thing that comes into your mind is: me? Really? Actually, that was the case. Then you start looking back at all your achievements and try to find the rationale behind it. I realised that I had achieved quite a lot over a short period of time, whether on the academic side or in delivering on industrial contracts. In fact, my academic life is fast-paced, contrary to what one might think about academia. There is always something happening, and I always look at what needs to be done or what’s next rather than what was done. Obviously I feel proud and honoured to receive this distinction, but it’s not an end in itself. I see it as a sign that you are on a good track – keep going.

You like to say, you love to “make the images talk”, can you elaborate on this and what your goals are?

“A picture is worth a thousand words.” For the human brain that is obvious, but not for a computer. A human can instantaneously interpret what is in a scene by identifying the objects, their functionality and states, the people, their actions, their emotions, and even their intentions to a certain extent. For the computer, on the other hand, an image is hundreds of thousands to millions of pixels, each pixel a number representing the captured intensity of luminosity. Hence, an image is a sequence of numbers that is completely meaningless to the computer. My job as a computer vision scientist is to make these numbers meaningful to the computer by applying or creating algorithms that use them to perform some of the operations our brains do naturally, such as detection, recognition and labelling of objects and people, stereo vision, depth estimation, etc. So in that sense, I am making the images talk in order to reveal the information within, which helps me achieve different objectives.
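A toy example of “making the numbers talk”: a tiny grayscale image is just a grid of intensity values, and even a simple gradient operator turns those raw numbers into something meaningful, here the location of a vertical edge (pure Python, no vision library; the 4×4 image is made up for illustration):

```python
# A 4x4 "image": each number is a pixel's luminosity (0 = black, 255 = white).
# The left half is dark, the right half bright: a vertical edge.
image = [
    [0, 0, 255, 255],
    [0, 0, 255, 255],
    [0, 0, 255, 255],
    [0, 0, 255, 255],
]

def horizontal_gradient(img):
    """Difference between neighbouring pixels: large values mark vertical edges."""
    return [[abs(row[c + 1] - row[c]) for c in range(len(row) - 1)]
            for row in img]

edges = horizontal_gradient(image)
# Every row of `edges` peaks between columns 1 and 2 - the computer has now
# "seen" the edge that was only implicit in the raw numbers.
```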

You have a great interest in education of youth, could you share more information on following the “Teach Your Own” methodology?

“Teach Your Own” is the name of a book by the American educator John Holt. His main ideas are that children need to be provided with a stimulating learning environment and, most importantly, to be taught how to learn rather than what to learn. Indeed, learning driven by motivation and enjoyment is much more efficient in terms of recall. Learning is also an ongoing process that should continue into adulthood. This is even more true in our time, when skills need to be constantly updated to follow the fast pace of technological advancement. The good news is that nowadays knowledge is everywhere and has never been as accessible. This is a trend that is set to grow, offering people all around the world the chance to learn various subjects at their own pace, especially with the emergence of MOOCs (Massive Open Online Courses). If John Holt’s ideas were disruptive in the 20th century, it seems that today a lot is in place to allow them to be implemented and to offer an individualised, fun, and efficient learning experience.

What are you most looking forward to about leading a session at the Self Driving Track Day next week?

I am extremely happy to talk about my field and share some of my experiences, but learning is not a unilateral process; it is rather a multilateral one. So I am looking forward to people’s feedback, and I am also keen to hear about their experiences and motivations, whatever their level of expertise. I think we are living in an exciting time where autonomous systems are really about to change society, so I can only be pleased to meet other people who share this enthusiasm.

Come along to Self Driving Track Days and attend Dr Chermak’s workshop on “From Robotics to Computer Vision: Self-Localisation in Autonomous Vehicles”. Book your tickets here >>

Goal! Lessons learnt from robots playing football

Sander van Dijk, Head of Research at Parkopedia, is delivering a workshop on 10 July on “Getting your Deep Neural Network into a car”.

The next in our series of interviews with workshop leaders at Self Driving Track Days introduces us to Sander van Dijk. Sander is Head of Research at Parkopedia, and will be delivering a workshop on 10 July on “Getting your Deep Neural Network into a car”.

Can you tell us a bit more about your work as Head of Research at Parkopedia?

At Parkopedia our goal is to improve the world by delivering innovative parking solutions. This means capturing and providing any information that is useful to a driver looking for a place to park: locations, both off- and on-street, opening hours, restrictions, prices, etc., but also, very importantly, the probability of you actually finding a space in any of those locations. The research team at Parkopedia develops advanced machine learning models to predict this probability accurately worldwide, using a wide range and large volume of input data. Furthermore, we build deep learning-based computer vision methods and other recognition techniques to help automate and ramp up our growth in coverage. Besides that, I am part of the team at Parkopedia that works to enable autonomous valet parking.

You have also worked on making humanoid robots play football, and are impressively a three-time vice champion in the RoboCup world cup. Can you tell us more about how you got into this? Why has this been such a successful pastime for you? What can others do if they want to get involved?

While I was studying Artificial Intelligence at the University of Groningen in The Netherlands, a friend and fellow student mentioned some robotics competition coming up in Bremen, Germany, just over the border, and that we should enter as a team. This was the RoboCup world championship of 2006, and we entered the 3D simulation league, where robots play football autonomously in a physical simulation. We ended up being completely trounced, but we loved the experience and the mix of a friendly, scientific, and competitive community. The next year we participated again, in Atlanta, Georgia, but this time we came more prepared and managed to claim second prize. I have been competing since; I came to the UK in 2009 to start my PhD at the University of Hertfordshire, after meeting my then future supervisor at RoboCup, and became vice world champion again in the simulation league with my new team there, the UH Bold Hearts. A few years later we decided to turn it up a notch, and put together a team of real physical robots to join the Humanoid football league, and in 2014 took the second prize trophy in that league at the world cup in Brazil.

The main aim of the RoboCup is to tackle the very hard problem of having robots and AI operate successfully and reliably in a dynamic world, not just on a discrete and structured chess or go board. This is not only about playing football: it will help enable robots and AI in daily life, such as in care, rescue, and autonomous driving. I believe the reason for our success in RoboCup, and that of the other amazing teams that kept us from first place, is robustness. The robots always need to be able to do something, even in the case of failure or unexpected situations, and for the full length of two 10-minute halves. One guideline we follow for this is to only introduce new methods that degrade gracefully: if a new fancy team strategy fails because of broken communication, each robot should still be able to play on its own.
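That "degrade gracefully" guideline can be sketched as a simple fallback pattern. This is a toy illustration only; the function names are hypothetical and not taken from the Bold Hearts' actual codebase:

```python
# Sketch of graceful degradation: if the coordinated team strategy
# fails (e.g. communication is down), the robot falls back to playing
# on its own instead of stopping entirely.

def team_strategy(comms_ok: bool) -> str:
    """Coordinated play; requires working team communication."""
    if not comms_ok:
        raise ConnectionError("team communication lost")
    return "execute coordinated team play"

def solo_strategy() -> str:
    """Baseline behaviour that needs no communication at all."""
    return "play individually"

def choose_action(comms_ok: bool) -> str:
    # The robot must always be able to do *something*.
    try:
        return team_strategy(comms_ok)
    except ConnectionError:
        return solo_strategy()
```

The key property is that the new, fancier layer can only ever add capability; removing it (or having it fail) leaves the older, simpler behaviour intact.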

Anybody who would like to get involved in RoboCup can have a look at their website, or reach out to our team, the Bold Hearts, on any of our social media accounts.

Did you learn anything in this hobby that you have applied to your work at Parkopedia?

In RoboCup you are often in a situation where you have to create, update or fix a complex intelligent system under pressure: a lot of magic happens in the five-minute breaks between halves. In a dynamic company like Parkopedia it can sometimes feel similar, when combining something unconstrained like research with tight business deadlines and expectations. Robustness, and an incremental, 'agile' approach to research, help to keep things under control while making quick progress. The lessons learned from RoboCup will especially apply to our project to create an autonomously parking vehicle. The benefits flow the other way as well, however: based on our experience at Parkopedia, we have introduced industry-quality processes to our RoboCup team, which is mainly made up of students.

Deep learning is all the hype now, not the least in the world of self-driving cars. Do you think this hype will continue? What do we need to watch out for?

Deep learning is here to stay for a while. It has revolutionised a range of research fields, cutting down the time frames in which even experts in those fields thought such achievements were possible. There are still plenty of fields and problems open to applications of deep learning, and the amount of research output on deep learning is massive. What is very special about this research is its open availability: because the field moves so fast, much of the research is made openly available on preprint platforms. This makes it possible for everybody to access and contribute, not just the few who are at an institution that can afford access through overpriced paywalls.

This is amazingly powerful, but it also circumvents the checks of traditional science. Although the system of peer review is not without its faults, bypassing it makes it very easy to put out work that looks scientific but may be plagiarised, impossible to reproduce, or simply bad research. Such works can set false expectations and add negatively to the hype.

Besides this, there are still questions around deep learning that haven't been fully answered. Why does it really work so well? (It is not uncommon that the original intuitive explanations of even some of the smartest authors later proved to be wrong.) Can we make useful claims about why a deep network makes a decision, especially when something goes wrong? Can we get it to learn useful things from just a small sample of data, as humans do?

In your Self Driving Track Days workshop you will have a look at the different shapes and forms of deep neural networks out there that you may want to use on a self-driving car. What is the biggest consideration in choosing the right network design?

The most important consideration, as with any machine learning problem, is to choose a model that fits your problem. This involves seemingly simple decisions, like choosing either a classification or a regression network. But a counting problem, for instance, can be modelled as either, so you have to think about which is more suitable to your use case. An important consideration in that sense is your cost function: what output is good, and what output is bad, for you? If counts close to the actual value are good enough, you may choose regression; if any count except the actual value is equally bad, you may choose classification. Besides that, it is useful to think about the format of the output too: do you really need the pixel-by-pixel mask of a Mask R-CNN, or are bounding boxes sufficient? Especially with limited hardware and real-time constraints, it is useful to keep things as simple as possible, and maybe even to reframe your problem so that a much cheaper model can run instead.
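The cost-function point can be made concrete with a toy sketch (not code from the workshop): framing the same counting problem as regression or as classification changes how a near-miss is penalised.

```python
import numpy as np

true_count = 3

def regression_loss(pred: float, target: int) -> float:
    """Squared error: penalty grows with distance from the true count."""
    return (pred - target) ** 2

def classification_loss(probs: np.ndarray, target: int) -> float:
    """Cross-entropy: only the probability on the true class matters."""
    return -np.log(probs[target])

# Regression distinguishes a near-miss from a wild miss:
near_miss = regression_loss(2.9, true_count)   # tiny penalty
far_miss = regression_loss(10.0, true_count)   # large penalty

# Classification does not: both of these classifiers put the same 5%
# on the true count of 3, so they get identical loss, even though one
# guessed the adjacent count 2 and the other the distant count 0.
probs_adjacent = np.array([0.00, 0.05, 0.90, 0.05, 0.00])
probs_distant = np.array([0.90, 0.05, 0.00, 0.05, 0.00])
```

If "close counts are good enough" describes your use case, the regression loss encodes that preference directly; if any wrong count is equally bad, the classification loss does.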

What are you most looking forward to about leading this workshop?

I am looking forward to meeting all the people interested in the exciting area of self-driving cars, and deep learning specifically, and to having interesting discussions about recent developments, opportunities and limitations. I hope the participants will come away with some useful knowledge and ideas for applying these techniques in practice, and so will I, as there is always more to learn in this fast-paced field!

If you want to hear more from Sander, he will be joining Self Driving Track Days to deliver a workshop on “Getting your Deep Neural Network into a car” – View the full agenda here >>

Meet Meena, Functional Safety Consultant at Intel

Meena Rajamani, Functional Safety Consultant at Intel Corporation Ltd, UK, delivering a workshop on “Functional safety considerations for autonomous vehicles.”

We recently caught up with Meena Rajamani, Functional Safety Consultant at Intel Corporation Ltd, UK who is delivering a workshop on “Functional safety considerations for autonomous vehicles” at Self Driving Track Days in Milton Keynes on 10 July 2018.

You have over 17 years’ experience in the automotive industry, in various parts of the world, and during this time have you noticed any differences in geographical approaches to automotive developments?

Each OEM reflects the region’s culture in its strategy, especially in its R&D investments and new market entry strategy. Nowadays, with the increasing acquisitions across the globe and autonomous driving technology revolution, the geographical differences are narrowing.

We run a lot of initiatives that are aimed at supporting Women in Engineering, what tips would you give to a woman starting out in engineering today?

In recent times, the engineering world has become more aware of incorporating diversity in its workforce. Therefore, there are fewer barriers and stereotypes to overcome compared to 20 years ago. Start your career confidently and without inhibitions; do not curtail your ambitions and, above all, don’t be afraid to showcase your abilities! Many companies, like Intel, run a number of initiatives targeting diversity and inclusion – make use of them.

You have worked in embedded control applications such as infotainment, powertrain, chassis control, body and ADAS in technical, management and process areas, what is your current focus area at Intel?

At Intel, we have a diverse portfolio of products for different markets. Intel and Mobileye design autonomous driving solutions with a focus on safety and scalability.

We are leading the industry in camera-centric sensing, crowd-sourced mapping, mathematical models for safety, and compute leadership. General purpose processors such as Intel’s Core and Xeon products as well as custom accelerators like Intel’s Neural Network Processor, FPGAs, or Movidius VPU enable AI solutions across the computing spectrum. My current focus area is FPGAs and also platform solutions with CPU and FPGAs for all types of workload including AI.

Could you share some details on your work on microcontroller and FPGA solutions for use in safety applications, which applications are currently the most interesting?

While the safety considerations of using MCUs and FPGAs in a conventional car – in infotainment, powertrain and security solutions – are as interesting as always, the challenges of safety in an autonomous car are the most intriguing part of my work. Designing and implementing safety as part of the architecture definition, along with the varied use-case considerations, including electric cars, poses some interesting challenges at work.

What are you looking forward to and hoping to get out of leading the functional safety workshop at Self Driving Track Days?

I am looking forward to meeting people from the automotive industry and sharing knowledge on functional safety between different players in the automotive value chain. I expect it also to be an interactive platform to discuss some hitherto unanswered questions related to fully autonomous, artificially intelligent future car architectures.

Join Meena on 10 July 2018 at the Daytona Karting venue in Milton Keynes. Book your tickets here >>

Technical Workshop agenda announced for 2018

Six exciting half-day workshops from industry and academia: Cranfield University, WMG, Parkopedia and Horiba Mira present deep dives into important subjects, alongside the industry partners on our popular Introduction to self-driving technologies workshop, which will feature segments from sensor experts, software and algorithm developers, and other suppliers to the autonomous vehicle technology ecosystem.

The one-day training event and exhibition takes place on July 10 at Daytona Milton Keynes, and is the perfect event for technicians, engineers, new entrants or specialists in autonomous vehicle technology fields, with training delivered by the professionals working on autonomous vehicle projects right now, from commercial research and development to scientific endeavour.

View Workshop Programme

Technical training workshops

The in-depth workshops, delivered by leading researchers and scientists from across academia and industry include:

  • Introduction to self-driving technologies workshop – Led by our industry partners
  • From Robotics to Computer Vision: Self-Localisation in Autonomous Vehicles – Led by Dr Lounis Chermak, Lecturer in Imaging, CV and Autonomous Systems, Cranfield University
  • Getting your Deep Neural Network into a car – Led by Dr Sander van Dijk, Head of Research, Parkopedia
  • Understanding Automotive Cybersecurity – Led by Dr Madeline Cheah, Senior Cybersecurity Analyst, Horiba Mira
  • Sensor hardware, sensor fusion and relevant test methods for use in driverless vehicles – Led by Dr Valentina Donzella, Smart Connected and Autonomous Vehicles MSc leader, Intelligent Vehicle Group, WMG
  • Functional Safety Considerations for Autonomous Vehicles – Led by Meenakshi Rajamani, Functional Safety Consultant, Intel

The Introduction to self-driving technologies workshop, first run in 2016, will feature up to eight companies presenting not just their own technologies, but also explaining their place in the ecosystem, challenges and applications, as well as expected industry developments in their sector.

Presentations from our event partners and sponsors will give attendees a complete overview of what goes into making a self-driving car – Self Driving Track Days is the perfect complement to a theoretical or online training programme, bringing the virtual into reality.

View Workshop Programme

Autonomous technology demos

Along with a mini-exhibition, the venue also features a demo area where attendees will get a chance to see autonomous vehicles up close, talk to the people who work on them, and even take a demonstration drive.

“It’s a great event, with a fantastic, positive atmosphere,” comments Conference Director, Hayley Marsden. “I’m really thrilled with the level of interest we’ve already picked up and am delighted to bring this event back to the home of connected and autonomous vehicles, Milton Keynes, so close to the UK’s industrial heart. The agenda is strong and relevant, and it’s satisfying to know we’ll have a motivated technical attendance from the UK and beyond.”

Fully catered, the event will continue into the evening with an informal business networking event, which will include a BBQ dinner and Go-Karting (charged separately).

Autonomous Demos
Read event review from July


Cybersecurity is a mind-set

Madeline Cheah, Senior Cybersecurity Analyst, HORIBA MIRA, is delivering a workshop on “Understanding Automotive Cybersecurity”

Ahead of delivering an in-depth workshop at Self Driving Track Days in July, HORIBA MIRA have taken time out to write an article for us on cybersecurity. Madeline Cheah, Senior Cybersecurity Analyst, will be delivering “Understanding Automotive Cybersecurity”.


Cybersecurity is really about the absence of behaviour – that is to say, undesirable behaviour. The challenge is compounded by the fact that it is hard to test for absence, and many vulnerabilities – whether design choices or implementation errors – are unintended. How would you test for an absence of unintended behaviour?

There is no such thing as perfect security.  If we use the simple example of car theft – a criminal who really wanted to take a specific car would just pick it up and put it on a truck.  Instead, the aim is to make a system infeasible to attack, in the hope that attackers will go elsewhere.  And if every single system was infeasible to attack, then, in an ideal world, all would be protected.

However, this is not an ideal world, and what we have instead is a footrace between attacker and defender.  This is exacerbated by the attacker-defender imbalance, whereby the attacker only has to find one vulnerability to exploit, but a defender has to protect as much of the system as possible.  There is always a balance to be struck between what’s usable, what’s cost-effective, what’s reactive and what can be proactive.

Automotive Cybersecurity

There are several drivers in the automotive industry that have led to challenging issues in the cybersecurity arena.

Firstly, there is increased connectivity, both inside and outside the vehicle.  Secondly, there is increased complexity, with the advent of many systems that have been introduced for reasons of safety, security or marketability. Finally, there is convergence of technologies, between those that were designed for vehicles (for example advanced driver assistance systems), and those that were integrated into vehicles (wireless Internet connectivity).

There are no “strong” or “weak” parts of a vehicle; even what might seem like a trivial attack can be chained with other attacks to make the end impact potentially devastating. Furthermore, with every feature that makes something safer or more convenient, there is potentially an equal amount of convenience for an attacker if sufficient defences are not in place.

Instead we talk about hardening the system, where we close as many security holes as feasibly possible.  There are some fundamental principles we can follow as a guide. We can use defence in depth, where there are multiple layers of security, such that holes in one layer are covered by another layer.  We can use the principle of least privilege, where by default, nothing is allowed, with the necessary functionality enabled one by one.  We can ensure that security is in a system by design, rather than being retro-fitted, such that the holes don’t appear in the first place.
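The least-privilege principle can be illustrated with a toy sketch (entirely hypothetical; real automotive gateways are not built this way): a default-deny filter passes only the message IDs that a feature has explicitly enabled.

```python
# Least privilege, sketched as a default-deny message filter for a
# hypothetical vehicle gateway: by default nothing is allowed, and the
# functionality each feature needs is enabled one by one.

ALLOWED_CAN_IDS: set[int] = set()   # empty by default: deny everything

def enable_feature(can_ids: set[int]) -> None:
    """Explicitly grant only the message IDs a feature actually needs."""
    ALLOWED_CAN_IDS.update(can_ids)

def gateway_passes(message_id: int) -> bool:
    """Default deny: anything not explicitly enabled is dropped."""
    return message_id in ALLOWED_CAN_IDS

# A (made-up) diagnostics feature enables only the two IDs it uses:
enable_feature({0x100, 0x101})

gateway_passes(0x100)   # passes: explicitly enabled
gateway_passes(0x7DF)   # dropped: never enabled
```

The design choice is the direction of the default: an allowlist fails closed, so a forgotten rule blocks a feature (visible and fixable) rather than silently exposing an attack surface.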

System Boundaries

Currently, we can realistically draw a system boundary around the vehicle for the purposes of testing or analysis. However, the horizon is full of vehicles that are connected to each other, to the cloud, to infrastructure, to peripheral devices and wearables. The vehicle will not just be the end target, but the means to an end, whether that be a backdoor into infrastructure, ram-raiding, terrorism, privacy violation, financial crime, or any of the other criminal activities that now take place through more conventional computing methods.

Cybersecurity in many ways is a mind-set. Not everyone has to be aware of every technique or attack, but knowing when and where to involve a security engineer is crucial. Even if a product itself is purely mechanical, sooner or later it will be attached to or integrated with an electronic system; and computing and IT are presumably used for designing the product, for protecting the intellectual property attached to it, and for data analytics. Away from automotive technology, cybersecurity awareness can take many forms in many disciplines. In design disciplines, we look at developing security that lay users can use to protect themselves. Psychology and linguistics could allow us to distinguish various types of threat actors. An understanding of the law could help with questions to do with product liability.


Here at HORIBA MIRA, we aim to be a trusted partner to manufacturers. That means working collaboratively with the project team every step of the way. Our cybersecurity related projects cover consultancy, concept development (security by design) and independent security assessment, whether at component, vehicle or lifecycle level.

We are also active in the research sphere, whether that be through collaborative projects funded by InnovateUK (such as 5Stars), through applied research internally, or through embedded PhD programmes in the business. HORIBA MIRA is also heavily involved with the development of international standards in the field, with our experts representing the UK in the development of ISO/SAE 21434 (Road Vehicles – Cybersecurity Engineering) and ISO 26262 (Road Vehicles – Functional Safety).

Join HORIBA MIRA at Self Driving Track Days in Milton Keynes this July. Book your tickets to attend >>

Will Driverless Cars Grow up in 2018?

With driverless vehicle hype having seemingly hit a peak in 2017, this year began with the news that Waymo, a subsidiary of Alphabet (Google’s parent company) and ride-hailing giant Uber had settled their legal wrangle, with the former taking a sizable chunk of the latter.
On many fronts, this is great news – intellectual properties have been successfully protected, and two of the largest investors in trying to find solutions to the autonomous vehicle problem are now aligned.
Waymo’s rapidly developing test and development vehicle fleet is spreading across the US, and despite Uber’s own bruising encounter with the regulators in the UK, the company’s R&D division ‘Uber ATG’ has continued to invest and make strides towards creating robot drivers. The much-publicised fatality involving one of Uber’s autonomous vehicles in March 2018 has sent a much-needed shockwave across the industry, and will – despite the tragedy itself – undoubtedly serve to improve safety practices and virtual and closed-course testing.
At CES in January, the world’s press started to understand the scale and complexity of autonomous driving, being able to compare not just two, but more than a dozen companies demonstrating their technologies in one place – and get a taste of the differences between those that are starting on the journey, and those that are nearing the end, with viable products entering the marketplace in the not-too-distant future.
In the UK, the Government’s programme of funding groups of companies to embark upon research projects will ‘start to end’, with the last of half a dozen funding rounds opening for bids this year and dozens of the previously funded projects reaching a zenith in 2018, with public trials, demonstrations and public exhibits.
With more than 50 projects already match-funded by the taxpayer, and consortium members drawn from academia, industry, the public sector and innovative new organisations and startups venturing into this field, it’s reasonable to say that this joint investment by the public and private sectors has enabled the UK, if not to take the global lead, then at the very least to reduce the gulf behind countries such as Germany, Japan and the US.
There are various international ‘leader boards’ which rate countries’ readiness for autonomous vehicles, all of which now agree the UK is Top 10 material… imagine where we were before £100s of millions of investment!
It’s important to note that government investment won’t ‘fall off a cliff’, because there will continue to be opportunities for funding support through tax incentives for R&D (through Innovate UK), promotional support to international trade shows (through the Tradeshow Access Programme), and networking events for numerous specialisms run by the Knowledge Transfer Network and the various industry Catapults.
For consumers, however, none of that matters – other than perhaps a smattering of national pride. For you and me, as buyers of cars, what do driverless cars actually mean? Will we go out and buy one?
The story of driverless cars, first and foremost, is about safety.
Don’t get hung up on a computer taking away your pleasure in driving; it will merely replace the driving that is not enjoyable, and perhaps help you avoid accidents when driving is difficult: in bad weather, when you’re tired, near a school. No driver, anywhere, could begrudge that, or want to put others in harm’s way.
Improved road safety alone can easily repay these investments, just for the UK, even discarding the commercial opportunities in developing and exporting national expertise. Let’s take the number of annual road fatalities in the UK to be around 1,700. DfT figures tell us that the cost to the taxpayer of each fatal accident is about £1.8m; multiply that up and roughly £3 billion is lost every year.
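Using the round figures above, the back-of-the-envelope sum is easy to check:

```python
# Illustrative only, using the article's round figures:
# ~1,700 UK road fatalities per year, ~£1.8m taxpayer cost per fatal accident.
fatalities_per_year = 1700
cost_per_fatality_gbp = 1.8e6

annual_cost_gbp = fatalities_per_year * cost_per_fatality_gbp
# ≈ £3.06 billion per year
```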
The real figures (both of fatalities, and of the cost to the taxpayer of each one) are higher… but even indicatively, you can see that the return on investment in CAV technologies is clear. And nobody is talking about the roughly 4,000 fatalities caused by human drivers every day globally, or the multi-billion-pound negative impact of that.
Aside from that, the people that write, enforce and are governed by various laws covering vehicles (actually more than 100 statutes going back hundreds of years) also need to understand the new technologies, and insurance companies need to understand where liability might end up in the future. Other businesses, such as those that employ drivers on long or short distance deliveries, need to understand how their business or employment models might change, and the education system needs to adapt too.  It’s those ideas which will form the foundation of a new series of web interviews we’re producing on AutoSens TV.
We have been playing around with autonomous vehicles, and helping to teach people about them, for a couple of years now, but 2018 will provide the largest platform to date to talk about autonomous vehicles to the Great British public, as the London Motor Show will have a new feature, a dedicated Autonomous Vehicle Zone.
For many of the 60,000 expected visitors, it will be their first encounter with autonomous vehicles, as they have mostly been confined to test tracks, workshops and secret test facilities.
But at a motor show at ExCeL, in London? CES is the Consumer Electronics Show (with cars), the Detroit Motor Show now has a technology showcase… the worlds of technology and automotive have combined.
When you consider that the Goodwood Festival of Speed’s most popular stand was Tesla, the time is right for the UK’s biggest car events to embrace autonomy and ‘close the loop’, showing that the UK still has ambition to progress on the international stage.
Cars are connected and more automated than they’ve been before, and I’m excited to see how the public reacts to the next generation of technologies that will be seen on our roads.
If you can’t wait until May then there are lots of free newsletters you can sign up to, for example from Robohub, MCAV or Innovative Mobility Research – and if you really want to get your hands dirty, there’s always Formula Pi – scale-model driverless cars that you can code, race against your friends, and even compete with other people remotely – or Self Driving Track Days, where engineers gather to share their technical insights to help accelerate the spread of knowledge to new companies across the supply chain.
In 2018, there’s really no excuse not to get out and learn about this new wave of exciting new technology, because it’s well on its way.