Driverless cars for small change?

External sensor suite and processing: Raspberry Pi, power bank, Pi camera and Photon Cannon… or torch.

After stumbling across a YouTube video of a toy car made into a driverless vehicle, I contacted the creator, Ran Zhao, to ask him a little more about how he did it…

Ran built the car and developed a small convolutional neural network running on a Raspberry Pi computer, trained using about 200 labelled grey-scale images.

Road sign recognition working in poor light conditions at only 10fps
Path following, showing footage from the on-board Raspberry Pi camera in the top-right.

“The data has been extended so the network can handle never-before-seen road conditions and even drive in the dark. I got slightly less than 10 fps with both neural network and traffic sign classifier running at the same time.”

The video contains four scene types: following a marked path in two variations, the same task again using only torchlight, and stop sign recognition in daylight and in torchlight.

“The play speed of the first scene is adjusted to be 3% faster just to synchronize the two videos. The rest scenes are of normal play speed.”

Do you have any training in this area, or is it part of your job?

“I just did it for fun. This is not exactly what I do for a living. My training is mostly done on YouTube and Google search.”

Tell us about the hardware and software…

Under the hood: Interfacing with the steering actuator and motor control

“The hardware is very simple, a toy car, a Raspberry Pi, a camera, a DC motor drive, a USB power bank and a 9v battery.  Tensorflow and OpenCV are the most popular tools for deep learning and computer vision. The alternatives would be software like Caffe or Theano.”
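To give a flavour of how those pieces fit together (a minimal sketch, not Ran’s actual code; the image size, layer sizes and steering classes are assumptions), a grey-scale frame grabbed with OpenCV can be resized and passed through a small convolutional network built with TensorFlow’s Keras API:

import cv2
import numpy as np
from tensorflow import keras
# Tiny CNN for 64x64 grey-scale frames; three illustrative outputs: left / straight / right.
model = keras.Sequential([
    keras.layers.Conv2D(8, 3, activation="relu", input_shape=(64, 64, 1)),
    keras.layers.MaxPooling2D(),
    keras.layers.Conv2D(16, 3, activation="relu"),
    keras.layers.MaxPooling2D(),
    keras.layers.Flatten(),
    keras.layers.Dense(32, activation="relu"),
    keras.layers.Dense(3, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
cap = cv2.VideoCapture(0)  # Pi camera exposed as a standard video device
ret, frame = cap.read()
if ret:
    grey = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    grey = cv2.resize(grey, (64, 64)).astype(np.float32) / 255.0
    print("steering class:", int(np.argmax(model.predict(grey[None, :, :, None]))))
cap.release()

In a real build the network would be trained on the labelled frames first and the predicted class mapped to motor commands; the point here is simply how little code the core loop needs.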

You’ve managed to keep the cost low for the project, how many hours did it take?

“The Pi and Pi camera cost about $45, the car was $10 and the USB power bank was about $8, so the overall cost is more like $80.

Road sign recognition in daylight

I built everything including hardware modifications and software coding in about three weekends. Most codes are from my old projects and I just reassembled and slightly optimized them so they didn’t overload the Pi.”

Are you planning a Mark 2?

“I am actually planning on Mark 2 which would be a self-driving and obstacle avoiding vehicle using deep reinforcement learning.”

What advice would you give other people learning about this technology?

“My advice to people who are interested in this technology would be ‘get your hands dirty’. The whole ‘deep learning’ may have been a mystery to many people but they are actually incredibly simple. From my perspective, the only way to learn and master would be to practice because that is what deep learning is all about. ”

Watch the video

Free driverless vehicle test track facilities

Driverless vehicle test track

We are taking over a large part of OAMTC Zentrum Teesdorf, pictured above, a large driver training facility near Vienna.

Test track features

We will have three outdoor test track areas: a 200m acceleration / deceleration area, a small autocross circuit (about 1km long, with a variety of features including a chicane, slalom and straights), and a figure-of-eight skidpad, all laid out to SAE / Formula Student specifications (most areas will be marked out with cones).

All areas will be marshalled (as indicated by red triangles).

This will be free to use for any organisations (including Students / Universities) with autonomous or driverless vehicles to test.

Vehicles must have a back-up driver on board whenever they are in use.

Please note that this activity is not endorsed or supported by SAE or its affiliated organisations in any way.

FSD team registration →

What is LiDAR and why is it so important to driverless cars?


If you’re an enthusiastic consumer of news about driverless cars, you will know that the almost universal hardware seen on most research and development vehicles is something called LiDAR.  But what is LiDAR and why is it so important?

Definition

LiDAR stands for light detection and ranging. It’s a name that hides what LiDAR really is, so let’s walk through those initials.

Li = Light. What most people don’t realise is that this light is infrared, invisible to the naked human eye, and emitted from the unit in a narrow laser beam.

D = Detection.  The unit looks for signals bouncing back from objects nearby, typically from 1m up to perhaps 100m away.

A = and…

R = Ranging. Understanding the distance to objects in that 1m-100m bracket is really important, and how detailed that understanding is depends on the resolution of the unit and the range.

Uses of LiDAR

LiDAR’s first and most established use has been, and is likely to remain, surveying, whether from the ground or the air (mounted on planes or small drones).

Applications include urban planning, countryside management, livestock counting, heritage and conservation, post-disaster damage assessment, space (early versions of LiDAR were used to map the moon) and even archaeology, where it can reveal lost and buried buildings.  One very smart use is in forestry, where LiDAR can be used to measure the volume of unharvested timber on a hillside to negotiate a sale price before a chainsaw is even started!

It’s highly effective at creating a three-dimensional map of an environment, whether man-made, natural or in-between, and updating that 3D picture several times a second.  Human vision performs the same function but achieves it in a different way: your brain compares the images from your left and right eyes to understand the distance and relationship between you and the objects around you in 3D space.

In-car view from a recent demo using a Velodyne LiDAR, showing various features & problems with LiDAR

Weaknesses

As with any system which relies on receiving a signal travelling in a straight line, LiDAR has one important vulnerability. It can’t see behind solid objects. That sounds obvious, but if your car has a single LiDAR unit and a cyclist pulls alongside the car, a comparatively large area is hidden from view. If a bird were to land on your car roof next to the unit, you could temporarily lose half your car’s useful LiDAR data!

Interference

Atmospheric conditions and other LiDAR units could, in theory, interfere with infrared laser signals.  Practically, however, both of these can be filtered out by software within the system, which is programmed to respond only to a very narrow band of infrared light returning in a straight line to the unit.  In the exceptionally unusual event of a pulse clashing with a signal from another vehicle’s unit, the LiDAR can change frequency and carry on as normal within a tiny fraction of a second.

Cost

LeddarVu: a solid state LiDAR unit made by LeddarTech

Much is made of the cost of LiDAR units. High resolution units can cost upwards of £10,000. These are reliable and robust, and provide very high resolution data in a 360 degree circle around the car (if mounted on the rooftop, as is often seen on research vehicles).

Google have developed their own system and brought their unit cost down to around £5000, still providing high resolution data.

That cost per unit is far too high to be practical on a production car, so car manufacturers will turn to ‘solid state LiDAR’.  This is typically a lower resolution*, easier to manufacture and far cheaper alternative, with a narrower field of view.  The key benefit of this approach is that individual units cost around 95% less, between £200 and £500 currently, and prices should fall further in the next couple of years, to around £50.

[*Much higher resolution solid-state LiDAR units are in development in 2017, which will generate more data points and higher resolution than moving units, so it won’t be long until solid state becomes dominant.]

Resolution

Just like cameras, LiDAR units need good resolution – that is, enough data mapped in 3D, in enough detail, for software to interpret what objects are.

This is easy for us humans, but drop that resolution to a fly’s eye and it gets very difficult. Because the infrared laser beams fan out from the source in a widening pattern, the resolution close by is great (enough to see facial contours and fingers), but far away it’s dreadful (is that a lamp-post or a pedestrian?).
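To put rough numbers on that, here is a back-of-the-envelope sketch; the 0.2 degree angular resolution is an assumed, typical figure rather than the spec of any particular unit:

import math
angular_resolution_deg = 0.2  # assumed spacing between neighbouring laser beams
for range_m in (2, 20, 100):
    spacing_cm = 100 * range_m * math.radians(angular_resolution_deg)
    print("at %3d m, neighbouring points are ~%.1f cm apart" % (range_m, spacing_cm))
# at 2 m: ~0.7 cm (fingers visible); at 100 m: ~35 cm (lamp-post or pedestrian?)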

That, among the other issues cited above, is why LiDAR is one of several other types of sensor that is deployed in driverless cars.

The beams sent out from the unit bounce back when they hit objects and, after being mapped into 3D space, generate a ‘point cloud’. This is a complex group of points (depending on resolution, sometimes several million points created every second) which must be interpreted, usually into polygons. A polygon is a much simpler shape to process: like telling a chocolate box from a wardrobe, its size, shape and distance make it easy for a computer to interpret and categorise.
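As a toy illustration of that interpretation step (a sketch which assumes the returns have already been grouped into one cluster), collapsing a cluster of points into a simple bounding box gives the software far less to reason about than the raw returns:

import numpy as np
# A made-up cluster of LiDAR returns (x, y, z in metres), roughly person sized.
cluster = np.random.normal(loc=[4.0, 1.0, 0.9], scale=[0.15, 0.15, 0.45], size=(200, 3))
lo, hi = cluster.min(axis=0), cluster.max(axis=0)
centre, size = (lo + hi) / 2, hi - lo
print("box size (w, d, h):", size.round(2), "m")
print("distance to centre:", round(float(np.linalg.norm(centre[:2])), 1), "m")
# A person-sized box at ~4 m range is far easier to classify than 200 loose points.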

LiDAR or not LiDAR?

One debate that rages is whether or not LiDAR really is essential to driverless cars and other autonomous vehicles. Tesla, for example, think not (though their sister company SpaceX does use it on its rockets), as do a few other smaller suppliers, who choose to focus on cameras as the main ‘eye’ for self-driving vehicles.

On one hand, that allows them to focus their efforts on other sensors (usually cameras, but also perhaps Radar as well) which are far more established, cheaper to buy, and have many more professionals in the talent pool available to do the work.

On the other hand, it means that rather than developing their software algorithms to interpret a rich 360 degree field of 3D data (which might be easier), they are giving up the robustness that ‘validation data’ from an additional sensor would provide.

The debate will continue for at least another decade, until we have enough variety of driverless vehicles to understand the safety differences between the two approaches in real-world production vehicles.

In the short term, it’s not something you need to worry about… unless you’re building a driverless car, of course.

Autonomy vs Petrolheads


Hosting many events, we are fortunate to attract some really interesting people, whether they are already established in the driverless technology sector or moving towards it.

This guest blog is provided (with our gratitude) by Basileios Mavroudakis – a vehicle system dynamics (VSD) and controls expert, with a broad background on  simulation techniques (Basil holds a PhD on Simulation in F1). Basil recently worked for supercar manufacturer McLaren on a top secret project… 


Recently Roborace made history by running two autonomous racing cars in Buenos Aires. According to reports, one devbot crashed but the other managed a respectably quick lap, clocking 186 kph (see the team’s tweets for updates). The racing incident is not a bad thing at all; rather, it is proof that Roborace’s visionaries really push the (performance) envelope.

A few months ago, Forbes published a very interesting story on VW’s autonomous race instructor, a performance-oriented ADAS made feasible thanks to advances in technologies similar to those developed by Roborace and, of course, Audi (the pioneering RS7 being the first autonomous car to compete decently against a professional racing driver), while Ford has recently filed a patent for an ‘Autonomous Racecar mode’. Meanwhile, the academic research community has been at the forefront of such developments, from Stanford’s pioneering Shelley to the autonomous Formula Student Germany class announced for 2017.

Of course, an autonomous controller’s ability to maintain control of the vehicle at the limits of the performance envelope and beyond is the key enabler for massive gains in active safety. However, there is also a pattern in the aforementioned schemes that is more relevant to us petrolheads: the provision for performance-oriented operating conditions.

People who know me know that I love driving. I enjoy a safe, sane and spirited drive when the road conditions permit, with a responsive four or two wheeled vehicle. And I relish the pleasure of exploiting the vehicle’s limits in a controlled hence safe environment (i.e. track days). Do I fear that the rise of robotic cars will deprive me of the joy of driving? Au contraire! I am very confident that autonomy will relieve me of the mundane task of driving under unpleasant circumstances and also be the enabler for a number of technologies that will retain or enhance the pleasure while driving under certain others.

The dynamics of high performance vehicles have been my core scientific interest ever since I was a student. The optimisation of their performance under all conditions has been the prime focus during most of my professional life and is yet to bore me (I doubt it will ever) while the human factors aspects of the driver-vehicle interaction, often cited as subjective attributes, have dominated my free time since I learned how to drive and ride.

Fig 1. An autonomous drift controller’s function developed by the author

During my 15 years of engineering involvement and almost 25 on/in the driving seat I have introduced, developed and experienced a number of performance enhancing technologies; some solely focused on going faster around the track, others on providing a higher level of engagement and most of them both.

I can understand the reservations expressed by some fellow-petrolheads, especially from a puristic perspective: Where do you draw a line between aids and skill? Where, indeed? Is the use of racing ABS to be condemned? Of paddleshifts? And if so, then why not strip gearboxes of the synchromesh too in order to increase the required skill level?

I really don’t have an answer, and I am inclined to believe that this is a matter of flavour: some people will prefer to allocate more control effort to the said tasks, while others (me included) prefer to focus on dynamic states like side-slip angle and mu variations. I am also willing to accept that sometimes enhancing the performance envelope can have a negative impact on engagement: adding lap-time-improving technologies can introduce layers of complexity, resulting in a less transparent response from the vehicle, and, of course, pushing the limits upwards usually makes for a less approachable car on the road, one that may require irresponsible risks to come alive.

And this is why I am so excited about the opportunities arising from the developments in autonomy: what’s not to like about a future performance car boasting a swollen performance envelope, the natural-feeling ADAS to help drivers go faster and hone their skills and a super safety-net to keep everyone, including the authorities, happy?

I acknowledge the excitement that comes from exploiting limit-handling close to and beyond our control bandwidth without any aids. I am afraid that this might become (if it is not already becoming) an exclusive pleasure for the privileged few.

What do you think?

Pushing the boundaries for driverless vehicles

I recently spent some time with Paul Fleck, President of DataSpeed Inc., at a closed circuit in France to talk through his company’s technology.

One of the technologies developed by his Michigan-based company has echoes from an earlier part of Paul’s career.  With a background in electrical engineering as a drive-by-wire specialist, Paul cut his professional teeth in the 1990s at the Ford-powered F1 outfit Benetton, working alongside Michael Schumacher in his heyday.

And while drive-by-wire technology was later banned in F1, its evolution has had a lasting impact and is now regularly found in robotics and cars.

DataSpeed has taken the drive-by-wire approach to another level, providing a multi-point interface to specific donor vehicles, in this particular case a Ford Mondeo (known as the Fusion in the US), which can then be used as a base vehicle on which autonomous vehicle software and algorithms can be developed.

Other companies produce similar hardware for other vehicles, of course, and we’ll be looking at those when the opportunity arises, but I’m writing this to put this vehicle into context – this is what you start with before you have an autonomous vehicle: a blank canvas research platform.

Some of the hardware in the boot / trunk of the car.

Three obvious bits of equipment sit in the front of the car: a laptop, small touch-screen unit and – somewhat interestingly – an XBox controller. There’s quite a bit more tucked out of sight behind the back seat, and some sensors on the roof as well.

“The touch-screen display enables us to turn power on and off for our various electronics. Engineers love it because if something doesn’t work, they can reset it and it works again magically.”

“The laptop is acting as the computing platform bringing the joystick commands and converting them to a protocol on the CAN network, which in turn interfaces with our drive-by-wire system on the vehicle and that’s how we’re able to control it – using an Xbox controller.”

The laptop has some of DataSpeed’s own software running on a version of Linux called Ubuntu, along with RViz, part of the ROS system often found in robotics development.
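As a flavour of what that glue layer can look like, here is a minimal ROS sketch (not DataSpeed’s actual interface; the topic name, axis mapping and scaling are assumptions) that turns controller axes into generic drive commands for a drive-by-wire bridge to translate onto the CAN bus:

#!/usr/bin/env python
import rospy
from sensor_msgs.msg import Joy
from geometry_msgs.msg import Twist
def joy_callback(joy):
    cmd = Twist()
    cmd.linear.x = 5.0 * joy.axes[1]   # left stick -> forward speed (m/s), assumed mapping
    cmd.angular.z = 0.5 * joy.axes[3]  # right stick -> steering demand, assumed mapping
    cmd_pub.publish(cmd)
rospy.init_node("joystick_teleop")
cmd_pub = rospy.Publisher("vehicle/cmd_vel", Twist, queue_size=1)  # hypothetical topic name
rospy.Subscriber("joy", Joy, joy_callback)
rospy.spin()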

Paul enables the computer control by hitting two buttons on the vehicle’s steering wheel, and a number of status codes flash across the laptop’s screen. He flicks a few buttons on the Xbox controller, demonstrating brake, throttle and steering control while the vehicle is parked, before we drive off.

“Normally, one person shouldn’t be the safety driver and controller.  How we’ve developed the system means that we can easily hand control back to the driver, or the driver can take control back whenever they want.”

Paul pointed out that the use of an Xbox controller isn’t a core feature; it’s there to show off the control systems and the interface with the actuators already in the vehicle.

“We’re marketing to engineers that need a development platform, they’ll put their computer in the back, a sensing system and then some software, then they can get started”

The handover problem sounds small and simple, but it’s quite complex.  Even Ford recently announced they were scaling back plans for partially autonomous vehicles and instead focusing efforts on fully autonomous vehicles, partly because their engineers were relaxing too much to be able to regain control of the vehicle safely when prompted.  This reflects findings from various studies run in simulators at the University of Southampton, which described the delay between alerting the driver and a successful handover as ‘alarmingly slow’ – but these are human problems, not engineering problems… hence Ford’s interest in removing the human completely.  No human, no handover problem.

“In 3 seconds, you can travel 200 feet, at speed.. “ says Paul, “remember, if the autonomous computer is handing back control it means that it does not understand what’s going on, and what you’re asking a human to do in that period of time is: 1) switch context from relaxing crossword to driving a car; 2) understand what the threat is, and then; 3) do an evasive manoeuvre.. and I’m not sure humans are capable of doing that.”

It’s also clear from the use of a computer game controller that it wouldn’t be a great stretch to find control methodologies in computer games to work with this system as a route to simulating environments and developing algorithms, for example Microsoft’s recent release of sensor simulation software for drones, or OpenAI’s release of a driverless car simulator in GTA V.

Here’s a contradiction in terms: How can you be a petrol-head and passionate about autonomous vehicles?

“Well, it’s still going to have an engine… when you’re a petrol head, you’re probably more technical,  into new technology and there’s a lot of new technology in autonomous vehicles.”

“I’ll be honest, this car right here is probably more sophisticated than an F1 car right now, it’s hybrid, highly engineered, there’s a lot of complex electro-mechanical systems, a lot of technology just in this vehicle. Unfortunately, I think the UK would be further ahead in ‘by-wire’ today had the FIA not banned that, because all that technology was being developed in England – I was involved with that over 20 years ago.”

At this point, Paul suggested we switch into navigation using GPS, showing a readout from the pair of NovAtel GPS receivers mounted on the roof, indicating latitude and longitude, as well as the standard deviation (i.e. the level of accuracy derived from the signals before further processing or sensor input) of about 30cm.

GPS has weaknesses: electromagnetic atmospheric disturbances and, particularly in the UK, woodland and rain. It also needs a lot of computing power.  Ever wondered why your satnav gets hot, or your phone eats battery power so quickly when you’re using it for navigation? It’s all about the maths.

You see, while GPS signals travel at the speed of light, they are coming from a long way away. By the time a signal has travelled from multiple transmitters to one static point, both the direction and distance to the target will have shifted.  Add to that the fact that the Earth is moving… and the receiver unit is moving too, and you start to wonder how anyone made it work in the first place.  Repeating that level of calculation several times over to get the accuracy needed becomes very difficult.  And that’s not all, there’s more…
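A rough calculation shows why the sums are so demanding; this sketch assumes the nominal GPS orbital altitude of about 20,200 km:

c = 299792458.0       # speed of light, m/s
altitude_m = 20200e3  # nominal GPS orbital altitude
print("signal travel time: ~%.0f ms" % (1000 * altitude_m / c))  # roughly 67 ms
for clock_error_s in (1e-6, 1e-8):
    print("%.0e s of timing error -> ~%.0f m of range error" % (clock_error_s, c * clock_error_s))
# Even a few nanoseconds of error moves the fix by metres, which is why a receiver
# has to solve for its own clock error alongside its position, continuously.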

“GPS is a line of sight modality, it’s not just vulnerable to bad weather and trees.  In cities you get the urban canyon effect, so you don’t get a lock on enough satellites to get a path because the signal can’t go through buildings, which is why we also add inertial measurement – an IMU” explains Paul.

A connection to the local cellphone network allows for corrections to be made to the received GPS signal.

“For this circuit, if I just wanted to drive around, that’s more than enough.  If we can hook up to the cellular network, and use the RTK (real time kinematics) we can get that down to around 3cm, but you’re probably not going to use your GPS for on-road detail navigation, you’d use your LiDAR and camera systems to see curb details”

To demonstrate the benefit of the LiDAR (on this vehicle, a Velodyne puck mounted in the middle of the roof), we parked up by the edge of the circuit, near some trees, a tyre barrier and an old gateway.

LiDAR shoots out focused infrared laser beams, invisible to the naked eye, in straight lines. Because these are photons (albeit not visible to humans), they can be blocked, and an obstacle in the way will cast a shadow, hiding from view anything behind it.

The Velodyne LiDAR puck is a common sight on driverless vehicles in R&D, but probably won’t be seen on production vehicles in the same form.

To create a map, whether for following a path in real-time or recording a 3D environment for future use, you need obstacles fairly close-by for the laser to hit.  That’s why we’re at the edge of the circuit, rather than in the rather flat, featureless centre. In an urban environment, there would be more buildings, pedestrians and objects in general.

 

“For the LiDAR to work for SLAM (simultaneous localisation and mapping), we need things for it to see – we can see the wall there, even the foliage on the tree – if we humans can see that and understand it, then a computer can.  We can record all this as point cloud data in a file, at the same time we can drive in a small circle in this area – relative to the objects around us.  We’re also recording the speed as well.”

The video grab below shows output from the LiDAR displayed on the laptop. The arrow-shaped wedge in front of the car is the shadow cast by me, acting as an obstacle to the infrared laser beam – behind that shadow, everything is invisible.  This is the main reason why multiple sensor systems are used.  The GPS readout is visible on the left of the screen.

Paul replayed the recording, and I watched as he withdrew his hands from the wheel and feet from the pedals and we moved away, the laptop in front of me clearly showing the point cloud that we’d recorded just moments previously, with tree branches, barriers and cones all mapped out in 3D.

The computer shows the computed path ahead of the car icon on screen as a purple line, constantly calculating the robotic commands needed to control the car against the incoming stream of corrected GPS data.

The mapped path was displayed on the laptop as a thin purple line drawn out ahead of a small car icon, with a few blue blobs immediately in front of the icon to indicate the exact position that was being calculated at that moment.
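The control loop behind that picture can be surprisingly compact. Below is a minimal pure-pursuit style steering calculation (a generic sketch, not DataSpeed’s algorithm; the wheelbase and lookahead distance are assumed values) which turns the current GPS-derived pose and the recorded path into a steering angle:

import math
def pure_pursuit_steering(pose, path, lookahead=5.0, wheelbase=2.85):
    """pose = (x, y, heading in rad); path = list of (x, y) waypoints in the same frame."""
    x, y, heading = pose
    # first waypoint at least `lookahead` metres away becomes the target
    target = next((p for p in path if math.hypot(p[0] - x, p[1] - y) >= lookahead), path[-1])
    alpha = math.atan2(target[1] - y, target[0] - x) - heading  # bearing to target, vehicle frame
    return math.atan2(2.0 * wheelbase * math.sin(alpha), lookahead)  # classic pure-pursuit law
path = [(i * 1.0, 0.02 * i * i) for i in range(50)]  # a gently curving recorded path
print(math.degrees(pure_pursuit_steering((0.0, 0.0, 0.0), path)))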

“There’s a lot of maths to do this properly” Paul highlighted, as the laptop continued to warm my knees. “But take the GPS, LiDAR, put a camera on, tie in your GPS to a map database, you’re ready to start developing your fully autonomous vehicle.”

OK, so we’re all set?  Not so fast. Actually, you need more than just a development platform.

“You need engineers – Mechatronics, by-wire systems, design and build of sensor systems, mechanical engineers for integration, probably the most difficult is the algorithms – the type of software that’s being developed is not web development… it’s highly maths based, and implementing high level maths as algorithms in a programming language is hard.  In a development environment, we’re using ROS and certainly Python, C and C++, but for production you’re always going to be looking at safety rated operating systems, something like VX Works, an i7 platform with an NVidia processor for vision, safety rated and validated – the software is the most difficult part for both R&D and the transition to production”

Next up in our wide-ranging conversation was testing. Much has been made of building physical environments for testing vehicles; every country has test tracks, and many manufacturers and even suppliers do too.

“Doing testing like this is expensive.  You can only run so many scenarios. Putting together simulation environments and doing 100,000 variations and use-cases is very important, you’re not going to be able to do that manually.”

“Engineers make assumptions, we don’t design anything to fail, but look at Apollo 13 or the Shuttle programme, it’s the edge cases that crop up and cause problems.  We’ll minimise bad things happening with autonomous vehicles, I don’t think you’ll ever eliminate them, but overall driving will be safer.”

The average age of cars on the road is about 10 years, so even if every new vehicle were made autonomous ‘tomorrow’, it would take ten years to reach 50% penetration.  But that switch-over won’t happen for at least another decade, so we’re already at 20 years for just the first half. Allow at least another decade for the rest, and we’re looking at an almost entirely autonomous fleet by 2047.

I asked whether Paul could envisage a future in which we are only allowed to drive on a track, driving becoming an exotic pastime much as recreational flying is today.

“I drive an Aston Martin, I don’t want that to be autonomous, I want the experience of driving a vehicle like that. I don’t want the computer to control it, I want to be the driver!”

Nobody really knows.  We can all gaze wistfully into the future, but any type of improvement to safety on roads must be taken seriously.

“Within the next 10-20 years we’re going to see a large amount of transformation in the transport and mobility industry.  Look what Uber and Lyft have done to the use of the Taxi and the Taxi driver, with ride-sharing.”

“One of the things in the automotive industry, what the OEMs have to pay attention to, is that utilisation of the vehicle is around 5%, something with a value of tens of thousands is just sitting in the driveway.  Uber drivers aren’t just using their own cars for themselves or their rides, they’re opening the opportunity for fewer cars in the future.”

The final demo of the afternoon was a high-speed lap, recorded at a fairly brisk pace – certainly not race speed, nor particularly close to what anyone would call a racing line, but if you happen to have access to a closed circuit, it would be rude not to put your foot down just a bit.

Paul made a point of saying that the car’s replay of the lap would highlight problems that many driverless vehicles have, and that I highlight above as well: compute power.  This was, he reminded me, a ‘blank canvas’ R&D platform, not a supercomputer-equipped production model.

Thinking back to the path prediction line plotted by the laptop: each time we drove fast enough to catch up with the planned path (as calculated against the GPS plot), the car lost accuracy. But remember, we were in a far from ideal environment and only using one of the systems – many more would be essential for a robust road or race experience.

Deliberately pushing limits of a vehicle’s technical capabilities is bound to end in problems, but that’s the nature of R&D – and it’s very different to what anyone should expect to see in production vehicles.


Paul will be bringing his vehicle to the Self Driving Track Days events in April (UK) and July (Austria).

If you’ve found this article interesting, you can find out more and take part at one of our upcoming meetups, or attend one of our one-day introductory workshops.

Paul will also be on-site during AutoSens US, which is taking place in May at the M1 Concourse in Detroit, Michigan.

AutoSens is our flagship project, and the world’s leading vehicle perception conference and exhibition. Aimed at engineers already working in industry, it sits alongside the Self Driving Track Days project and provides us with great access to industry experts around the world.

Inertia – Another essential ingredient for driverless vehicles


I know of at least three driverless R&D vehicles which rely on an inertia-based sensor to act as a back-up or supporting technology to other methods of path plotting and navigation (usually a combination of GPS and Lidar, depending on the hardware and software deployed). 

While the inertial systems found in commercial jet liners boast extraordinary accuracy over thousands of miles (and have a price tag to match, often tens of thousands of pounds), the inertial measurement unit inside a smartphone has much coarser accuracy and costs a few pennies.  Needless to say, the units often found in autonomous vehicles sit somewhere in between – both in terms of cost and resulting accuracy.

As we know, it’s important to have a back-up system to provide a fail-over in case the first one or two fail, or don’t have enough data – that’s the same for navigation as it is for obstacle avoidance or dealing with difficult weather or terrain – but inertial navigation is far from perfect. 

Special thanks to Tiziano Fiorenzani, UAV Design, Guidance and Control Expert, who kindly gave us permission to share his blog on how inertia measurement works.  This is part of a mini series of personal blogs he’s producing about the main sensors that are commonly part of any drone – you can get more updates by following Tiziano on LinkedIn.

What is an IMU?

IMU stands for Inertial Measurement Unit and, as the name suggests, it measures “inertial quantities” such as accelerations and angular velocities. Those quantities can be used directly in an automatic feedback control loop (e.g. the gyro that stabilizes RC helicopters), or we can process the data to estimate the attitude (roll, pitch and yaw, or a quaternion).

What is inside an IMU?

Usually an IMU consists of the following sensors:

  • 3-axis accelerometer: measures the accelerations along its axes
  • 3-axis gyroscope: measures the rotational velocity around its axes
  • 3-axis magnetometer: measures the local magnetic field components along its axes

This setup is made up of 9 sensors (three sensors, each measuring on three axes), so it is generally referred to as a 9-DOF IMU.

Accelerometers

Accelerometers sense all the accelerations applied to them, even those due to vibrations or manoeuvres. Isolation is of primary importance, as is accurate calibration.

There is a simple way to picture how an accelerometer works. Take a scale and put a weight on it, say 1 kg (or 1 lb if you like, but I’ll stick with the International System). The scale will show you 1 kg, because the object’s mass of 1 kg, subject to the gravity acceleration of g = 9.8 m/s^2 (1 g), experiences a force of F = mg = 9.8 N (Newton), that is 1 kg-force. Easy enough. Now, if you grab the scale and suddenly move it upward, you will read that the same object has grown heavier. What does that mean? Well, the scale measures the vertical component of the forces applied to it: calling a the upward acceleration we applied, the same mass m multiplied by a higher acceleration (g + a) results in a higher force F = m(g + a).

The sensor converts the acceleration into a voltage, which is later translated into a binary number that an autopilot can understand.

There are different types of accelerometers, though the most common are Capacitive and Piezoelectric, which basically measure voltage variations due to the sensor’s deformation. Check the references for more details.

When an accelerometer is used as an inclinometer, the gravity vector components measured at rest are used to evaluate the tilt angle.
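A common form of that calculation, sketched here in code (assuming the usual convention that the z-axis points up through the board and that only gravity is acting on the sensor):

import math
def tilt_from_accel(ax, ay, az):
    """Roll and pitch (radians) from gravity components measured at rest."""
    roll = math.atan2(ay, az)
    pitch = math.atan2(-ax, math.hypot(ay, az))
    return roll, pitch
print([round(math.degrees(a)) for a in tilt_from_accel(0.0, 0.0, 9.81)])    # flat: [0, 0]
print([round(math.degrees(a)) for a in tilt_from_accel(-6.94, 0.0, 6.94)])  # pitched ~45 degrees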

You can verify the effectiveness of an inclinometer with your own smartphone: install a spirit-level app and check whether a painting has been hung level on the wall. The reading won’t hold steady if you move your phone, as the inclinometer is affected by the extra accelerations you cause.

Gyroscopes (usually shortened to ‘gyros’)

Gyros measure the rotational velocity around their axis.

Let’s simplify how it works by imagining a tiny mass, connected to a housing by micro springs and forced to oscillate at a constant frequency. Then imagine the housing connected to the frame by transverse springs.

Any rotation of the system will induce a Coriolis force on the mass, pushing it in the direction of the second set of springs (if you are not familiar with the Coriolis force, check the references below; basically, it is a force that appears to act on something accelerating in a non-inertial frame, like the Foucault pendulum).

The displacement is measured by sensors placed between the mass housing and the rigid structure. As the mass is pushed by the Coriolis force, a differential capacitance is detected.

Even if the principle is different, and is not based on the Coriolis force, one way to visualize a rotational velocity is the swing ride (even though, when you kick your partner out, he will experience the Coriolis force…).

The faster the swing spins, the higher you go. That is because of the fictitious force we call centrifugal – in reality it is the chain that keeps you from flying away. Your altitude is basically a measure of the ride’s speed.

Measuring the rotational velocity is of primary importance, as it can be integrated to obtain an estimate of the actual tilt angle and it provides a great signal for feedback control loops. Unfortunately, gyros are noisy and their output at rest drifts with temperature.

Magnetometers

Magnetometers… I hate them! Seriously! But they are the only sensor that can give you your heading. A magnetometer measures the local magnetic field components, and you can compare those values with the World Magnetic Field Model in order to estimate the attitude, and thus the heading with respect to the local magnetic North. Why do I hate them? Because almost everything affects the local magnetic field: electric lines, solar activity, internal wiring, other sensors, transmitters, even the CPU… That is the reason why you want to put your magnetometer as far from any source of interference as possible. Even then, what you measure is your attitude with respect to the local magnetic field: the local declination is added in order to obtain the heading with respect to true North.
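On level ground that boils down to a couple of lines (a sketch which assumes a calibrated, level magnetometer; axis-sign conventions vary between boards, and the declination value is location-specific):

import math
def heading_deg(mx, my, declination_deg=0.0):
    """Heading from the horizontal magnetic field components, corrected to true North."""
    magnetic_heading = math.degrees(math.atan2(my, mx))
    return (magnetic_heading + declination_deg) % 360.0
print(heading_deg(0.2, 0.0))                       # field along +x: 0 deg (magnetic North)
print(heading_deg(0.2, 0.0, declination_deg=1.5))  # shifted by the local declination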

If you want an idea of how easily the magnetic field is affected by external disturbances, use a compass or install a compass app on your phone and try walking around the office, close to the computers or metal objects: you’ll see the needle go nuts.

Miniaturized magnetometers are based on the anisotropic magnetoresistance phenomenon: basically, the material changes its internal electrical resistance when exposed to a magnetic field. Check the references for more details.

Calibration

Sensors would be useless without proper calibration. For accelerometers and magnetometers you would probably do it once in a while, while gyros need to be calibrated every time before starting.

Why calibrate? Because the output signal usually has the form:

measure = scale_factor*signal + bias + noise (m = sf*s + b + n)

The real signal is multiplied by a scale factor and then corrupted by an almost static bias value and a random noise.

While the noise can easily be filtered out, the scale factor and bias are almost constant, so they must be estimated properly.

For the gyros, the bias is easily estimated before flying, while the vehicle is assumed to be still on the ground, so the measured rotational speed is purely the bias component.
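In practice that pre-flight step is often just an average over a second or two of stationary samples; in this sketch, read_gyro() is a hypothetical stand-in for whatever driver your board provides:

import time
def estimate_gyro_bias(read_gyro, n_samples=200, dt=0.01):
    """Average stationary readings to estimate the per-axis bias (rad/s)."""
    sums = [0.0, 0.0, 0.0]
    for _ in range(n_samples):
        sample = read_gyro()  # hypothetical driver call returning (gx, gy, gz)
        sums = [s + v for s, v in zip(sums, sample)]
        time.sleep(dt)
    return [s / n_samples for s in sums]
# afterwards: corrected_rate = measured_rate - bias, per axis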

For accelerometers and magnetometers, the procedure is a little more complex, but basically you want to hold your autopilot at a range of different angles and estimate the best-fitting sphere from the measured quantities.
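That sphere fit can be posed as a linear least-squares problem. The sketch below recovers the centre (the bias) and radius from simulated samples; real calibration routines usually also estimate per-axis scale factors, which turns the sphere into an ellipsoid:

import numpy as np
def fit_sphere(points):
    """Least-squares sphere fit: returns (centre, radius) for an (N, 3) array of samples."""
    A = np.hstack([2 * points, np.ones((len(points), 1))])
    b = (points ** 2).sum(axis=1)
    sol = np.linalg.lstsq(A, b, rcond=None)[0]
    centre = sol[:3]
    return centre, float(np.sqrt(sol[3] + centre @ centre))
# simulated magnetometer samples: a sphere of radius 0.5 offset by a constant bias
dirs = np.random.normal(size=(500, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
samples = np.array([0.3, -0.1, 0.2]) + 0.5 * dirs + np.random.normal(scale=0.01, size=(500, 3))
print(fit_sphere(samples))  # centre ~ (0.3, -0.1, 0.2), radius ~ 0.5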

What have we learned

The world of autopilots surely is intriguing. We have just begun to scratch the surface and we still have a long road ahead. IMUs are as fundamental to robotics and drones as they are to smartphones.

References

  • SensorWiki
  • accelerometers
  • Coriolis force
  • Foucault pendulum
  • gyros
  • Earth Magnetic Model
  • accelerometer calibration
  • magnetometer calibration algorithm

50+ vehicle perception videos

We’ve written about who this project is aimed at, but for people already working in vehicle perception or ADAS (to use the industry’s usual term), what we have to offer through Self Driving Track Days really is too broad.

For those image processing engineers, machine learning developers, data engineers and roboticists already working on advanced driver assistance systems and automotive research and development, we have a sister event: AutoSens.

Running in Europe (Brussels, Belgium) and the US (Detroit, Michigan) every year, the AutoSens vehicle perception conference is the leading conference and exhibition for engineers working in the vehicle perception industry.

The event generally sees around 300 professionals from across the supply chain taking part in dozens of conference sessions on photon-sensor based detection, signal fusion, environment processing, and deployment.

As an industry-only event, cutting edge technologies at component level are demonstrated and discussed – technologies which will be used in production vehicles of the future.  One example of this is the broad understanding that dynamic (i.e. moving, high resolution) LiDAR units are only suitable for R&D, and that the future lies with solid state LiDAR.  This has prompted much questioning in the mainstream media over when these technologies will become available – and yet half a dozen companies were displaying these as finished products at AutoSens in 2016.

What this told us is that there is a significant disconnect between what the industry is really doing and what the media are talking about, and this can only be improved by educating people and of course, the media.  At the tip of the hype cycle, there is a great hunger for news and not necessarily a great deal of background research or context for the innovation or idea.

Short of meeting every person face to face (a near-impossible task, despite our efforts in the UK with countless networking events) or reading every article, the best way to do this is to make useful information available to everyone, for free.

So, we have uploaded every conference session from the event to AutoSens on YouTube, more than 50 presentations from car manufacturers and suppliers, universities, engineers, technologists, researchers, AI specialists and more – so everyone can have that knowledge for free.

Explore the videos and playlists on the AutoSens video portal.

Interview: Nicolas Du Lac, CEO of Intempora

Nicolas Du Lac, CEO of Intempora

As part of the Self Driving Track Days project, we are keen to bring real industry experience out from behind closed doors and make it available to the next generation of potential engineers and innovators in the driverless technology sector.

Nicolas du Lac is the CEO of Intempora, which builds multisensor software solutions.  He graduated in 2000 from the renowned French engineering school Mines-ParisTech with a major in robotics. Quickly recognized for his skills in developing software and algorithms, he worked for a few months as a software engineer at the CEA (the widely respected French Alternative Energies and Atomic Energy Commission) before joining Intempora in 2001.

The company had just been created, based on the RTMaps technology (Real-Time Multisensor Applications) originally developed at the Center for Robotics at Mines-ParisTech.

As the company’s Chief Technology Officer from 2002, he led the development of multi-sensor software solutions and participated in numerous research projects, such as the DARPA Urban Challenge (Dotmobil team) with the INRIA research institute in 2007.  On the strength of his work and involvement in the company, he was appointed Chief Executive Officer of Intempora in June 2012.

Today, he manages both the company’s strategy and its embedded software development for ADAS and automated driving.

Tell us about the company – where are you based and what do you do… We understand you have a long background in autonomous software development?

Intempora was created in 2000 around the RTMaps software framework, whose core technology had been in development since 1998 at the Center for Robotics at Mines-ParisTech. Researchers there were already working on computer vision and data fusion algorithms for perception applications in ADAS and autonomous vehicle prototypes.

At that time, we were working on Pentium computers, with PAL analogue cameras and the first samples of Lidar and radar sensors.

Now the market is booming and we have accumulated more than 15 years of experience in developing software solutions. This long experience allows us to offer a mature, easy-to-use solution with unprecedented execution performance for testing and developing complex algorithms, ADAS applications and automated driving systems.

What about Artificial intelligence?

There is much hope around artificial intelligence, and there are encouraging results. However, it is still unclear how to integrate such algorithms into autonomous vehicles and put them on the road, particularly when it comes to validation and certification against the ISO 26262 standard.

At Intempora we are working on solutions with partners such as dSPACE to provide a testing and validation toolchain that allows high volumes of recorded sensor data to be managed efficiently on large-scale computing architectures.

This year started with lots of activity at CES – you were there – what were the highlights?

It is fun to see how the autonomous vehicle topic is now taking over the generic Consumer Electronics Show (alongside the thousands of IoT startups which were there as well). Most of our customers and partners were there and presented really nice demonstrations on so many topics (new sensors at LeddarTech, new perception systems at VALEO and NAVYA, driver HMIs at Visteon, simulators at ESI-Group, communication, safety and security, etc.). It was also interesting to see how companies like NVIDIA can present their GPU technologies used both for gaming and for machine learning for automated driving.

RTMaps is your main product for managing sensor fusion, what other systems does it work with?

RTMaps (Real-Time Multisensor Applications) is not only a sensor fusion tool; it really helps developers build multimodal applications faster and more efficiently (advanced HMIs, driver monitoring, communication systems and so on). It is a generic software integration tool with hardly any overhead in terms of CPU consumption and a huge set of off-the-shelf software components that can be integrated in a few clicks. It allows your team (engineers, researchers, students) to focus on advanced algorithms and on managing multiple high-bandwidth data streams from various sensors (cameras, LiDARs, radars, CAN bus, GPS, eye trackers, biometrics, etc.), so you can easily reduce costs and development cycles for your algorithms.

We have recently opened the RTMaps Components Store and the accompanying partnership programme to go one step further and offer a way for algorithm and function providers to deliver their code in the form of RTMaps components, making their integration and testing in customer applications much easier. Anyone interested in joining the ecosystem as a technical partner is welcome.

Do you work with Car brands (OEMs) or always with Tier 1 and other Tier 2 companies?

We definitely work with OEMs, Tier 1 and Tier 2 companies, but also with universities, education and research institutes. We work more and more outside Europe, with customers in the US and in Asia. This has been boosted by the recent partnership we established with dSPACE, for instance.

This whole ecosystem is coherent, and everyone benefits from efficient software tooling, not only for their internal developments but also for establishing partnerships and exchanging software modules and datasets.

Of course these companies have different activities and use our software solutions in different manners (from sensors and systems testing and benchmarking up to real development of advanced embedded systems).

This is a great opportunity for us to adapt our solutions constantly to provide efficiency and ease-of-use in different environments and of course a better user experience.

Note that RTMaps is also used in totally different domains, such as racing sailing boats: half of the vessels racing in the Vendée Globe 2017 were equipped with RTMaps!

When do you think we will see driverless vehicles regularly on the road?

Everything depends on what we call “driverless vehicles”. We are still at the beginning of the release of eyes-off automated vehicles (vehicles which can follow a lane, or reproduce a well-known path while avoiding obstacles). Most demonstrations take place on highways in the US, but vehicles which can safely drive in unpredictable scenarios and adapt their behaviour to unknown situations (weather, urban traffic) while maintaining reasonable speeds are still a long way off. A lot of effort is still needed before we will be able to sleep quietly in autonomous cars.

What are your plans for 2017?

It will be another very exciting year for us, with the planned release of a major version of RTMaps embedded and a great new product dedicated to ADAS and automated driving from our partner dSPACE. The company, the business and the team are growing, and the technology is improving rapidly. We have recently reorganized our support service to make it more efficient. We will continue to work hard to offer a better user experience to our customers; we care about listening to them and enhancing our solutions.

We have already planned many events as well. We will be at the Autonomous Vehicles Technology World Expo in Stuttgart in June, possibly at AutoSens Brussels in September, and at Tech.AD Detroit in November.
We are also working on great projects around the validation of machine learning systems, such as Cloud LSVA with IBM, Intel, TASS, TomTom, Valeo, Ertico, CEA, Vicomtech and some European universities.

Why is Intempora supporting Self Driving Track Days?

Self Driving Track Days is a great opportunity for us to show our solutions during workshops and demonstrations.
We have a 15+ year history in these application domains; our solutions have been embedded in robotics and semi-autonomous vehicles since 2002.

Right now, it’s important for us to offer our expertise and share views with the many engineers and researchers in the domain. This series allows us to meet engineers, researchers and students directly.

Finally – what advice would you give people interested in having a career in vehicle perception and driverless technology?

Have fun and put your passion at the service of people. AI and software for autonomous vehicles are building the future of transportation. For sure, transportation and mobility will be the next revolution.

The competition is hard but necessary to offer the best for everyone. I can’t wait to see what the next decade will look like.

See you at SDTD in France, England and Austria!

How would you tackle the Driverless / ADAS skills deficit?

There’s a recurring theme within the technology ecosystem that underpins driverless cars and their widespread precursor, ADAS.

Irrespective of whether a company is a disruptive VC-backed force, an established car manufacturer or one of their suppliers (OEMs, Tier 1s and Tier 2s, in auto industry parlance), there is a single problem that none of them can resolve alone.

That problem is the skills deficit.

This challenge is three-fold.

  1. There are amazing universities conducting exciting research, but this is comparatively slow moving, taking years to reach a conclusion and even longer for any resulting technology to become commercially available. Takeaway: New technologies and their creators take too long to enter the market.
  2. Universities which have developed supporting courses to explore and exploit this organisational expertise are slowly churning out brilliant graduates with very little experience that is applicable to prospective employers. Takeaway: The technology is too expensive for all graduates to get hands-on experience with during their undergraduate degree.
  3. And for everyone else, whether they are universities trying to break into the sector with new courses, or employers seeking to grow the capability or breadth of education within their workforce, the speed at which they are able to launch new courses or expand is far slower than the industry needs. Takeaway: Universities are not built to move as fast as a new technology once it nears and enters the market.

Let’s say that the number of universities offering relevant courses increases every year, and each churns out 30 capable graduates three years later; that does nothing to keep up with the speed of growth in the sector. Likewise, even if the number of courses doubled tomorrow, those students would have very little practical experience and thus would not be able to contribute to the available talent pool for several more years. The same can be said for experienced professionals, who are even more in demand and currently the most mobile of employees – experienced professionals I have spoken with regularly skip between continents to change employers.

This is building up to a perfect storm, and could be the greatest stalling point in the development and widespread adoption of driverless vehicles. The falling line on the graph at the top shows a steadily increasing imbalance between industry needs and available employees.

Half of the startups I have seen in the sector are born from university research projects, whether they remain in-house to be incubated or belong to academics or students who decided to “go it alone” – it’s a fertile place for new ideas and technologies.

Even established military technology companies are having to shift how their business paradigm interacts with the world of new technology. I recently ran a networking meetup to bring more people into the fold for driverless technology, and an internationally recognizable military technology brand was in the second row, looking for new ideas. No longer the domain of billion-dollar industry alone, new ideas and experts in new fields are highly sought after, and nobody knows where they will appear – not even companies making spy satellites.

So who should tackle this problem within the ‘intelligent mobility’ sector, and how? We know that no single car manufacturer or supply chain company could, irrespective of their size – it’s not what they are set up to do. They rely on the education system to do that.

But what they can do is coalesce around projects and platforms that might, at the very least, add practical experience. It’s in their best interests, because in five years’ time there will be a massive shortfall in employable talent, and they will be the ones who feel it the most.

We need more good training courses, delivered by a greater number of capable universities. Udacity’s online training has proved popular and has huge capacity, and many employers are on board, supporting its content and recognising it as a viable feature on a potential employee’s CV. But like any training or qualification – and I cannot stress this enough to graduates or prospective employees in any sector – it is practical experience that validates its worth.

There are now more than a dozen driverless technology competitions globally, from 1/24th scale up to full-size platooning competitions, many of them in North America and a handful in Europe, and these are a great additional training ground in which to validate that training.

But how many large or international competitions are there that are validated and designed around countering those shortfalls and skills deficits? At the moment, the largest has a few dozen participants, nothing like the scale or ambition of equivalent competitions in, say, student motorsport. Formula Student and Formula SAE boast some 500 university teams worldwide, but only 18 or so of those teams are working in the new Driverless class, running in Germany this year.

We created Self Driving Track Days to help lower the barrier to building this experience, whether universities or companies see it as an opportunity for testing vehicles, or for finding out whether a non-automotive company might have something to offer from its expertise in video graphics, AI, big data, robotics, mapping, software engineering or some other area.

These events allow experts from around the industry to engage with students or companies outside the traditional supply chain and bring them into the fold: advice, experience and exposure to the technology, all working towards the longer-term aim of getting more people interested, informed and involved.

We are far from finished. We know that Self Driving Track Days is, and will continue to be, a product in development. We are changing it, developing it, adding to it, but we felt we needed to do something to make a difference. We are not VC backed, we don’t have shareholders, but we are very proud to work with people across the industry that we admire, and likewise, receive their support for what we do, whether that is this project, AutoSens, or the other activities we’re planning over the next couple of years.

We’ll be working with established players on different continents, through informal and formal partnerships, to make exciting, worthwhile and valuable things happen that benefit the industry – and we are far from finished; in fact, the SDTD project is only four and a half months old. For us, it’s very early days, but I hope that you will be part of the journey.

We might not always get it right, but we will keep going until you think we have.

So now it’s your turn. What are you doing?

Infographic – The surprisingly long history of Driverless Vehicles

The lovely Daniel Dixon at Get Off Road, a UK-based Land Rover parts specialist, got in touch recently and asked if we’d be interested in publishing an infographic he and his team have been working on.  “Of course“, we said, “send it over!“… which they did… and here it is!  You can also download a printable PDF (helpfully split into A4 pages) which weighs in at 15MB.

The infographic shows some highlights of the surprisingly long and potted history of driverless cars, from Alphabet’s Waymo (also known as the Google driverless car project) back in time to GM’s experimental ‘Futurama’ exhibit of 1939, with a healthy dose of DARPA’s Grand Challenge thrown in as well.

If you think this is interesting, you can catch up with the latest news at our free monthly networking meetups (upcoming meetups are listed and bookable on the menu), which take place around the UK and occasionally elsewhere in Europe.  Join the meetup group for free or join our monthly newsletter mailing list (form on the right!) to get updates on our events (including meetups), blogs and other activities.
