2017 UK training event review

Our second UK training workshop, held at Daytona Karting in Esher, Surrey – close to central London – was a success, bringing together dozens of companies to explore driverless vehicle technologies. Our popular “Introduction to driverless vehicle technology” workshop ran twice.

Speakers

Tim Swanson, from AutonomouStuff, described the company’s offerings in R&D research platforms, as well as additional engineering services relating to the four cores of autonomous vehicle construction. Tim also ran through data management (a service offered in conjunction with Quantum) and sensor suite approaches.

Kevin Doherty, from NovAtel, explored GPS and GNSS technologies, highlighting system approaches, accuracy issues and possible solutions (including RTK and inertial measurement).

Eric Prineau, from LeddarTech, described the principles of LiDAR – the primary sensing system used in autonomous vehicles – and how solid-state units will enable lower unit costs and simpler data collection for production vehicles. Philippe LeBon, from Intempora, explored the company’s sensor fusion and categorisation system, RTMaps, describing how data from multiple sources can be recorded, stored and manipulated in a modular fashion in R&D vehicles.

Anish Mohammed, security advisor for Privacy Shell, hosted a short talk on driverless vehicle security after lunch, before SDTD co-founder Alex Lawrence-Berkeley gave an update on available UK funding for connected and autonomous vehicles, a round-robin of useful online resources for further learning and standards, future events, and the Formula RoboKart project – a vocational ADAS/autonomous vehicle competition in the early stages of planning.

Attendees and demos

Attendees came from across the automotive industry, as well as academia and independents, including Jaguar Land Rover, Intel, ADIADA, Ricardo, Huawei, PTL Engines, NCC Group, Bestmile, Parkopedia, Vianty, Autocar Magazine, Qualcomm, Nova Modus, Street Drone, SoftKinetic, Samsung, APTCore, OXTS, Veripos, Mipd, plus Surrey, Loughborough, DeMontfort and Nottingham Universities.

Ride-along track demos were delivered by DataSpeed and Anthony Best Dynamics, respectively showing off their R&D and test platforms to attendees, and on Monday we were also joined by the team from Formula Pi.

Each workshop day concluded with a panel Q&A session, which teased out some thorny and often stimulating ethical, technical and legal questions.

World exclusive talk and technology demo

Dr Torquil Ross-Martin, founder of AutoRD, gave a very well-received presentation on his company’s driverless motorbike, based on a BMW C1 – the first time the prototype system had been seen in public, making it a world exclusive. Describing the development and prototype engineering processes and systems in use – many of which he and his team had custom developed – he left attendees startled by the remarkable achievement of creating a two-wheeled vehicle that can start, stop, steer and drive without human intervention, gyros or additional wheels.

The AutoRD team used the track for testing throughout the event, and were able to perform on-track demos while engineers answered questions from the audience assembled on a nearby viewing platform.

Monday night fun

The busiest day (Monday) ended with a one-hour endurance karting race, fought by 14 two-person teams (team-mates were randomly allocated) made up from sponsors, speakers, workshop attendees and members of the Self Driving Track Days team.

A burst of rain during the warm-up meant the first dozen laps were fraught with spins and accidents, but the shower passed, the track dried, and the race soon settled into a thrilling battle of driving skill – with speeds reaching 65mph and top teams completing more than 60 laps!

Gregory Epps (React AI) and Moe Hashimi came in third, Steve Grzebyk (DataSpeed) and Alex Lawrence-Berkeley (Self Driving Track Days) came in second, and Daniel Eastwood (PTL Engines) and Ian Hailey (Vianty) finished top, a full lap ahead of the second-place team. The fourth, fifth and sixth place teams were separated by only a few seconds after more than 60 laps – an impressive level of driving prowess from a group of very inexperienced racers!

Following the race, our hosts at Daytona fired up the BBQ and everyone enjoyed a delicious alfresco dinner.

Future training in Europe and the UK

The final 2017 workshop for Self Driving Track Days will take place near Vienna, Austria, and will provide at least five brand new half-day training workshops, on-track testing time and driverless technology demonstrations.

AutoSens – our professional vehicle perception and ADAS engineering conference – will take place in Detroit in May and Brussels in September. Both conferences are well-attended technical events aimed at established engineers in the ADAS and vehicle perception sectors.

Self Driving Track Days will return to the UK in 2018, so please join the mailing list if you’d like to be kept informed.

An exciting new funding bid on an entirely different scale

Arduino-computer power, with basic computer vision and a sub £50 price

We talk a lot about the importance of understanding the technologies used in autonomous vehicles, and regularly highlight the opportunities that model cars present, such as demonstrators running on Elektrobit Robinos, Raspberry Pi powered Formula Pi robo-cars, or smaller models using the Arduino processor platform.

In fact, we are very vocal about their value as a training platform for the next generation of ADAS and autonomous vehicle engineers.

Where else can you develop your own driverless car using software and hardware tools available on any budget, then take the same algorithms and machine learning from a toy on your desk to a car on the road with very little additional work?

Formula Pi autonomous cars – a valuable training platform at under £200 per car.

And so, after much preparation, we are ready to announce a very special partnership that addresses the UK government’s most pressing investment programme: the upcoming testing infrastructure competition facilitated by Innovate UK and KTN – the Knowledge Transfer Network.

We have a very important alternative view, which we feel represents good value for money to the taxpayer, as well as a realistic approach.

On Tuesday of this week, I attended a special preview event, meeting with the head of innovation at Innovate UK, as well as members of the team from the government policy unit, the Centre for Connected and Autonomous Vehicles, and the team from the newly formed ‘CAV Hub’.

We recently interviewed Ran Zhao, creator of this Raspberry Pi powered car running computer vision for clear-path navigation and road sign reading capability.

A few days later, on 30 March 2017, Business Secretary Greg Clark launched the first competition to access funding from the Government’s £100m investment programme supporting the creation of test facilities for connected and autonomous vehicles, at the Society of Motor Manufacturers and Traders Connected Conference in London.

The programme, which is being match-funded by industry to take the total spend to £200m over four years, is being launched as a grant through a series of funding competitions. The first competition will allow bids for an initial share of £55m of the test bed funding.

The Big Bid

The highly controlled road environment is privately owned and ideal for autonomous vehicles.

Working with our good friends at Babbacombe Model Village, whose enclosed site has grown into a comprehensive range of built infrastructure, we plan to put in a bid for approximately one twelfth of the competition category value, since the Village is built at 1/12th scale.

Construction work begins for a specialist testing facility for 1/12th scale cars

By doing this, we will deliver exceptional value for money to taxpayers, as well as provide world-class training and evaluation facilities for autonomous vehicles and the engineers that develop them.

The location has a variety of road infrastructure projects underway, connections with scale-model transport hubs, all-weather operation, street furniture and signage to DfT specification, and on-site support facilities including design and build workshops, electricians and office space to let – plus a very positive public opinion of the facility, thanks to its long history as a family-friendly tourist attraction.

Local residents are watching the plans with interest

In fact, we can think of no better venue to host exciting new businesses looking to scale-up.

After five decades of development, the facilities are world class. All the roads and transport hub areas are privately owned, so they are easy to close for small-scale experiments. As with all test sites, the nature of the venue means that occasionally wild animals do venture onto the roads, so engineers are encouraged to be present near cars at all times.

Read the full press release.

Devon tourist attraction aims to build driverless vehicle test facility – in miniature

April 1st 2017.

Babbacombe Model Village (www.model-village.co.uk), in Torbay, Devon, has joined forces with a driverless technology training startup in an innovative bid for grant funding from the UK Government.

The funding will be used to upgrade the road infrastructure at Babbacombe Model Village, to provide facilities to test driverless vehicle technology, as well as provide charging facilities for companies also developing small low-carbon vehicles.

“The biggest benefit”, says General Manager Simon Wills, “is that we can offer everything that the big test facilities can offer, but at 1/12th of the cost… that’s great for all the companies working on the technology, but it’s also great news for the taxpayer, and could really help us attract business from overseas.”

International appeal

The Model Village, which has stood for more than half a century, boasts more than 400 buildings, and includes village and city settings, shopping areas and a variety of types of public transport in a small space.

Alex Lawrence-Berkeley, of driverless technology training company Self Driving Track Days (www.selfdrivingtrackdays.com), said he was excited about the project’s potential. “What most people across the industry recognise is that developing, and more importantly testing, driverless vehicle technology is really complicated.”

“We regularly work with companies developing technologies used in driverless cars around the world through our vehicle perception training and events, and those companies are crying out for a lower-cost approach. The great thing about a lot of this technology is a considerable percentage of it can be developed and tested on scale-model vehicles.”

“UK government is investing heavily in this area, so we are really pleased to be able to help the taxpayer achieve the best possible value for money by taking a fresh approach.”

“We are excited to be working with Simon and his team on this project and welcome feedback from companies in the supply chain on how we can work together,” concluded Alex.

“We were really pleased when we finally identified a partner that understood what we were trying to achieve”, said Babbacombe’s General Manager Simon Wills, “and the team at Self Driving Track Days has great connections across the driverless technology industry globally, a really great community of supporters in the UK, as well as the experience in going through the fund application process.”

Technology applications in the built environment

Babbacombe Model Village already boasts a variety of environments which would cost tens of millions of pounds to replicate at full scale, and has a range of support facilities which make it ideal for new startups, notably small companies looking to scale up.

“We have office space and workshops already in place to complement the planned road testing environment and unique infrastructure needed for this sort of project, alongside full accessibility support, catering, on-site design and manufacturing facilities, and a 4D theatre to provide support for the virtual testing that will also be part of the project,” said Wills.

Tourist attraction continues

Babbacombe would continue to operate as a tourist attraction if the funding bid was successful, and provide its facilities outside the main tourist season – when vehicle testing is at its peak.

Until then, it’s business as usual. “We are gearing up for a busy Easter holiday period right now, so will be looking forward to the first big crowds of the year, “ said Wills, “we’d be delighted to hear what our visitors have to say about this future project.”

Further information about the project can be found at www.selfdrivingtrackdays.com or www.model-village.co.uk

[ends]

Notes for editors:

Babbacombe Model Village (www.model-village.co.uk) is a tourist attraction located in Torbay, Devon. Established in 1963 and open for 54 years, the village portrays English life and culture over the last six decades. It boasts 413 buildings set in 4 acres of gardens and has an estimated population of 13,160.

Self Driving Track Days is run by Sense Media Group (www.sensemedia-events.com), a Surrey-based startup specialising in vehicle perception events and training.  Its event series include Self Driving Track Days (www.selfdrivingtrackdays.com), the world’s first events providing test track facilities for driverless vehicle technologies, specialist training, meetups on driverless and autonomous vehicles around the UK and AutoSens (www.auto-sens.com), the world’s leading vehicle perception conference.

The CAV Testing Infrastructure fund being applied for was announced in the Chancellor’s Autumn Budget in 2016 and will be delivered by Innovate UK (www.gov.uk/government/organisations/innovate-uk), with policy and scope guidance from UK Gov’s Centre for Connected and Autonomous Vehicles (www.gov.uk/government/collections/driverless-vehicles-connected-and-autonomous-technologies), and was publicly launched with an industry briefing event on March 28th 2017.

Further enquiries

Media enquiries can be directed to:

Alex Lawrence-Berkeley: [email protected] Mob: 07757 647199

Simon Wills:  [email protected]

Filming at Babbacombe can be arranged with Simon for Friday 31st March. Interviews can take place in Devon (with Simon) or London (with Alex), or remotely with either via Skype, etc.

Driverless cars for small change?

External sensor suite and processing: Raspberry Pi, power bank, Pi camera and Photon Cannon… or torch.

After stumbling across a YouTube video of a toy car made into a driverless vehicle, I contacted the creator, Ran Zhao, to ask him a little more about how he did it…

Ran built the car and developed a small convolutional neural network running on a Raspberry Pi computer, trained using about 200 labelled grey-scale images.

Road sign recognition working in poor light conditions at only 10fps
Path following, showing footage from the on-board Raspberry Pi camera in the top-right.

“The data has been extended so the network can handle never-before-seen road conditions and even drive in the dark. I got slightly less than 10 fps with both neural network and traffic sign classifier running at the same time.”
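To give a feel for what a network of that size looks like, here is a minimal sketch of a tiny convolutional network in TensorFlow/Keras for small grey-scale frames. The layer sizes, input resolution and number of output classes are illustrative assumptions, not Ran’s actual design.

```python
# Minimal sketch: a tiny CNN for small grey-scale frames (illustrative only;
# the layer sizes, input resolution and classes are assumptions, not Ran's design).
import tensorflow as tf

NUM_CLASSES = 3  # e.g. steer left / straight / right (assumed labels)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(64, 64, 1)),          # 64x64 grey-scale frame (assumed size)
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# With only ~200 labelled images, heavy data augmentation (shifts, brightness
# changes) is what lets a network this small generalise to unseen conditions.
model.summary()
```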

The video contains four scene types: following two variations of a marked path – in daylight and again using only torchlight – and stop sign recognition, also in both daylight and torchlight.

“The play speed of the first scene is adjusted to be 3% faster just to synchronize the two videos. The rest scenes are of normal play speed.”

Do you have any training in this area, or is it part of your job?

“I just did it for fun. This is not exactly what I do for a living. My training is mostly done on YouTube and Google search.”

Tell us about the hardware and software…

Under the hood: Interfacing with the steering actuator and motor control

“The hardware is very simple, a toy car, a Raspberry Pi, a camera, a DC motor drive, a USB power bank and a 9v battery.  Tensorflow and OpenCV are the most popular tools for deep learning and computer vision. The alternatives would be software like Caffe or Theano.”
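As a rough illustration of how those pieces could fit together, the sketch below shows a capture–classify–actuate loop using OpenCV for the camera and PWM on the Pi’s GPIO pins for the motor driver board. The pin number, frame size, duty cycles and the placeholder classifier are assumptions for illustration, not details of Ran’s build.

```python
# Illustrative capture -> classify -> actuate loop for a Pi-based toy car.
# Pin numbers, frame size and duty cycles are assumptions, not Ran's wiring.
import cv2
import RPi.GPIO as GPIO

def detect_stop_sign(grey_frame):
    """Placeholder for the classifier step (hypothetical helper; returns bool)."""
    return False

THROTTLE_PIN = 18            # assumed GPIO pin driving the DC motor board
GPIO.setmode(GPIO.BCM)
GPIO.setup(THROTTLE_PIN, GPIO.OUT)
throttle = GPIO.PWM(THROTTLE_PIN, 100)   # 100 Hz PWM
throttle.start(0)

cap = cv2.VideoCapture(0)    # Pi camera exposed as a video device

try:
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        grey = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        small = cv2.resize(grey, (64, 64))          # match the network input size
        throttle.ChangeDutyCycle(0 if detect_stop_sign(small) else 40)
finally:
    throttle.stop()
    cap.release()
    GPIO.cleanup()
```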

You’ve managed to keep the cost low for the project, how many hours did it take?

“The Pi and Pi camera cost about $45, the car was $10 and the USB power bank was about $8, so the overall cost is more like $80.

Road sign recognition in daylight

I built everything including hardware modifications and software coding in about three weekends. Most codes are from my old projects and I just reassembled and slightly optimized them so they didn’t overload the Pi.”

Are you planning a Mark 2?

“I am actually planning on Mark 2 which would be a self-driving and obstacle avoiding vehicle using deep reinforcement learning.”

What advice would you give other people learning about this technology?

“My advice to people who are interested in this technology would be ‘get your hands dirty’. The whole ‘deep learning’ may have been a mystery to many people but they are actually incredibly simple. From my perspective, the only way to learn and master would be to practice because that is what deep learning is all about. ”

Watch the video

Free driverless vehicle test track facilities

Driverless vehicle test track

We are taking over a large part of OAMTC Zentrum Teesdorf, pictured above, a large driver training facility near Vienna.

Test track features

We will have three outdoor test track areas: a 200m acceleration/deceleration area, a small autocross circuit (about 1km long, with a variety of features including a chicane, slalom and straights), and a figure-of-eight skidpad, all laid out to SAE / Formula Student specifications (most areas will be marked with cones).

All areas will be marshalled (as indicated by red triangles).

This will be free to use for any organisations (including Students / Universities) with autonomous or driverless vehicles to test.

Vehicles must have a back-up driver on board whenever they are in use.

Please note that this activity is not endorsed or supported by SAE or its affiliated organisations in any way.

FSD team registration →

What is LiDAR and why is it so important to driverless cars?

If you’re an enthusiastic consumer of news about driverless cars, you will know that the almost universal hardware seen on most research and development vehicles is something called LiDAR.  But what is LiDAR and why is it so important?

Definition

LiDAR stands for light detection and ranging. It’s a name that hides what LiDAR really is, so let’s walk through those initials.

Li = Light. What most people don’t realise is that this light is infrared, invisible to the naked human eye, and emitted from the unit in a narrow laser beam.

D = Detection. The unit looks for signals bouncing back from objects nearby, typically from 1m up to perhaps 100m away.

A = and…

R = Ranging. Understanding the distance to objects in that 1m–100m bracket is really important, and this information is more or less detailed depending on the resolution of the unit and the range.
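The ranging itself comes down to timing: the unit measures how long a pulse takes to return and halves the round trip. A minimal sketch of that calculation, with illustrative numbers:

```python
# Time-of-flight ranging: distance = (speed of light * round-trip time) / 2.
SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def range_from_time_of_flight(round_trip_seconds):
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# A return after ~667 nanoseconds corresponds to an object roughly 100 m away.
print(range_from_time_of_flight(667e-9))  # ~100.0 (metres)
```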

Uses of LiDAR

LiDAR’s first and most established use has been, and is likely to remain, surveying, whether from the ground or the air (mounted on planes or small drones).

Applications include urban planning, countryside management, livestock counting, heritage and conservation, post disaster damage assessment, space (early versions of LiDAR were used to map the moon), even archaeology of sites to discover lost and buried buildings.  One very smart use is in forestry, where LiDAR can be used to measure the volume of unharvested timber on a hillside to negotiate a sale price before a chainsaw is even started!

It’s highly effective at creating a three-dimensional map of an environment, whether man-made, natural or in between, and updating that 3D picture several times a second. Human vision performs the same function, but achieves it in a different way – your brain calculates the difference between what your left eye and right eye see to understand the distance and relationship between you and those objects in 3D space.

In-car view from a recent demo using a Velodyne LiDAR, showing various features & problems with LiDAR

Weaknesses

As with any system which relies on receiving a signal travelling in a straight line, LiDAR has one important vulnerability. It can’t see behind solid objects. That sounds obvious, but if your car has a single LiDAR unit and a cyclist pulls alongside the car, a comparatively large area is hidden from view. If a bird were to land on your car roof next to the unit, you could temporarily lose half your car’s useful LiDAR data!

Interference

Atmospheric conditions and other LiDAR units could, in theory, interfere with infrared laser signals. In practice, however, both of these can be filtered out by software within the system, which is programmed to respond to a very slim frequency of infrared light returning in a straight line to the unit. In the exceptionally unusual event of a pulse clashing with a signal from another vehicle’s unit, the LiDAR can change frequency and carry on as normal within a tiny fraction of a second.

Cost

LeddarVu: a solid state LiDAR unit made by LeddarTech

Much is made of the cost of LiDAR units. High-resolution units can cost upward of £10,000. These are reliable and robust, and provide very high resolution data in a 360-degree circle around the car (if mounted on the rooftop, as is often seen on research vehicles).

Google have developed their own system and brought their unit cost down to around £5000, still providing high resolution data.

That cost per unit is far too high to be practical in a production car, so car manufacturers will turn to ‘solid state LiDAR’. This is typically a lower resolution*, easier to manufacture and far cheaper alternative, with a narrower field of view. The key benefit of this approach is that individual units cost around 95% less – between £200 and £500 currently – and prices will fall further in the next couple of years, to around £50.

[*Much higher resolution solid-state LiDAR units are in development in 2017, which will generate more data points and higher resolution than moving units, so it won’t be long until solid state becomes dominant.]

Resolution

Just like cameras, LiDAR units need good resolution – that is, enough data mapped in 3D, in enough detail, for software to interpret what objects are.

This is easy for us humans, but drop that resolution to a fly’s eye and it gets very difficult. Because the infrared laser beams are emitted in a widening fan from the source, the resolution close by is great (enough to see facial contours and fingers), but far away it’s dreadful (is that a lamp-post or a pedestrian?).
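To put rough numbers on that, the gap between neighbouring beams grows in proportion to distance. A quick sketch, using an assumed angular resolution rather than any particular unit’s specification:

```python
# Spacing between neighbouring LiDAR returns grows with distance:
# spacing ~= distance * angular_resolution (in radians).
import math

ANGULAR_RESOLUTION_DEG = 0.4   # assumed horizontal resolution, not a specific product

def point_spacing(distance_m):
    return distance_m * math.radians(ANGULAR_RESOLUTION_DEG)

print(point_spacing(2))    # ~0.014 m: fine detail at 2 m
print(point_spacing(100))  # ~0.70 m: a pedestrian may only catch one or two beams at 100 m
```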

That, among the other issues cited above, is why LiDAR is just one of several types of sensor deployed in driverless cars.

The beams sent out from the unit bounce back when they hit objects and, after being mapped into 3D space, generate a ‘point cloud’. This is a complex group of points (depending on resolution, sometimes several million points created every second) which must be interpreted, usually into polygons. A polygon is a simpler shape to process: like the difference between a chocolate box and a wardrobe, its size, shape and distance make it easy for a computer to interpret and categorise.
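The first step in that interpretation is simply converting each return (rotation angle, elevation angle and range) into x, y, z coordinates. A minimal sketch of that conversion, before any clustering or polygon fitting:

```python
# Convert raw LiDAR returns (azimuth, elevation, range) into x/y/z points.
# This is the generic spherical-to-Cartesian step, not any vendor's data format.
import math

def return_to_point(azimuth_deg, elevation_deg, range_m):
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = range_m * math.cos(el) * math.cos(az)   # forward
    y = range_m * math.cos(el) * math.sin(az)   # left
    z = range_m * math.sin(el)                  # up
    return (x, y, z)

# A few returns become a tiny point cloud; real units produce hundreds of
# thousands of such points per second.
cloud = [return_to_point(az, 1.0, 20.0) for az in range(0, 360, 45)]
print(cloud[:2])
```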

LiDAR or not LiDAR?

One debate that rages is whether or not LiDAR really is essential to driverless cars or other autonomous vehicles. Tesla, for example, think not (though their sister company SpaceX does use it on its rockets), as do a few other small suppliers, who choose to focus on cameras as the main ‘eye’ for self-driving vehicles.

On one hand, that allows them to focus their efforts on other sensors (usually cameras, but also perhaps Radar as well) which are far more established, cheaper to buy, and have many more professionals in the talent pool available to do the work.

On the other, this means that rather than developing their software algorithms to interpret a rich 360 degree field of 3D data (which might be easier) they are restricting the robustness of ‘validation data’ from an additional sensor.

The debate will continue for at least another decade, until we have enough variety of driverless vehicles to understand the safety differences between the two approaches in real-world production vehicles.

In the short term, it’s not something you need to worry about… unless you’re building a driverless car, of course.

Autonomy vs Petrolheads

Hosting many events, we are fortunate to attract some really interesting people, whether they are already established in the driverless technology sector or moving towards it.

This guest blog is provided (with our gratitude) by Basileios Mavroudakis – a vehicle system dynamics (VSD) and controls expert with a broad background in simulation techniques (Basil holds a PhD on simulation in F1). Basil recently worked for supercar manufacturer McLaren on a top secret project…


Recently, Roborace made history by running two autonomous racing cars in Buenos Aires. According to reports, one devbot crashed but the other managed a respectably quick lap, clocking 186 kph (see the team’s tweets for updates). The racing incident is not a bad thing at all; rather, it is proof that Roborace’s visionaries really push the (performance) envelope.

A few months ago, Forbes published a very interesting story on VW’s autonomous race instructor, a performance oriented ADAS made feasible thanks to the advances in technologies similar to the ones developed by Roborace and, of course, Audi (the pioneering RS7 being the first autonomous car to decently compete against a professional racing driver), while Ford has recently filed a patent for an ‘Autonomous Racecar mode’. Meanwhile, the academic research community has been in the forefront of such developments, from Stanford’s pioneering Shelley to the announced autonomous Formula Student Germany for 2017.

Of course, an autonomous controller’s ability to maintain control of the vehicle at the limits of the performance envelope and beyond is the key enabler for massive gains in active safety. However, there is also a pattern in the aforementioned schemes that is more relevant to us petrolheads: the provision for performance-oriented operating conditions.

People who know me know that I love driving. I enjoy a safe, sane and spirited drive when the road conditions permit, with a responsive four or two wheeled vehicle. And I relish the pleasure of exploiting the vehicle’s limits in a controlled hence safe environment (i.e. track days). Do I fear that the rise of robotic cars will deprive me of the joy of driving? Au contraire! I am very confident that autonomy will relieve me of the mundane task of driving under unpleasant circumstances and also be the enabler for a number of technologies that will retain or enhance the pleasure while driving under certain others.

The dynamics of high performance vehicles have been my core scientific interest ever since I was a student. The optimisation of their performance under all conditions has been the prime focus during most of my professional life and is yet to bore me (I doubt it will ever) while the human factors aspects of the driver-vehicle interaction, often cited as subjective attributes, have dominated my free time since I learned how to drive and ride.

Fig 1. An autonomous drift controller’s function developed by the author

During my 15 years of engineering involvement and almost 25 on/in the driving seat I have introduced, developed and experienced a number of performance enhancing technologies; some solely focused on going faster around the track, others on providing a higher level of engagement and most of them both.

I can understand the reservations expressed by some fellow-petrolheads, especially from a puristic perspective: Where do you draw a line between aids and skill? Where, indeed? Is the use of racing ABS to be condemned? Of paddleshifts? And if so, then why not strip gearboxes of the synchromesh too in order to increase the required skill level?

I really don’t have an answer and am inclined to believe that this is a matter of flavour: some people will prefer to allocate more control effort to the said tasks, while others (me included) prefer to focus on dynamic states like side-slip angle and mu variations. I am also willing to accept that sometimes enhancing the performance envelope can have a negative impact on engagement: adding lap-time-improving technologies can introduce layers of complexity, resulting in a less transparent response from the vehicle, and of course pushing the limits upwards usually makes for a less approachable car on the road – one that may require irresponsible risks to come alive.

And this is why I am so excited about the opportunities arising from the developments in autonomy: what’s not to like about a future performance car boasting a swollen performance envelope, the natural-feeling ADAS to help drivers go faster and hone their skills and a super safety-net to keep everyone, including the authorities, happy?

I acknowledge the excitement of exploiting limit-handling close to and beyond our control bandwidth without any aids. I am afraid that this might become (if it is not becoming already) an exclusive pleasure for the privileged few.

What do you think?

Pushing the boundaries for driverless vehicles

I recently spent some time with Paul Fleck, President of DataSpeed Inc., at a closed circuit in France to talk through his company’s technology.

One of the technologies developed by his Michigan-based company has echoes from an earlier part of Paul’s career. With a background in electrical engineering as a drive-by-wire specialist, Paul cut his professional teeth in the 1990s at the Ford-powered F1 outfit Benetton, working alongside Michael Schumacher in his heyday.

And while drive-by-wire technology was later banned in F1, its evolution has had a lasting impact and it is now regularly found in robotics and cars.

DataSpeed has taken the drive-by-wire approach to another level, providing a multi-point interface to specific donor vehicles – in this case a Ford Mondeo (known as the Fusion in the US) – which can then be used as a base vehicle on which autonomous vehicle software and algorithms can be developed.

Other companies produce similar hardware for other vehicles, of course, and we’ll be looking at those when the opportunity arises, but I’m writing this to put this vehicle into context – this is what you start with before you have an autonomous vehicle: a blank canvas research platform.

Some of the hardware in the boot / trunk of the car.

Three obvious bits of equipment sit in the front of the car: a laptop, a small touch-screen unit and – somewhat interestingly – an Xbox controller. There’s quite a bit more tucked out of sight behind the back seat, and some sensors on the roof as well.

“The touch-screen display enables us to turn power on and off to our various electronics. Engineers love it because if something doesn’t work, they can reset it and it works again magically.”

“The laptop is acting as the computing platform bringing the joystick commands and converting them to a protocol on the CAN network, which in turn interfaces with our drive-by-wire system on the vehicle and that’s how we’re able to control it – using an Xbox controller.”

The laptop has some of DataSpeed’s own software running on a version of Linux called Ubuntu, along with RViz, part of the ROS system often found in robotics development.
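For readers unfamiliar with ROS, the basic pattern is that each component publishes and subscribes to messages on named topics. The sketch below shows a minimal Python node publishing a speed and steering request; the topic name and message type are generic examples, not DataSpeed’s actual interface.

```python
# Minimal ROS (Python) node publishing a drive command at 50 Hz.
# The topic name and message type are generic examples, not DataSpeed's API.
import rospy
from geometry_msgs.msg import Twist

def main():
    rospy.init_node("simple_drive_commander")
    pub = rospy.Publisher("/vehicle/cmd_vel", Twist, queue_size=1)
    rate = rospy.Rate(50)        # drive-by-wire systems expect a steady command stream
    cmd = Twist()
    cmd.linear.x = 2.0           # target speed, m/s
    cmd.angular.z = 0.1          # yaw rate / steering request, rad/s
    while not rospy.is_shutdown():
        pub.publish(cmd)
        rate.sleep()

if __name__ == "__main__":
    main()
```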

Paul enables the computer control by hitting two buttons on the vehicle’s steering wheel, and a number of codes flash across the laptop’s screen. Paul flicks a few buttons on the Xbox controller, showing the brakes, throttle and steering control while the vehicle is parked, before we started to drive off.

“Normally, one person shouldn’t be the safety driver and controller.  How we’ve developed the system means that we can easily hand control back to the driver, or the driver can take control back whenever they want.”

Paul pointed out that the use of an Xbox controller isn’t a core feature; it’s there to show off the control systems and the interface with the actuators already in the vehicle.

“We’re marketing to engineers that need a development platform, they’ll put their computer in the back, a sensing system and then some software, then they can get started”

The handover problem sounds small and simple, but it’s quite complex. Even Ford recently announced they were scaling back plans for partially autonomous vehicles and instead focusing efforts on fully autonomous vehicles, partly because engineers were relaxing too much to be able to regain control of the vehicle safely when prompted. This reflects findings from various studies run in simulators at the University of Southampton, which highlighted the delay between alerting the driver and successful handover as ‘alarmingly slow’ – but these are human problems, not engineering problems… hence Ford’s interest in removing the human completely. No human, no handover problem.

“In 3 seconds, you can travel 200 feet, at speed.. “ says Paul, “remember, if the autonomous computer is handing back control it means that it does not understand what’s going on, and what you’re asking a human to do in that period of time is: 1) switch context from relaxing crossword to driving a car; 2) understand what the threat is, and then; 3) do an evasive manoeuvre.. and I’m not sure humans are capable of doing that.”

It’s also clear from the use of a computer game controller that it wouldn’t be a great stretch to adapt control methodologies from computer games to work with this system, as a route to simulating environments and developing algorithms – for example Microsoft’s recent release of sensor simulation software for drones, or OpenAI’s release of a driverless car simulator in GTA V.

Here’s a contradiction in terms: How can you be a petrol-head and passionate about autonomous vehicles?

“Well, it’s still going to have an engine… when you’re a petrol head, you’re probably more technical,  into new technology and there’s a lot of new technology in autonomous vehicles.”

“I’ll be honest, this car right here is probably more sophisticated than an F1 car right now, it’s hybrid, highly engineered, there’s a lot of complex electro-mechanical systems, a lot of technology just in this vehicle. Unfortunately, I think the UK would be further ahead in ‘by-wire’ today had the FIA not banned that, because all that technology was being developed in England – I was involved with that over 20 years ago.”

At this point, Paul suggested we switch to navigating using GPS, showing a readout from the pair of NovAtel GPS receivers mounted on the roof, indicating latitude and longitude, as well as the standard deviation (i.e. the level of accuracy derived from the signals before further processing or sensor input) of about 30cm.

GPS has weaknesses, from electromagnetic atmospheric disturbances to – particularly in the UK – woodland and rain. It also needs a lot of computing power. Ever wondered why your satnav gets hot, or your phone eats battery power so quickly when you’re using it for navigation? It’s all about the maths.

You see, while GPS signals travel at the speed of light, they are coming from a long way away. By the time the signal has traveled the distance from multiple transmitters to one static point, both the direction and distance to the target will have shifted.  Add to that the fact the Earth is moving… and the receiver unit is moving too, and you start to wonder how anyone made it work in the first place.  Doubling up on that level of calculation several times to get the accuracy needed thus becomes very difficult.  That’s not all, there’s more…

“GPS is a line of sight modality, it’s not just vulnerable to bad weather and trees.  In cities you get the urban canyon effect, so you don’t get a lock on enough satellites to get a path because the signal can’t go through buildings, which is why we also add inertial measurement – an IMU” explains Paul.

A connection to the local cellphone network allows for corrections to be made to the received GPS signal.

“For this circuit, if I just wanted to drive around, that’s more than enough.  If we can hook up to the cellular network, and use the RTK (real time kinematics) we can get that down to around 3cm, but you’re probably not going to use your GPS for on-road detail navigation, you’d use your LiDAR and camera systems to see curb details”

To demonstrate the benefit of the LiDAR – on this vehicle, a Velodyne puck mounted in the middle of the roof – we parked up by the edge of the circuit, near some trees, a tyre barrier and an old gateway.

LiDAR uses focused infrared laser beams which shoot out, invisible to the naked eye, in straight lines. Because these are photons (albeit not visible to humans), they can be blocked, and an obstacle in the way will cast a shadow, hiding from view anything behind it.

The Velodyne LiDAR puck is a common sight on driverless vehicles in R&D, but probably won’t be seen on production vehicles in the same form.

To create a map, whether for following a path in real-time or recording a 3D environment for future use, you need obstacles fairly close-by for the laser to hit.  That’s why we’re at the edge of the circuit, rather than in the rather flat, featureless centre. In an urban environment, there would be more buildings, pedestrians and objects in general.

“For the LiDAR to work for SLAM (simultaneous localisation and mapping), we need things for it to see – we can see the wall there, even the foliage on the tree – if we humans can see that and understand it, then a computer can.  We can record all this as point cloud data in a file, at the same time we can drive in a small circle in this area – relative to the objects around us.  We’re also recording the speed as well.”

The video grab below shows output from the LiDAR displayed on the laptop. The arrow-shaped gap heading out in front of the car is the shadow cast by me, acting as an obstacle to the infrared laser beam – behind that shadow, everything is invisible. This is the main reason why multiple sensor systems are used. The GPS readout is visible on the left of the screen.

Paul replayed the recording, and I watched as he withdrew his hands from the wheel and feet from the pedals and we moved away, the laptop in front of me clearly showing the point cloud that we’d recorded just moments previously, with tree branches, barriers and cones all mapped out in 3D.

The computer shows the computed path ahead of the car icon on screen as a purple line, constantly calculating the robotic commands needed to control the car against the inbound data stream from the corrected GPS data.

The mapped path was displayed on the laptop as a thin purple line drawn out ahead of a small car icon, with a few blue blobs immediately in front of the icon to indicate the exact position that was being calculated at that moment.
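As an illustration of the kind of calculation running behind that purple line, here is a minimal pure-pursuit steering sketch: pick a point on the recorded path a fixed look-ahead distance away and compute the steering needed to reach it. This is a generic textbook approach offered only as an example, not a description of DataSpeed’s actual controller.

```python
# Minimal pure-pursuit path follower: a generic illustration of following a
# recorded path from position estimates (not DataSpeed's actual algorithm).
import math

LOOKAHEAD_M = 5.0
WHEELBASE_M = 2.85   # roughly a Mondeo/Fusion-sized car (assumed)

def pure_pursuit_steering(x, y, heading, path):
    """Return a steering angle (radians) aiming at the first path point
    at least LOOKAHEAD_M ahead of the current position."""
    target = None
    for px, py in path:
        if math.hypot(px - x, py - y) >= LOOKAHEAD_M:
            target = (px, py)
            break
    if target is None:
        return 0.0
    # Transform the target into the vehicle frame and compute the arc to reach it.
    dx, dy = target[0] - x, target[1] - y
    local_y = -math.sin(heading) * dx + math.cos(heading) * dy
    curvature = 2.0 * local_y / (LOOKAHEAD_M ** 2)
    return math.atan(WHEELBASE_M * curvature)

# Example: a gentle left-hand arc of recorded waypoints.
path = [(d, 0.02 * d * d) for d in range(0, 40)]
print(pure_pursuit_steering(0.0, 0.0, 0.0, path))
```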

“There’s a lot of maths to do this properly” Paul highlighted, as the laptop continued to warm my knees. “But take the GPS, LiDAR, put a camera on, tie in your GPS to a map database, you’re ready to start developing your fully autonomous vehicle.”

OK, so we’re all set?  Not so fast. Actually, you need more than just a development platform.

“You need engineers – Mechatronics, by-wire systems, design and build of sensor systems, mechanical engineers for integration, probably the most difficult is the algorithms – the type of software that’s being developed is not web development… it’s highly maths based, and implementing high level maths as algorithms in a programming language is hard.  In a development environment, we’re using ROS and certainly Python, C and C++, but for production you’re always going to be looking at safety rated operating systems, something like VX Works, an i7 platform with an NVidia processor for vision, safety rated and validated – the software is the most difficult part for both R&D and the transition to production”

Next up in our wide-ranging conversation was testing. So much has been made of building physical environments for testing vehicles; every country has test tracks, and many manufacturers and even suppliers do too.

“Doing testing like this is expensive.  You can only run so many scenarios. Putting together simulation environments and doing 100,000 variations and use-cases is very important, you’re not going to be able to do that manually.”

“Engineers make assumptions, we don’t design anything to fail, but look at Apollo 13 or the Shuttle programme, it’s the edge cases that crop up and cause problems.  We’ll minimise bad things happening with autonomous vehicles, I don’t think you’ll ever eliminate them, but overall driving will be safer.”

The average age of cars on the road is about 10 years, so even if every new vehicle were made autonomous ‘tomorrow’, it would be ten years before we reached 50% penetration. But that won’t happen for at least a decade, so already we’re at 20 years – and that’s only half the fleet. The other half takes at least another decade, bringing us to an almost entirely autonomous fleet by 2047.

I asked whether Paul could envisage a future in which we’ll only be allowed to drive on a track – an exotic pastime, like flying is today.

“I drive an Aston Martin, I don’t want that to be autonomous, I want the experience of driving a vehicle like that. I don’t want the computer to control it, I want to be the driver!”

Nobody really knows.  We can all gaze wistfully into the future, but any type of improvement to safety on roads must be taken seriously.

“Within the next 10-20 years we’re going to see a large amount of transformation in the transport and mobility industry.  Look what Uber and Lyft have done to the use of the Taxi and the Taxi driver, with ride-sharing.”

“One of the things in the automotive industry, what the OEMs have to pay attention to, is that utilisation of the vehicle is around 5%, something with a value of tens of thousands is just sitting in the driveway.  Uber drivers aren’t just using their own cars for themselves or their rides, they’re opening the opportunity for fewer cars in the future.”

The final demo of the afternoon was a high-speed run, and we recorded a circuit at a fairly brisk pace – certainly not race speed or particularly close to what anyone would call a racing line, but if you happen to have access to a closed circuit, it would be rude not to put your foot down just a bit.

Paul made a point of saying that the car’s replay of the lap would highlight problems that many driverless vehicles have, and that I highlight above as well: compute power.  This was, he reminded me, a ‘blank canvas’ R&D platform, not a supercomputer-equipped production model.

Thinking back to the path-prediction line plotted by the laptop: each time we drove fast enough to catch up with the planned path (as calculated against the GPS plot), the car lost accuracy. But remember, we were in a far from ideal environment and using only one of the systems – many more would be essential for a robust road or race experience.

Deliberately pushing limits of a vehicle’s technical capabilities is bound to end in problems, but that’s the nature of R&D – and it’s very different to what anyone should expect to see in production vehicles.


Paul will be bringing his vehicle to the Self Driving Track Days events in April (UK) and July (Austria).

If you’ve found this article interesting, you can find out more and take part at one of our upcoming meetups, or attend one of our one-day introductory workshops.

Paul will also be on-site during AutoSens US, which is taking place in May at the M1 Concourse in Detroit, Michigan.

AutoSens is our flagship project, and the world’s leading vehicle perception conference and exhibition. Aimed at engineers already working in industry, it sits alongside the Self Driving Track Days project and provides us with great access to industry experts around the world.

Inertia – Another essential ingredient for driverless vehicles

I know of at least three driverless R&D vehicles which rely on an inertia-based sensor to act as a back-up or supporting technology to other methods of path plotting and navigation (usually a combination of GPS and LiDAR, depending on the hardware and software deployed).

While the systems found in commercial jet liners boast extraordinary accuracy over thousands of miles (and have a price tag to match, running to tens of thousands), inertial measurement within a smartphone has far coarser accuracy and costs a few pennies. Needless to say, the units often found in autonomous vehicles sit somewhere in between – both in cost and in resulting accuracy.

As we know, it’s important to have a back-up system to provide a fail-over in case the first one or two fail, or don’t have enough data – that’s the same for navigation as it is for obstacle avoidance or dealing with difficult weather or terrain – but inertial navigation is far from perfect. 

Special thanks to Tiziano Fiorenzani, UAV design, guidance and control expert, who kindly gave us permission to share his blog on how inertial measurement works. This is part of a mini-series of personal blogs he’s producing about the main sensors that are commonly part of any drone – you can get more updates by following Tiziano on LinkedIn.

What is an IMU?

IMU stands for Inertial Measurement Unit and, true to its name, it measures “inertial quantities” such as accelerations and angular velocities. Those quantities can be used directly in an automatic feedback control loop (e.g. the gyro that stabilizes RC helicopters), or their data can be processed to estimate the attitude (roll, pitch, yaw or a quaternion).

What is inside an IMU?

Usually an IMU consists of the following sensors:

  • 3-axis accelerometer: measures the accelerations along its axes
  • 3-axis gyroscope: measures the rotational velocity around its axes
  • 3-axis magnetometer: measures the local magnetic field components along its axes

This setup comprises 9 sensors (3 sensors on 3 axes), so it is generally referred to as a 9-DOF IMU.

Accelerometers

Accelerometers sense all the accelerations applied to them, even those due to vibrations or manoeuvres. Isolation is of primary importance as well as an accurate calibration.

There is a simple way to picture how an accelerometer works. Take a scale and put a weight on it, say 1 kg (or 1 lb if you like, but I’ll stick with the International System). The scale will show 1 kg, because the object’s mass of 1 kg, subject to the gravity acceleration of g = 9.8 m/s^2 (1 g), experiences a force of F = mg = 9.8 N (newtons), that is 1 kg-force. Easy enough. Now, if you grab the scale and suddenly move it upward, you will read that the same object has grown heavier. What does that mean? Well, the scale measures the vertical component of the forces applied to it: calling a the upward acceleration we impressed, the same mass m multiplied by a higher acceleration (g + a) results in a higher force F = m(g + a).
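A quick numerical version of that thought experiment, with an arbitrary upward acceleration chosen purely for illustration:

```python
# The scale reading scales with (g + a): a numerical version of the thought
# experiment above, using an assumed upward acceleration of 2 m/s^2.
g = 9.8          # gravity, m/s^2
m = 1.0          # mass on the scale, kg
a = 2.0          # assumed upward acceleration of the scale, m/s^2

force_at_rest = m * g            # 9.8 N  -> scale shows 1.0 kg
force_moving = m * (g + a)       # 11.8 N -> scale shows ~1.2 kg
print(force_at_rest, force_moving, force_moving / g)   # apparent mass ~1.2 kg
```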

The sensor converts the acceleration into a voltage, which is later translated into a binary number that an autopilot can understand.

There are different types of accelerometers, though the most common are Capacitive and Piezoelectric, which basically measure voltage variations due to the sensor’s deformation. Check the references for more details.

When an accelerometer is used as an inclinometer, the gravity vector components measured at rest are used to evaluate the tilt angles.
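The usual relations recover roll and pitch from those gravity components; a minimal sketch is below. The axis conventions are an assumption and vary between autopilots, and yaw cannot be recovered from gravity alone – that is the magnetometer’s job.

```python
# Roll and pitch from the gravity components measured by a stationary
# accelerometer. Axis conventions are assumed (they differ between autopilots);
# yaw cannot be recovered from gravity alone.
import math

def tilt_from_accel(ax, ay, az):
    roll = math.atan2(ay, az)
    pitch = math.atan2(-ax, math.hypot(ay, az))
    return roll, pitch

# Level and at rest, the sensor sees only gravity on the z axis:
print(tilt_from_accel(0.0, 0.0, 9.8))      # (0.0, 0.0)
# Tilted 45 degrees about the x axis:
print(tilt_from_accel(0.0, 6.93, 6.93))    # roll ~0.785 rad (~45 deg)
```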

You can verify the effectiveness of an inclinometer with your own smartphone: install an app and check whether a painting has been hung level on the wall. The reading won’t be steady if you move your phone, as the inclinometer is affected by the extra accelerations you cause.

Gyroscopes (usually shortened to ‘gyros’)

Gyros measure the rotational velocity around their axis.

Let’s simplify how it works by imagining a tiny mass, connected to a housing by micro springs and forced to oscillate at a constant frequency. Then imagine the housing connected to the frame by transversal springs.

Any rotation of the system will induce a Coriolis force on the mass, pushing it in the direction of the second set of springs (if you are not familiar with Coriolis, check the references below; basically it is a force that appears to act on something that accelerates in a non-inertial frame, like the Foucault pendulum).

The displacement is measured by sensors placed along the mass housing and the rigid structure. As the mass is pushed by the Coriolis force, a differential capacitance is detected.

Even if the principle is different, and is not based on Coriolis force, one way to visualize a rotational velocity is the swing ride (even though when you kick your partner out he will experience Coriolis force…).

The faster the swing spins, the higher you go. That is because of the fictitious force called centrifugal force – in reality, it is the chain that prevents you from flying away. Your altitude is basically a measure of the ride’s speed.

Measuring the rotational velocity is of primary importance, as it can be integrated to obtain an estimate of the actual tilt angle and represents a great signal for feedback control loops. Unfortunately, gyros are noisy and their output at rest varies with the temperature.

Magnetometers

Magnetometers… I hate them! Seriously! But they are the only sensor that can give you your heading. A magnetometer measures the local magnetic field components, and you can compare those values with the World Magnetic Field Model in order to estimate the attitude, and thus the heading with respect to the local magnetic North. Why do I hate them? Because almost everything affects the local magnetic field… electric lines, solar activity, internal wiring, other sensors, transmitters or even the CPU… That is the reason why you want to put your magnetometer as far from any interference as possible. Still, what you measure is your attitude with respect to the local magnetic field: the local declination is added in order to obtain the heading with respect to true North.
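A minimal sketch of that last step – turning magnetometer components into a heading, with simple tilt compensation from the accelerometer’s roll and pitch. The axis conventions and the declination value are illustrative assumptions.

```python
# Heading from magnetometer components, with simple tilt compensation using
# roll/pitch from the accelerometer. Axis conventions and the declination value
# are illustrative assumptions.
import math

def heading_deg(mx, my, mz, roll, pitch, declination_deg=0.0):
    # Rotate the magnetic vector back onto the horizontal plane.
    xh = mx * math.cos(pitch) + mz * math.sin(pitch)
    yh = (mx * math.sin(roll) * math.sin(pitch)
          + my * math.cos(roll)
          - mz * math.sin(roll) * math.cos(pitch))
    magnetic_heading = math.degrees(math.atan2(-yh, xh))
    return (magnetic_heading + declination_deg) % 360.0

# Level sensor pointing roughly north-east, with an example 1.5 deg declination:
print(heading_deg(0.3, -0.3, 0.5, roll=0.0, pitch=0.0, declination_deg=1.5))
```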

If you want an idea of how easily the magnetic field is affected by external disturbances, use a compass or install an app on your phone and try walking around the office, close to computers or metal objects: you’ll see the needle going nuts.

Miniaturized magnetometers are based on the anisotropic magnetoresistance phenomenon: basically, the material changes its internal electric resistance when exposed to a magnetic field. Check the references for more details.

Calibration

Sensors would be useless without a proper calibration. For accelerometers and magnetometers you would probably do it once in a while, while gyros need to be calibrated every time before starting.

Why calibrate? Because the output signal usually takes the form:

measure = scale_factor*signal + bias + noise (m = sf*s + b + n)

The real signal is multiplied by a scale factor and then corrupted by an almost static bias value and a random noise.

While the noise can be easily filtered out, scale factor and bias are almost constant, thus they must be evaluated properly.

For the gyros, the bias is easily estimated before flying, when the vehicle is assumed to be still on the ground, so the actual measured rotational speed is only the bias component.

For accelerometers and magnetometers, the procedure is a little more complex, but basically you want to put your autopilot at different angles and estimate the best fitting sphere out of the measured quantities.
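As a concrete illustration of the gyro calibration described a couple of paragraphs above, the sketch below averages a batch of stationary samples to estimate the bias and then inverts the calibration model quoted earlier (the scale factor is left at 1.0 here, since estimating it needs a known reference rotation):

```python
# Gyro bias estimation and the calibration model m = sf*s + b + n, inverted to
# recover the signal. Sample values are made up; scale factor assumed to be 1.0.
def estimate_gyro_bias(stationary_samples):
    """Average raw readings taken while the vehicle sits still on the ground."""
    return sum(stationary_samples) / len(stationary_samples)

def calibrated_rate(raw_measure, bias, scale_factor=1.0):
    """Invert m = sf*s + b (the noise term is left to be filtered downstream)."""
    return (raw_measure - bias) / scale_factor

stationary = [0.021, 0.019, 0.020, 0.022, 0.018]   # rad/s read at rest (made up)
bias = estimate_gyro_bias(stationary)
print(bias)                           # ~0.020 rad/s of bias
print(calibrated_rate(0.520, bias))   # ~0.5 rad/s of true rotation
```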

What have we learned

The world of autopilots surely is intriguing. We have just begun to scratch the surface and we still have a long road ahead. IMUs are as fundamental to robotics and drones as they are to smartphones.

References

  • SensorWiki
  • Accelerometers
  • Coriolis force
  • Foucault pendulum
  • Gyros
  • Earth Magnetic Model
  • Accelerometer calibration
  • Magnetometer calibration algorithm

50+ vehicle perception videos

We’ve written about who this project is aimed at, but for people already working in vehicle perception or ADAS, as the industry generally calls it, what we have to offer through Self Driving Track Days really is too broad.

For those image processing engineers, machine learning developers, data engineers and roboticists already working on advanced driver assistance systems and automotive research and development, we have a sister event: AutoSens.

Running every year in Europe (Brussels, Belgium) and the US (Detroit, Michigan), AutoSens is the leading conference and exhibition for engineers working in the vehicle perception industry.

The event generally sees around 300 professionals from across the supply chain taking part in dozens of conference sessions on photon-sensor based detection, signal fusion, environment processing, and deployment.

Because it is an industry-only event, cutting-edge technologies at component level are demonstrated and discussed – technologies which will be used in the production vehicles of the future. One example is the broad understanding that dynamic (i.e. moving, high resolution) LiDAR units are only suitable for R&D, and that the future lies with solid state LiDAR. This has prompted much questioning in the mainstream media over when these technologies will become available – and yet half a dozen companies were displaying them as finished products at AutoSens in 2016.

What this told us is that there is a significant disconnect between what the industry is really doing and what the media are talking about, and this can only be improved by educating people – and, of course, the media. At the tip of the hype cycle, there is a great hunger for news and not necessarily a great deal of background research or context for each innovation or idea.

Without trying to meet every person face to face (a near impossible task, despite our efforts in the UK with countless networking events) or read every article, the best way to do this is by making useful information available to everyone, for free.

So, we have uploaded every conference session from the event to AutoSens on YouTube, more than 50 presentations from car manufacturers and suppliers, universities, engineers, technologists, researchers, AI specialists and more – so everyone can have that knowledge for free.

Explore the videos and playlists on the AutoSens video portal.