We caught up with Dr Lounis Chermak, Lecturer in Computer Vision and Autonomous Systems at Cranfield University. Dr Chermak is leading a workshop at Self Driving Track Days on 10 July on “From Robotics to Computer Vision: Self-Localisation in Autonomous Vehicles.” In this one-to-one interview we uncovered his involvement in designing a smart autonomous visual-based navigation sensor, the most important technology he believes automakers need to get right, and how he aims to “make the images talk.”
Which project has been the most interesting and rewarding one that you have worked on in relation to autonomy of systems for space exploration, autonomous platforms and human machine interaction?
Over the past few years, I have had the opportunity to work on or supervise several interesting projects related to autonomy of systems. That being said, the one I am most attached to is my PhD project, where I had to design a smart autonomous visual-based navigation sensor that adapts to different space robotic platforms while providing trajectory generation without prior knowledge of the environment. This was challenging in many aspects, since you need to take into account plenty of environmental and technical factors specific to the space context, such as limited hardware and extreme illumination and temperatures. However, this was an exciting three-year journey where I designed and developed everything, from the algorithms and the software to selecting the sensors and implementing the hardware, through to trials and validation. It was an intense process but a really enriching and rewarding experience.
On the road to autonomy, what is the most important technology that automakers need to get right?
It is difficult to point to a specific technology, since autonomous systems are multidisciplinary, bringing together computer vision, robotics, control and guidance, sensor fusion, navigation and, more recently, machine/deep learning. The latter has been quite disruptive, leveraging the former disciplines and enabling them to reach a level of performance that had not been achieved previously. This could not have happened without the emergence of affordable and efficient parallel computing hardware. That being said, even if we are advancing faster than ever, A.I.-based solutions still need to reach a higher level of reliability. Most importantly, the main subject that needs attention right now is being able to sense and model any environment, or at least its critical elements, with maximum reliability, and this requires innovations in all the disciplines cited above.
You won the Selwyn Award from the Royal Photographic Society in 2017. Could you explain more about the work that won you this award?
The Royal Photographic Society is one of the world’s oldest photographic societies. Since 1994, the Selwyn Award has been given annually to someone aged 35 or under who has conducted science-based research connected with imaging. Rather than recognising a specific piece of work, the distinction relates to the impact my work has had on the field of imaging early in my career.
I believe that when you get such awards, it is quite surprising, and the first thing that comes into your mind is: me? Really? Actually, that was the case. Then you start looking back at all your achievements and try to find the rationale behind it. I then realised that I had achieved quite a lot over a short period of time, whether on the academic side or in delivering on industrial contracts. In fact, my academic life is fast-paced, contrary to what one might think about academia. There is always something happening, and I always look to what needs to be done, or what’s next, rather than what was done. Obviously I feel proud and honoured to receive this distinction, but it’s not an end. Indeed, I see it as a sign that you are on a good track – keep going.
You like to say that you love to “make the images talk”. Can you elaborate on this and on what your goals are?
“A picture is worth a thousand words.” For the human brain that is obvious, but not for a computer. Actually, a human can instantaneously interpret what is in a scene by identifying the objects, their functionality and states, the people, their actions, emotions, and even their intentions to a certain extent. For the computer, on the other hand, an image equals hundreds of thousands to millions of pixels. Each pixel is a number representing the captured intensity level of luminosity. Hence, an image is a sequence of numbers, completely meaningless to the computer. My job as a computer vision scientist is to make these numbers meaningful for the computer by applying or creating algorithms that use them to realise some of the operations which are naturally done in our brains, such as detection, recognition and labelling of objects and people, stereo vision, depth estimation, etc. So in that sense I am making the images talk in order to reveal the information within, which helps me to achieve different objectives.
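To make this concrete, here is a minimal sketch (not Dr Chermak’s own code; it assumes NumPy as a dependency) of what “an image is a sequence of numbers” means in practice. A tiny synthetic greyscale image is just a grid of intensity values, and even a trivial operation such as taking intensity differences between neighbouring pixels starts to “make the numbers talk” by revealing where an object’s edges are:

```python
import numpy as np

# To a computer, this "image" is just an 8x8 grid of intensity numbers:
# a dark background (0) with a bright 4x4 square (255) in the centre.
image = np.zeros((8, 8), dtype=np.uint8)
image[2:6, 2:6] = 255

# Differences between horizontally adjacent pixels: large values mark
# vertical edges, i.e. the left and right boundaries of the square.
gradient = np.abs(np.diff(image.astype(int), axis=1))

# Columns where the gradient is strong locate the object's boundaries.
edge_columns = sorted(set(np.nonzero(gradient > 128)[1].tolist()))
print(edge_columns)  # boundary columns of the bright square
```

Real computer vision pipelines build on exactly this idea, stacking far more sophisticated operators (feature detectors, stereo matching, learned filters) on top of the same raw grids of numbers.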
You have a great interest in the education of young people. Could you share more about the “Teach Your Own” methodology?
“Teach Your Own” is the name of a book written by the American educator John Holt. His main ideas are that children need to be provided with a stimulating learning environment and, most importantly, that we should teach them how to learn rather than what to learn. Indeed, learning driven by motivation and enjoyment is much more efficient in terms of recall. Learning is also an ongoing process that should continue into adulthood. This is even more true in our time, when skills need to be constantly updated to follow the fast pace of technological advancement. The good news is that nowadays knowledge is everywhere and has never been more accessible. This is a trend that is set to grow, allowing people all around the world to learn various subjects at their own pace, especially with the emergence of MOOCs (Massive Open Online Courses). If John Holt’s ideas were disruptive in the 20th century, it seems that today much is in place to allow them to be implemented and to offer an individualised, fun, and efficient learning experience.
What are you most looking forward to about leading a session at Self Driving Track Days next week?
I am extremely happy to talk about my field and share some of my experiences, but learning is not a unilateral process; it is rather a multilateral one. So I am looking forward to people’s feedback, and I am also keen to hear about their experiences and motivations, whatever their level of expertise. I think we are living in an exciting time, where autonomous systems are really about to change society, so I can only be pleased to meet other people who share this enthusiasm.