Five seconds ahead of time
People and their behavior are sometimes unpredictable. This can lead to unexpected and risky situations, especially in road traffic. Looking just a few seconds into the future could prevent this. You don’t need a crystal ball or a third eye for that: cameras, algorithms and neural networks are the ingredients for more safety in road traffic.
Our brain is the computer that calculates our movements
Currently, Optical Flow allows us to jump five seconds into the future. Not enough for Warda Khan, Technical Consultant AI for Software Solutions at IAV: “Twenty seconds would be ideal. That would give enough time to revise a decision in traffic.” But the research is not there yet. That is why Khan is looking for investors to support IAV’s research – or pilot customers to bring the process to series maturity.
But what is Optical Flow? And how can it help make road traffic safer? First of all, Optical Flow is a computation our brain performs constantly, and it enables us to move purposefully. With every movement, an image of our surroundings forms on the retina, shifting relative to our own motion. These patterns of movement – the Optical Flow – within our field of vision help our brain answer questions such as: Am I moving, or are the objects around me moving? How far away are the objects, and when will I reach them? How do I avoid a collision?
The images of the objects move across the retina at different speeds. Optical Flow enables us to make course corrections – and avoid obstacles. In road traffic, this means that people recognize the movements of other road users out of the corner of their eye and unconsciously take them into account when deciding on their own course of movement.
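One of the quantities optical flow provides is time-to-contact: as an object approaches, its image on the retina (or camera sensor) expands, and the ratio of its angular size to its rate of expansion approximates the time until it is reached. The following minimal sketch illustrates that relationship; the numbers are purely illustrative and not taken from IAV’s system.

```python
def time_to_contact(angular_size: float, expansion_rate: float) -> float:
    """Approximate time-to-contact (seconds) as theta / (d theta / dt),
    where theta is the object's angular size in the image and
    d theta / dt is how fast that size is growing."""
    if expansion_rate <= 0:
        # Object is not getting larger, so it is not approaching.
        return float("inf")
    return angular_size / expansion_rate

# An object spanning 0.10 rad of the field of view and growing by
# 0.02 rad/s will be reached in roughly five seconds.
print(time_to_contact(0.10, 0.02))  # 5.0
```

This is the same geometric cue the article attributes to the brain: no absolute distance is needed, only how the image changes over time.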
This computing power can be simulated and transferred, for example, to navigating autonomous cars – to understand the movement trajectories of objects outside the vehicle. Optical flow encodes not only the presence of an object but also how its position changes over time. Currently, IAV’s solution is based on camera feeds from inside as well as outside the car, which makes it easy to integrate without additional equipment costs.
Cameras recognize intention to act in real time
The basic idea for IAV’s Optical Flow project is to track in real time where drivers focus their attention. “We combine this information with images of the vehicle’s surroundings captured by cameras and GPS data. This allows us to see if the person behind the wheel is intending to turn right or left, or if they want to go straight,” Khan explains. “If they are about to make an unfavorable decision, we could give a warning signal – for example, if they are steering left into oncoming traffic.”
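The warning logic described above can be sketched as a simple rule: compare the maneuver predicted from the driver’s gaze with hazards detected around the vehicle, and warn when they conflict. This is a hedged illustration only; the maneuver and hazard labels (`"turn_left"`, `"oncoming_left"`, and so on) are assumptions made up for this sketch, not IAV’s actual interface.

```python
# Hypothetical conflict table: which detected hazard makes a given
# predicted maneuver unsafe. All labels are illustrative assumptions.
CONFLICTS = {
    "turn_left": "oncoming_left",
    "turn_right": "cyclist_right",
    "straight": "obstacle_ahead",
}

def warn(predicted_maneuver, detected_hazards):
    """Return a warning string if the intended maneuver conflicts
    with a currently detected hazard, otherwise None."""
    hazard = CONFLICTS.get(predicted_maneuver)
    if hazard in detected_hazards:
        return f"Warning: {predicted_maneuver} conflicts with {hazard}"
    return None

print(warn("turn_left", {"oncoming_left"}))   # warning issued
print(warn("straight", {"oncoming_left"}))    # None: no conflict
```

In a real system the maneuver prediction and hazard detection would each come from trained models; the point here is only that the final warning step can be a transparent comparison of the two.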
At its core, the solution relies on real-time gaze estimation of the driver. A neural network is trained to recognize whether the person being filmed is looking down, left, right or up, using fixed reference points in the eye. Feeding this information into an intention-prediction network yields a probability for the direction in which the person’s gaze will move within the following five seconds.
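To make the gaze-estimation step concrete, here is a toy geometric stand-in for what the trained network learns: classify gaze direction from the pupil’s position relative to fixed eye landmarks (the eye corners). The landmark coordinates and thresholds are invented for illustration; IAV’s network learns this mapping from data rather than applying a hand-written rule.

```python
def gaze_direction(pupil, inner_corner, outer_corner):
    """Classify gaze as left/right/up/down/center from 2D landmarks,
    given as (x, y) points in image coordinates (y grows downward)."""
    # Center of the eye, estimated from the two corner landmarks.
    cx = (inner_corner[0] + outer_corner[0]) / 2
    cy = (inner_corner[1] + outer_corner[1]) / 2
    width = abs(outer_corner[0] - inner_corner[0])
    # Pupil offset, normalized by eye width so the rule is scale-free.
    dx = (pupil[0] - cx) / width
    dy = (pupil[1] - cy) / width
    if abs(dx) < 0.1 and abs(dy) < 0.1:
        return "center"
    if abs(dx) >= abs(dy):
        return "right" if dx > 0 else "left"
    return "down" if dy > 0 else "up"

# Pupil shifted toward the outer corner of a horizontal eye:
print(gaze_direction((0.8, 0.5), (0.0, 0.5), (1.0, 0.5)))  # right
```

A real pipeline would extract these landmarks from the in-cabin camera frame first; the classification shown here is the final, simplest step.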
“When we combine both pieces of information, we end up with the direction of the gesture,” Khan concludes. This in turn reveals the intention of the person behind the wheel, who can then be warned against a wrong decision. A driver assistance system based on this method could prevent many accidents caused by human error.