A new step forward in artificial intelligence for self-driving vehicles promises to improve their ability to operate in low-light conditions or in poor weather. The advance would fill one of the remaining gaps in the viability of autonomous vehicles for everyday use.

Self-driving cars normally detect traffic signs by identifying their shape or color using a camera. These methods struggle in dark or rainy conditions, and even obstructions like trees can make the task difficult. As a result, autonomous vehicles have largely been limited to operating during daylight hours, or have been forced to rely on a human driver taking manual control in difficult situations.

Now, researchers from Sookmyung Women’s University and Yonsei University in Seoul have used a machine learning algorithm to identify signs by analyzing whether their reflectiveness matches a designated pattern. Their algorithm is able to analyze multiple sections of a captured image at the same time, an improvement on existing methods that can only analyze one section at a time.
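The study itself does not publish code, but the core idea of scoring many candidate image regions at once against an expected reflectance pattern can be sketched roughly as follows. The region format, the reference template, and the normalized cross-correlation score used here are illustrative assumptions for the example, not the authors' actual method.

```python
# Minimal sketch (not the authors' code) of scoring candidate image regions
# by how well their reflectance pattern matches an expected sign template.
# The region format, template, and scoring threshold are all hypothetical.
import numpy as np

def reflectance_scores(image: np.ndarray,
                       regions: list[tuple[int, int, int, int]],
                       template: np.ndarray) -> np.ndarray:
    """Score every candidate region at once against a reference reflectance pattern.

    image    -- single-channel reflectance/intensity map of the scene
    regions  -- (x, y, w, h) boxes proposed as possible signs
    template -- expected reflectance pattern of a sign, e.g. a bright border
    """
    th, tw = template.shape
    patches = []
    for x, y, w, h in regions:
        patch = image[y:y + h, x:x + w].astype(np.float32)
        # Resample by nearest-neighbour sampling so all patches share the
        # template's shape and can be scored in one vectorised operation.
        rows = np.linspace(0, h - 1, th).astype(int)
        cols = np.linspace(0, w - 1, tw).astype(int)
        patches.append(patch[np.ix_(rows, cols)])
    stack = np.stack(patches)                                 # (n_regions, th, tw)
    stack = (stack - stack.mean(axis=(1, 2), keepdims=True)) / (
        stack.std(axis=(1, 2), keepdims=True) + 1e-6)
    norm_template = (template - template.mean()) / (template.std() + 1e-6)
    # Normalised cross-correlation: a higher score means the region's
    # reflectance pattern looks more like a retroreflective sign.
    return (stack * norm_template).mean(axis=(1, 2))
```

Because every candidate patch is stacked into a single array, all regions are scored in one pass rather than one at a time, which mirrors the parallelism the researchers describe.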

The algorithm flags sections it identifies as a possible sign, which are then passed through a “convolutional neural network.” Modeled after the way people see and think, the network identifies shapes, symbols, and numbers to interpret the sign. Using the information that certain shapes and symbols mean certain things in a given country, it then looks for a number indicating a speed limit or other more detailed information. If it decides the image shows a traffic sign, the information is then passed on so the car can make the necessary adjustments.
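As a rough illustration of this second stage, a small convolutional classifier applied to the flagged crops might look like the sketch below, written in PyTorch. The layer sizes, the 32×32 crop resolution, and the 43-class output (the size of the German GTSRB traffic-sign benchmark) are assumptions made for the example, not details taken from the study.

```python
# Illustrative sketch only: a small convolutional classifier of the kind the
# article describes, applied to region crops flagged as possible signs.
import torch
import torch.nn as nn

class TinySignClassifier(nn.Module):
    def __init__(self, num_classes: int = 43):  # 43 classes as in the German GTSRB benchmark (assumption)
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                            # 32x32 -> 16x16
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                            # 16x16 -> 8x8
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(x.flatten(1))

# Flagged regions, cropped and resized to 32x32, can be classified in one batch:
model = TinySignClassifier()
crops = torch.rand(5, 3, 32, 32)              # five candidate regions (dummy data)
predicted_classes = model(crops).argmax(dim=1)
```

A trained network of this kind would output a sign class for each flagged region, and only confident detections would be passed on to the vehicle's control logic.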

The process was tested on images of roads from the US, Germany, and South Korea. The system accomplished the task quickly and with a reasonable amount of computing power.

It runs on DRIVE PX 2, a computing platform created by NVIDIA specifically for self-driving vehicles. That speed and power are vital for allowing a car to quickly make sense of images containing multiple traffic signs, and it should be able to do so fast enough to leave time for the car to react to a sign in a real-world scenario.

Autonomous vehicle researcher Kang-Hyun Jo, who was not involved in the project, said such a system would be essential for a self-driving car to navigate the complexities of a real-world road environment.

“Autonomous cars should see and recognize arbitrary objects because we can’t guarantee what happens outside ourselves. To perform this task, it is so important to figure out and identify the information that directly endows the car with safe navigation.”
