One common goal of autonomous car manufacturers is to build vehicles with capabilities that humans lack. For example, a system that simultaneously detects what is happening in every direction while instantly processing the information and predicting the movements of other cars can react more quickly than the average human. As vehicle manufacturers and technology companies compete to create the best self-driving car, Massachusetts Institute of Technology researchers are preparing to unveil a system that can see through fog.
Fog and Autonomous Driving Systems
Over the years, sensors and cameras have improved enough to make autonomous driving technology possible. Using these technologies together, advanced self-driving cars detect pedestrians, objects, and other vehicles. They process gathered data against a database of learned information, predicting the movements of other vehicles and pedestrians and reacting when those movements are unexpected. Advanced systems also detect road signs and adjust accordingly, speeding up, slowing down, or stopping.
While advanced self-driving cars have performed well in tests under a variety of conditions, one factor has remained an obstacle for years: fog. The cameras and sensors of an autonomous driving system need a mostly clear path to function optimally. Fog scatters light, degrading the readings those sensors depend on. One purpose of self-driving technology is to exceed human capabilities, yet current systems often perform worse in foggy conditions than the average human does. MIT’s new system may change that.
How MIT’s System Works
Researchers at MIT revealed some details about their new system earlier in March. Beyond seeing through fog, it outperforms the human eye. In the researchers’ test, a human eye could resolve objects only to a depth of 36 centimeters in thick fog, while their system accurately resolved images at a depth of 57 centimeters in fog of the same thickness, an improvement of about 58 percent. Because the system can perceive images accurately in fog and see objects at a greater distance, it may be the key to self-driving vehicles that do not fail in bad weather.
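The improvement figure follows directly from the two reported depths; a quick arithmetic check:

```python
# Depths reported in MIT's fog test, in centimeters.
human_depth = 36   # depth at which a human eye still resolved objects
system_depth = 57  # depth at which MIT's system resolved images

# Relative improvement of the system over the human baseline.
improvement = (system_depth - human_depth) / human_depth
print(f"{improvement:.0%}")  # prints "58%"
```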
A visibility depth of 57 centimeters in fog may not seem far, but a recent news bulletin from MIT explained the short readings: the researchers used man-made fog far denser than natural fog. According to the bulletin, the system’s estimated visibility depth in natural fog would be between 30 and 50 meters, depending on fog density. As the bulletin emphasized, the key point of the study was that the system outperformed the average human; even a system that merely matched human performance would have been a notable breakthrough in autonomous car technology.
To address the fog obstacle, the MIT team performed a statistical analysis of how light reflects in fog and found that the scattering process was predictable. They then trained software, based on those calculations, to accommodate foggy conditions. The new system incorporates principles of acoustics and blends interferometry with light detection and ranging (LiDAR). Interferometry keeps one reference beam inside the system while firing another at the scene; LiDAR measures distance by timing how long a fired beam of light takes to return to a sensor.
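The LiDAR timing principle reduces to a simple time-of-flight calculation; a minimal sketch (the function name and sample timing below are illustrative, not taken from the MIT system):

```python
SPEED_OF_LIGHT = 299_792_458  # meters per second

def lidar_distance(round_trip_seconds: float) -> float:
    """Distance to a target from a LiDAR pulse's round-trip time.

    The pulse travels out to the target and back, so the one-way
    distance is half the total path the light covers.
    """
    return SPEED_OF_LIGHT * round_trip_seconds / 2

# A pulse that returns after 200 nanoseconds implies a target
# roughly 30 meters away.
print(round(lidar_distance(200e-9), 2))  # prints 29.98
```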
What Happens Next?
Guy Satat was the lead researcher on the recent MIT project. He said the calculations behind the team’s solution were fairly simple, yet they open up more complex possibilities for autonomous car systems. According to Mr. Satat’s research, fog continually moves and changes, and current systems are not designed to handle such constant variation. Because autonomous systems rely on prior knowledge of specific situations, new and unique situations are challenging. The MIT team’s technology, however, requires no prior knowledge of specific fog densities, so the program has the potential to work in a wide array of foggy conditions.
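The idea of statistically separating fog scatter from true object returns can be sketched in a simplified form. Everything below is an illustrative toy, not the MIT algorithm: the arrival-time distributions, thresholds, and numbers are invented, and the real system works on richer per-measurement photon statistics.

```python
import random
import statistics

random.seed(0)

# Hypothetical photon arrival times in nanoseconds. Fog back-scatter
# dominates and arrives early with a broad, predictable spread, while
# a smaller cluster of true returns from a hidden object arrives later.
fog_scatter = [random.gauss(20.0, 8.0) for _ in range(2000)]
object_hits = [random.gauss(60.0, 1.0) for _ in range(300)]
times = fog_scatter + object_hits

# Estimate the fog's statistical profile from the measurements
# themselves (no prior knowledge of fog density), using robust
# statistics so the smaller object cluster barely skews the estimate.
center = statistics.median(times)
spread = statistics.median(abs(t - center) for t in times) * 1.4826

# Photons arriving well beyond the fog's typical spread are treated
# as genuine object returns rather than scatter.
cutoff = center + 3 * spread
returns = [t for t in times if t > cutoff]

# Time-of-flight distance from the surviving photons: light travels
# about 0.2998 m per nanosecond; halve for the round trip.
distance_m = 0.299792458 * statistics.mean(returns) / 2
print(f"estimated object distance: {distance_m:.1f} m")
```

The median-based background estimate here stands in for the researchers' trained statistical model; the one property it shares with the source description is that the fog profile is estimated from the incoming data rather than assumed in advance.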
In early May, Mr. Satat and the MIT team will attend the International Conference on Computational Photography at Carnegie Mellon University in Pittsburgh. Mr. Satat’s colleagues include an MIT associate professor of media arts and sciences, who is his thesis adviser, and a graduate who studied electrical engineering and computer science at MIT. At the conference, the team will present their research paper on the new system and explain how it works.