So how do Autonomous Vehicles work? AVs create a map of their environment using a variety of sensors and cameras to monitor the positions and speeds of nearby vehicles, as well as traffic and road signals. Many of these sensors incorporate radar or Light Detection and Ranging (lidar) technology, which emits concentrated pulses of laser light and measures the time the light takes to return after reflecting off the car’s surroundings. This timing determines relative distances and lets the car monitor its surrounding environment [4]. Many companies developing self-driving cars, like Google, build maps of road systems by having employees ride in lidar-equipped vehicles with onboard technology that observes other drivers, pedestrians, and road conditions.
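
  To make the time-of-flight idea concrete, the sketch below (a simplified illustration, not any vendor’s implementation) converts a lidar pulse’s round-trip time into a distance using the speed of light; the 200-nanosecond example value is assumed purely for illustration.

```python
# A minimal sketch of how a lidar time-of-flight measurement becomes a distance.
# The pulse travels to the obstacle and back, so the one-way distance is half
# of (speed of light x round-trip time).

SPEED_OF_LIGHT_M_PER_S = 299_792_458  # metres per second

def lidar_distance_m(round_trip_time_s: float) -> float:
    """Estimate the distance to a reflecting surface from a pulse's round-trip time."""
    return SPEED_OF_LIGHT_M_PER_S * round_trip_time_s / 2

# Example: a pulse that returns after 200 nanoseconds implies an obstacle
# roughly 30 metres away.
print(lidar_distance_m(200e-9))  # ~29.98 m
```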

  The car’s software then processes all of these sensory inputs and the virtual map to plot a path and tell the car what to do. This path is chosen by prediction- and cost-function-based algorithms, which estimate the potential cost of each candidate driving route and select the one that minimizes the amount of damage, or the probability of damage occurring [6]. Obstacle detection and avoidance routines additionally help the software follow traffic rules and steer around hazards. Together, these programs use the input data to plan a course of action.
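
  As a rough illustration of the cost-function idea (a toy sketch, not the actual algorithm from [6]), the snippet below scores a few hypothetical candidate paths using made-up risk factors and weights, then picks the cheapest one.

```python
# Toy illustration of cost-function-based path selection: each candidate path
# is scored by a weighted sum of hypothetical risk factors, and the planner
# picks the lowest-cost option. Paths, factors, and weights are invented.

candidate_paths = {
    "keep_lane":   {"collision_risk": 0.02, "rule_violation": 0.0, "delay_s": 4.0},
    "change_left": {"collision_risk": 0.10, "rule_violation": 0.0, "delay_s": 1.0},
    "hard_brake":  {"collision_risk": 0.01, "rule_violation": 0.2, "delay_s": 8.0},
}

# Weights express how strongly each factor contributes to the total cost.
weights = {"collision_risk": 100.0, "rule_violation": 50.0, "delay_s": 1.0}

def path_cost(factors: dict) -> float:
    return sum(weights[name] * value for name, value in factors.items())

best_path = min(candidate_paths, key=lambda name: path_cost(candidate_paths[name]))
print(best_path)  # the lowest-cost candidate; "keep_lane" with these numbers
```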

  However, what if these sensory inputs are not accurate? Then the AV’s “map” would be incorrect. As previously described, lidar uses concentrated light to make its measurements, but this has limitations. One safety concern is that interfering light, such as sun glare on a wet road, may degrade the sensor’s ability to measure distances. Additionally, intense weather like snow, heavy rain, or fog can scatter or block the concentrated light, limiting the lidar’s ability to “see.” These limitations leave the car open to potentially severe accidents caused by seemingly innocuous conditions.

  Using lidar in combination with other sensing technologies mitigates this risk, since cameras and radar working alongside lidar can cross-check one another and converge on accurate readings. Additionally, breakthroughs in radar technology, which is less affected by weather, contribute to better and fuller mapping of the road, leading to safer systems overall [3]. Intuitively, higher levels of automation demand more of these sensors and technologies to guarantee accuracy, because less driver attention, and more precaution, is required. So while level 2 automation may require three radar sensors and one camera, a level 5 AV may need more than ten radar sensors, eight cameras, and a lidar sensor onboard [5].
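
  One simple way to picture how several sensors cross-check a reading is an inverse-variance weighted average, sketched below with invented distance estimates. Real AV stacks use far more sophisticated fusion (Kalman filtering and the like), so treat this purely as an illustration of the idea that more reliable sensors get more say.

```python
# Simplified sensor-fusion sketch: distance estimates from lidar, radar, and a
# camera are combined with an inverse-variance weighted average, so the more
# reliable sensor (smaller variance) contributes more to the fused value.
# The readings and variances below are made up for illustration.

readings = {
    # sensor: (estimated distance in metres, variance of that estimate)
    "lidar":  (24.8, 0.05),
    "radar":  (25.3, 0.50),
    "camera": (23.9, 2.00),
}

def fuse_estimates(readings: dict) -> float:
    weights = {name: 1.0 / var for name, (_, var) in readings.items()}
    total = sum(weights.values())
    return sum(weights[name] * dist for name, (dist, _) in readings.items()) / total

print(round(fuse_estimates(readings), 2))  # dominated by the lidar reading, ~24.82 m
```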

Do you think a car equipped with all of these sensors can see better than you can?

  Integrating these sensors is essential but challenging. The vehicle must use the virtual map to identify important features, like pedestrians, lane edges, and other cars, before it can form a driving plan. Convolutional neural networks (CNNs) have proven helpful in this endeavor. The sensor data serves as the input; the network processes it and outputs a driving route, which the car executes through its actuators by adjusting steering, brakes, speed, and acceleration [1]. The training data for such a network consists of camera frames and virtual maps in which the essential features are identified and an optimal path is outlined. Then, on testing data, the car classifies its surroundings and plans the optimal route given its “training.” As a general rule, each additional level of automation requires roughly ten times more processing and computational power to integrate the information from the larger sensor network and reach an accurate output [5].
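
  The sketch below shows, in PyTorch, what a minimal camera-frame-to-steering CNN might look like; the layer sizes, input resolution, and single steering-angle output are illustrative assumptions rather than the architecture used in [1].

```python
# Minimal sketch of a CNN that maps one camera frame to a steering command.
# Layer sizes and the 66x200 input resolution are arbitrary choices.

import torch
import torch.nn as nn

class SteeringCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),   # 3-channel camera frame in
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2), nn.ReLU(),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.LazyLinear(64), nn.ReLU(),   # LazyLinear infers the flattened size
            nn.Linear(64, 1),               # single output: predicted steering angle
        )

    def forward(self, x):
        return self.head(self.features(x))

# One forward pass on a dummy RGB frame (batch size 1).
model = SteeringCNN()
frame = torch.randn(1, 3, 66, 200)
steering_angle = model(frame)
print(steering_angle.shape)  # torch.Size([1, 1])
```

  In training, a network like this would be fit to frames labeled with the path or steering command a human driver actually took, so that at test time it can map new sensor input to a driving decision.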

  However, the technology used in these sensory systems is continuously improving, and its cost is falling. By the time automated vehicles become an everyday reality, the technology may be even better equipped to handle all driving conditions, opening the world to the benefits of AV technology safely and efficiently.

REFERENCES
[1] - Babiker, M. A. A., Elawad, M. A. O., & Ahmed, A. H. M. (2019). Convolutional Neural Network for a Self-Driving Car in a Virtual Environment. 2019 International Conference on Computer, Control, Electrical, and Electronics Engineering (ICCCEEE). https://doi.org/10.1109/iccceee46830.2019.9070826

[2] - Cardinal, D. (2020, February 4). The Future of Sensors for Self-Driving Cars: All Roads, All Conditions - ExtremeTech. ExtremeTech. https://www.extremetech.com/computing/305691-the-future-of-sensors-for-self-driving-cars-all-roads-all-conditions

[3] - Mitchell, C. (AccuWeather). (2021, January). How can self-driving cars “see” in the rain, snow and fog? ABC10.com; KXTV. https://www.abc10.com/article/weather/accuweather/self-driving-cars-radar-inclement-weather-rain-fog-snow/507-0438604e-ef32-4c0a-9634-99a6ec71fa12

[4] - Guerrero-Ibáñez, J., Zeadally, S., & Contreras-Castillo, J. (2018). Sensor Technologies for Intelligent Transportation Systems. Sensors, 18(4), 1212. https://doi.org/10.3390/s18041212

[5] - Processing power in autonomous vehicles. (2018, August 20). EeNews Automotive. https://www.eenewsautomotive.com/news/processing-power-autonomous-vehicles

[6] - Wei, J., Dolan, J., & Litkouhi, B. (n.d.). A Prediction- and Cost Function-Based Algorithm for Robust Autonomous Freeway Driving. Retrieved April 20, 2021, from https://www.ri.cmu.edu/pub_files/2010/6/2010_IV.pdf