Autonomous Vehicles

Mukhtar Oudeif, Abdo Mohsen, and Mohamad Alasry
University of Michigan – Dearborn

ABSTRACT: This paper focuses on autonomous vehicles (AVs). The importance of autonomous vehicles is displayed by the hundreds of billions of dollars the automotive industry has invested in this technology. Smaller companies that started from the ground up and developed autonomous technology have been purchased for billions of dollars, showing how intense the race among companies is to be the first to reach the goal of full autonomous capability. This article discusses three critical elements of the overall AV system: Sensor Fusion, Localization, and Computer Vision.

INTRODUCTION: With increasing demand for highly effective safety systems in autonomous vehicles, active and passive safety systems have become a major part of this technology. Many auto companies today are accelerating the shift toward autonomous driving because of the benefits it offers and the many issues it can help resolve. Vehicle accidents, congestion, emissions, resource dependence, and fatalities are among the major challenges the automotive industry currently faces, and autonomous vehicles have the potential to help reduce them. The autonomous vehicle consists of many systems, but the intent of this article is to highlight three critical subsystems that play an essential role in enabling self-driving technology. For example, in order to enable route planning and decision making, a vehicle must be fully capable of understanding its surrounding environment and providing accurate measurements to onboard computers, which is accomplished by utilizing sensors and sensor fusion. In addition, many algorithms within the vehicle, such as computer vision and localization, depend on data communicated by these sensors to enable certain self-driving features. The main objective is to elaborate on the limitations, techniques, and benefits of these three subsystems and to discuss the development currently being conducted to overcome the remaining challenges and provide more robust AV features.

RELATED WORK:

Autonomous Vehicle Sensors and Sensor Fusion: AVs are the future of the auto business, and to deliver on that future, companies must build products that are reliable and safe. An AV must be able to recognize the surrounding environment to avoid mishaps and accidents. The sensing system plays an important role in the AV: it measures objects and distances and provides data feedback to the computer module. In Figure 1, a block diagram of an autonomous vehicle system is shown.

Figure 1: High-Level Autonomous Vehicle System Block Diagram

Significant components like LiDAR, radar, and sensor fusion, along with other devices like IMUs and GPS, are utilized to measure the surrounding environment and deliver data to the computer for analysis, planning, control, and decisions.
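As a rough illustration of the block diagram's sense, fuse, and decide stages, the following Python sketch wires toy sensor readings through a fusion step into a coarse decision. Every name, value, and threshold here is an assumption for illustration only, not part of any cited system.

# A minimal sketch (not from the cited papers) of the sense -> fuse -> decide
# loop implied by the Figure 1 block diagram. All names are illustrative.
from dataclasses import dataclass

@dataclass
class SensorFrame:
    lidar_ranges: list   # distances to detected objects, meters
    radar_speed: float   # closing speed from Doppler radar, m/s
    gps_position: tuple  # (latitude, longitude)

def fuse(frame: SensorFrame) -> dict:
    """Combine raw sensor readings into one environment estimate."""
    return {
        "nearest_obstacle_m": min(frame.lidar_ranges),
        "closing_speed_mps": frame.radar_speed,
        "position": frame.gps_position,
    }

def plan(world: dict) -> str:
    """Very coarse decision logic standing in for the planning block."""
    if world["nearest_obstacle_m"] < 10 and world["closing_speed_mps"] > 0:
        return "brake"
    return "cruise"

frame = SensorFrame(lidar_ranges=[42.0, 8.5, 60.1], radar_speed=3.2,
                    gps_position=(42.32, -83.23))
print(plan(fuse(frame)))  # -> "brake": obstacle at 8.5 m while closing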
Object detection is one of the AV challenges. Sensor fusion and radar are the main means by which the AV understands the surrounding environment. These elements are important for building a robust and precise algorithm that performs well under any weather condition, so that it can supplant the human vision sense in the AV. Sensor algorithms for the AV can follow many different approaches depending on quality, performance, and configuration.

The AV uses LiDAR, camera, and radar sensors as inputs to other devices and, based on their readings, makes determinations for certain maneuvers. These elements collect input data to calculate precise distances to surrounding objects and provide feedback to the controller unit to navigate routes, allowing the vehicle to calculate maneuvers so it can decide when to stop and how to avoid obstacles. Camera modules can differ in performance; HD cameras give better data for image processing, but they need more computing power and their output results in greater file sizes. Radar components play an important role in the AV and serve as input to many vehicle applications. Radar relies on the Doppler effect, which provides an efficient way to measure the speeds of vehicles, information that is valuable to the sensor algorithms. LiDAR is fundamentally a sonar-like sensor that emits laser pulses to map the distance between the sensor's location and detected objects. LiDAR performs better in poor environments compared with other object-detecting sensors because it can use longer wavelengths, and its resolution depends on the number of scans in one direction. Utilizing more fused sensors can provide better performance and robustness. The figure below shows the occupancy-grid mapping used for navigation.

Figure 2: Occupancy-Grid Mapping
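To make the occupancy-grid idea concrete, the following is a minimal Python sketch; the grid size, cell resolution, and update increment are assumptions for illustration, not values from the cited work.

# A minimal occupancy-grid sketch (assumed details, not the cited authors'
# code): each cell holds the probability that it is blocked, raised whenever
# a simulated LiDAR return lands inside it.
import numpy as np

GRID = 20          # 20 x 20 cells
CELL_M = 0.5       # each cell covers 0.5 m

def mark_hit(grid: np.ndarray, x_m: float, y_m: float) -> None:
    """Raise the occupancy evidence of the cell containing a LiDAR return."""
    i, j = int(y_m / CELL_M), int(x_m / CELL_M)
    if 0 <= i < GRID and 0 <= j < GRID:
        grid[i, j] = min(1.0, grid[i, j] + 0.3)  # clamp to probability 1

grid = np.full((GRID, GRID), 0.5)  # 0.5 = unknown
for x, y in [(3.0, 4.2), (3.1, 4.3), (7.5, 1.0)]:  # fake returns, meters
    mark_hit(grid, x, y)

# Cells above a threshold are treated as obstacles during route planning.
print(np.argwhere(grid > 0.7))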
Figure 3: Sensor Setup around the Vehicle

Figure 3 shows the sensor setup and coverage around the vehicle. The authors note that each 2D LiDAR captures a single-row image. The captured image is transformed from Cartesian to cylindrical coordinates and then converted to pixels at a given angular resolution. Next, calculations are made based on the connectivity distance as well as the ratio. The researchers describe object tracking in two steps, a prediction step and an update step, and Figure 4 shows how tracks are managed by the number of consecutive associations.

Figure 4: Track Status Management Diagram

A Kalman Filter (KF) was used for the prediction step, and the Joint Probabilistic Data Association Filter (JPDAF) was used for the update step. Because the 2D LiDAR and radar measurements alone are not sufficient, the KF prediction must be designed to be linear. Track updates are based on the estimations that fall within a certain distance. Several estimations can be obtained from one object because of occlusion, over-division, and sensor fusion; similarly, tracks can share and update estimations based on the probabilistic weights computed by the JPDAF. Track size is for the most part determined by the value of the measurements. Figure 5(a) shows the point cloud obtained from the 2D LiDARs, and Figure 5(b) shows the resulting tracks with their ID, size, velocity, and direction.

Figure 5: (a) Point cloud from the 2D LiDARs; (b) ID, size, direction, and velocity of tracks

Utilizing various types of sensors provides better surveillance results, not only for dynamic objects but also for stationary objects and their three-dimensional shapes. The authors also showed that placing a pair of LiDARs at the rear of the vehicle and a pair at the front gives better performance for tracking and detecting objects from any direction. Applying a good calibration algorithm to the sensors allows the vehicle to detect the surrounding environment with the best consistency and precision.

There are many different algorithms for calibrating sensors. The authors found that a pseudo-inverse based algorithm is the best method to calibrate the fused sensors; it represents the coordinates of the target information in the image plane. Each of the sensors used has a coordinate value that can be represented by a point on its image plane. The authors showed how they used the linear least-squares method to find the transformation matrix, and they ran several experiments to collect data relating the vision and radar targets so they could compute the transformation matrix and evaluate the calibration accuracy. To calculate the coordinate calibration, four equations are needed to obtain the transformation matrix.
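The calibration procedure described above lends itself to a short illustration. The sketch below solves for a transformation matrix from matched radar and image points with linear least squares (equivalently, the Moore-Penrose pseudo-inverse); the affine 2x3 model and the sample coordinates are assumptions for illustration, since the paper's exact formulation is not reproduced here.

# A sketch of the pseudo-inverse calibration idea: find a transformation
# matrix that maps radar target coordinates onto the camera image plane
# from matched point pairs. Model and numbers are illustrative assumptions.
import numpy as np

# Matched targets: radar (x, y) in meters and the pixel (u, v) where the
# camera saw the same target.
radar = np.array([[5.0, 1.0], [8.0, -2.0], [12.0, 0.5], [6.0, 3.0]])
pixel = np.array([[410., 260.], [205., 300.], [330., 240.], [520., 225.]])

# Homogeneous radar coordinates: rows of [x, y, 1].
A = np.hstack([radar, np.ones((len(radar), 1))])

# Least squares (equivalently applying the pseudo-inverse of A): one
# coefficient vector per image axis, stacked into a 3x2 matrix T.
T, *_ = np.linalg.lstsq(A, pixel, rcond=None)

# Project a new radar detection into the image to check the calibration.
print(np.array([7.0, 0.0, 1.0]) @ T)  # predicted (u, v) for that target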
Robust Localization Methods for Autonomous Driving: Localization is one of the most critical components at the center of every autonomous vehicle. The idea of enabling route planning and decision making for a vehicle begins with location measurement. This essential concept depends on data from various sources and measuring devices such as cameras, LiDARs, ultrasonic sensors, and radars. In the figure below, we can see an example of Ford's research autonomous vehicle platform used for localization and perception.

Figure 6: Ford's example platform with the sensing components used for localization

These components measure the surrounding environment and supply real-time data to onboard computers for further analysis. The localization algorithm utilizes this data to calculate and pinpoint an accurate position within an environment. Once the vehicle's position is determined, different autonomous features are enabled, such as understanding where you are with respect to other objects, which enables obstacle avoidance.

Understanding that this concept is a crucial feature of autonomous driving, developers realized that a robust system is required to enable accurate position measurement. This is a challenging task because real-world environments have different settings and conditions. Some areas are more complex than others; for instance, urban environments are very crowded, and road structures differ from one place to another. Weather conditions also play a major role and can introduce noise into the data, which can affect software algorithms and prevent them from providing an accurate reading of the vehicle's position. In all three technical papers I reviewed, the authors presented different techniques that they believe could help tackle some of these difficult issues and provide the autonomous community with a robust and accurate vehicle position detection method. Each author argues that the methods they design against have limitations, and for this reason they attempt to provide an alternative solution that can help overcome the issues identified. For example, a camera alone can have a difficult time calculating position in poor environments.

Method 1: Map-Based Localization Method for Autonomous Vehicles Using 3D-LIDAR Sensors

The authors point out that the traditional method of detecting vehicle position is not sufficient because noise introduced from different sources causes interruptions while collecting data from GPS and IMU sensors. They also argue that using a camera alone is troublesome for position detection: in poor environments there will be visibility issues, and the approach is only robust on flat road surfaces. The authors proposed using a 3D-LIDAR sensor and a multi-frame generation algorithm along with a Kalman Filter, which they believe is a more robust method to detect road features in complex settings and increase position detection accuracy; the block diagram is shown in the figure below.

Figure 7: Block diagram for the Multi-Frame Algorithm

The Multi-Frame algorithm uses data from the 3D-LIDAR sensor to identify road markings and curbs. The output of this process goes into another block, the Iterative Closest Point (ICP) algorithm, which calculates and minimizes the difference between two point sets. That output then goes into the Kalman Filter, which eliminates noise and outliers so that a better estimation of objects in the surrounding environment is achieved. The GPS uses this data to help determine an accurate vehicle position.
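Since the pipeline above names ICP as its alignment step, a compact sketch of a generic 2D ICP loop may help. This is a textbook version with brute-force matching and an SVD-based rigid fit, not the authors' implementation, and the test data is synthetic.

# Generic 2D ICP sketch: alternately match each point to its nearest
# neighbor, then solve for the best rigid transform (Kabsch/SVD). A real
# implementation would add outlier rejection and convergence checks.
import numpy as np

def icp(src: np.ndarray, dst: np.ndarray, iters: int = 20):
    """Return rotation R and translation t aligning src toward dst."""
    R, t = np.eye(2), np.zeros(2)
    for _ in range(iters):
        moved = src @ R.T + t
        # Brute-force nearest neighbor in dst for every moved source point.
        d = np.linalg.norm(moved[:, None] - dst[None, :], axis=2)
        matched = dst[d.argmin(axis=1)]
        # Best rigid transform between the matched sets (Kabsch/SVD).
        mu_s, mu_d = moved.mean(0), matched.mean(0)
        U, _, Vt = np.linalg.svd((moved - mu_s).T @ (matched - mu_d))
        dR = Vt.T @ U.T
        if np.linalg.det(dR) < 0:   # keep a proper rotation, no reflection
            Vt[-1] *= -1
            dR = Vt.T @ U.T
        R, t = dR @ R, dR @ (t - mu_s) + mu_d
    return R, t

# Toy check: dst is src rotated 5 degrees and shifted; ICP should
# approximately recover that transform.
src = np.random.default_rng(0).uniform(0, 10, (50, 2))
a = np.radians(5)
true_R = np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])
dst = src @ true_R.T + np.array([0.5, -0.3])
R, t = icp(src, dst)
print(np.round(R, 3), np.round(t, 3))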
After the design was completed, the authors experimented with this concept on a school campus and compared the output against a dataset recorded using only low-cost GPS, INS, and SF. The table below shows that the proposed method yields a 58% improvement in location detection compared with the methods the authors argued against.

Figure 8: Experimental results

Method 2: Robust Vehicle Localization in Urban Environments Using Probabilistic Maps

In this article, the authors presented an enhanced map-based method that treats map data as probability distributions instead of a fixed representation of the surrounding environment. The authors believe that this new approach is more robust and accounts for scenarios that laser-scanning methods cannot handle in difficult conditions when calculating vehicle position.

To achieve the intended output for the probabilistic map, the data is first obtained from different measuring devices such as GPS, IMU, and LIDAR. This data is then calibrated so that overlapping areas in the map are aligned using a known optimization technique called GraphSLAM. Calibrating the data and the LiDAR results in a high-resolution representation of the environment. Finally, the probabilistic map is generated by storing the laser-beam intensity and the aligned data in map cells, including variance data points, which is sufficient for mapping and localization objectives. The method enabled self-driving in busy urban areas with no failures, something very difficult to achieve using traditional real-time vehicle position detection. In addition, the team performed autonomous testing to verify the robustness of the concept and whether the algorithm would adapt to changes. When the team conducted the testing, both the lateral and longitudinal offsets were greatly reduced, which helped achieve the design objectives.
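As an illustration of the probabilistic-map idea, the sketch below keeps a running mean and variance of laser intensity per map cell and scores how well a new reading matches; the cell structure and the numbers are assumptions for illustration, not Levinson and Thrun's implementation.

# Each map cell remembers the distribution of laser intensities seen there,
# updated online with Welford's algorithm, so a new scan can be scored by
# how consistent it is with the map.
import math

class MapCell:
    def __init__(self):
        self.n, self.mean, self.m2 = 0, 0.0, 0.0

    def add(self, intensity: float) -> None:
        """Welford's online update of the cell's intensity statistics."""
        self.n += 1
        delta = intensity - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (intensity - self.mean)

    def variance(self) -> float:
        return self.m2 / (self.n - 1) if self.n > 1 else float("inf")

    def log_likelihood(self, intensity: float) -> float:
        """How consistent a new reading is with what the map remembers."""
        var = max(self.variance(), 1e-3)
        return -0.5 * (math.log(2 * math.pi * var)
                       + (intensity - self.mean) ** 2 / var)

cell = MapCell()
for reading in [118.0, 121.0, 119.5]:   # intensities from aligned passes
    cell.add(reading)
print(cell.log_likelihood(120.0))       # high: matches the learned cell
print(cell.log_likelihood(60.0))        # low: environment likely changed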
Method 3: Robust and Precise Vehicle Localization Based on Multi-sensor Fusion in Diverse City Scenes

In this study, the proposed system was designed to achieve centimeter-level accuracy by using data from various types of sensors, such as LiDAR, GNSS, and IMU, in many parts of the city. The system depends on the LiDAR measurements to achieve high efficiency and accuracy, which was possible because the GNSS RTK module is assisted by multi-sensor fusion when determining the vehicle's position. Although this sensor alone fails to perform effectively during severe conditions and around construction, in this study the problems posed by rain and snow were resolved by the system as a whole. The system was tested using LiDAR map generation, LiDAR-based localization, horizontal localization, and heading angle estimation. With this technique, the Lucas-Kanade algorithm was applied, and cumulative probabilities were calculated using the heading angle estimation. GNSS localization and sensor fusion were also evaluated using quantitative analysis. The system was considered efficient and robust during severe weather conditions, including rain and snow, and all of this was tested in real-world scenarios; a dataset covering 60 km of driving in different urban road traffic was created to test the designed system. The result is a robust localization system that uses information from sensors such as LiDAR, GNSS, and IMU for highly accurate localization in urban areas, highways, tunnels, and rural areas. This system was implemented in a substantial autonomous driving fleet, and its LiDAR-based localization, horizontal localization, and heading angle estimation components helped the vehicles operate independently in busy areas.

Figure 9: Block diagram of using sensor fusion to collect data and calculate vehicle position

In my technical papers, I reviewed three different methods that tackle three different problems; each proposed design is unique and solves a certain problem, and the summaries above explain what each design is solving. From my perspective, I believe it is very important that an autonomous vehicle can determine its position accurately regardless of the condition or environment. For this reason, a vehicle should contain multiple ways to measure its location, so that if one method fails, there is always another source to obtain data from. In addition, I think it is more robust to conduct multiple measurements and compare them for greater accuracy.
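One way to picture this redundancy argument is inverse-variance weighting of whatever position sources are currently available; the sketch below is illustrative only and is not the cited system's fusion algorithm.

# Fuse position estimates from independent sources, weighting each by the
# inverse of its variance, and fall back gracefully when a source drops out
# (e.g., GNSS in a tunnel). All values are illustrative.
import numpy as np

def fuse_positions(estimates):
    """estimates: list of (position_xy, variance) from available sensors."""
    if not estimates:
        raise ValueError("no localization source available")
    w = np.array([1.0 / var for _, var in estimates])
    pts = np.array([pos for pos, _ in estimates])
    fused = (w[:, None] * pts).sum(axis=0) / w.sum()
    return fused, 1.0 / w.sum()   # fused position and its variance

gnss = (np.array([104.2, 33.1]), 4.0)    # noisy GNSS fix, variance in m^2
lidar = (np.array([103.8, 33.4]), 0.04)  # tight LiDAR map match
print(fuse_positions([gnss, lidar]))     # dominated by the LiDAR estimate
print(fuse_positions([gnss]))            # GNSS-only fallback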
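The steps just described (Gaussian filtering, gradient magnitude and direction, completing intermittent edges, and contour extraction) map closely onto the classic Canny pipeline. The sketch below uses standard OpenCV calls on a synthetic frame as a stand-in, since the reviewed papers do not publish their code.

# Canny-style vision sketch: denoise, compute gradients, let hysteresis
# close gaps in the edges, then extract contours. Synthetic test image.
import cv2
import numpy as np

# Synthetic frame: a bright rectangle (an "object") on a dark road.
img = np.zeros((200, 300), dtype=np.uint8)
cv2.rectangle(img, (90, 60), (210, 150), 200, thickness=-1)
noise = np.random.default_rng(1).integers(0, 25, img.shape).astype(np.uint8)
img = cv2.add(img, noise)                 # saturating add, stays in uint8

blurred = cv2.GaussianBlur(img, (5, 5), sigmaX=1.4)  # suppress sensor noise

# Gradient magnitude and direction via Sobel operators.
gx = cv2.Sobel(blurred, cv2.CV_32F, 1, 0)
gy = cv2.Sobel(blurred, cv2.CV_32F, 0, 1)
direction = np.arctan2(gy, gx)            # the edge-direction map
print(f"edge directions span {direction.min():.2f} to {direction.max():.2f} rad")

# Canny applies non-maximum suppression and double-threshold hysteresis,
# which is what completes the intermittent edge fragments described above.
edges = cv2.Canny(blurred, 50, 150)

contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)
print(f"{len(contours)} contour(s); largest area "
      f"{max(cv2.contourArea(c) for c in contours):.0f} px")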
Other steps the algorithms take are meant to overcome distractions like shadows. The vision system can use street lanes to determine whether a shadow is relevant or not. However, there are limitations; for instance, if the street lanes are faded, the vehicle may struggle to respond based on the data the vision system is providing.

In all three of the research papers I studied on computer vision, all three used generally the same hardware and similar software strategies. LiDARs and radars were used extensively. Modeling was also used heavily in the papers, both to debug issues before they appear on the car and to test theoretically. Some differences were how many sensors were used and how they were positioned. In terms of the algorithms, some seemed to include more specific steps, but the general steps were the same, as they all touched on edges, contouring, and filtering; some of the camera systems differed, however, in the way the overall picture was developed. One major difference I noticed was the number of LiDARs used in the different papers: some used very few while another used five, a difference driven by cost. Another difference I found interesting was how the engineering teams placed their cameras and sensors. Some had them all over the vehicle while another seemed more strategic in the placement; I believe this is because the latter team was further along in development while the former was retrofitting a vehicle for initial testing.

Extreme testing should be performed on the vision system. One test written about was completed across most of Europe. The vehicle was a basic vehicle retrofitted with autonomous vehicle parts. Ten cameras were used for the vision system, placed on various parts of the vehicle according to the engineering team's judgment of importance, and five LiDARs were used. LiDARs are very expensive but critical for good and safe functionality. Integration of the system was greatly emphasized so that the response of the system would be optimal. The images caught by the cameras were sharp and mapped correctly. This vehicle was built at a lower price scale, and it still passed many rigorous tests and reached its final destination successfully.

CONCLUSION: Autonomous vehicles may be the next big thing to hit the stage and the biggest development in the automotive industry since the assembly line was invented. With the use of systems such as localization and computer vision, and hardware such as various sensors and LiDARs, autonomous vehicles continue to improve as they go through rigorous testing. Many companies have already completed millions of miles of successful testing, with software being the backbone of these vehicles.

REFERENCES:

[1] Liang Wang, Yi Zhang, and Jun Wang. "Map-Based Localization Method for Autonomous Vehicles Using 3D-LIDAR Sensors." IEEE (2017). Web.

[2] J. Levinson and S. Thrun. "Robust Vehicle Localization in Urban Environments Using Probabilistic Maps." IEEE (2010). Web.

[3] Guowei Wan, Xiaolong Yang, Renlan Cai, Hao Li, Yao Zhou, Hao Wang, and Shiyu Song. "Robust and Precise Vehicle Localization Based on Multi-sensor Fusion in Diverse City Scenes." IEEE (2018). Web.

[4] Jelena Kocić, Nenad Jovičić, and Vujo Drndarević. "Sensors and Sensor Fusion in Autonomous Vehicles." IEEE (2018). Web.

[5] Kinn Na, Jaemin Byun, Myongchan Roh, and Beomsu Seo. "Fusion of Multiple 2D LiDAR and RADAR for Object Detection and Tracking in All Directions." IEEE (2014). Web.
[6] Jihun Kim and Dong Seog Han. "Radar and Vision Sensor Fusion for Object Detection in Autonomous Vehicle Surroundings." IEEE (2018). Web.

[7] Jinghua Guo, Ping Hu, and Rongben Wang. "Nonlinear Coordinated Steering and Braking Control of Vision-Based Autonomous Vehicles in Emergency Obstacle Avoidance." IEEE Transactions on Intelligent Transportation Systems, vol. 17, no. 11. Web.
[8] A. Broggi et al. "Extensive Tests of Autonomous Driving Technologies." IEEE Transactions on Intelligent Transportation Systems, vol. 14, no. 3, pp. 1403-1415, Sept. 2013. Web.

[9] H. Song. "The Application of Computer Vision in Responding to the Emergencies of Autonomous Driving." 2020 International Conference on Computer Vision, Image and Deep Learning (CVIDL), 2020, pp. 1-5. Web.