
Autonomous Driving and Electrification for Safety and Security

Sensing Technology Leading Evolution of Autonomous Driving and Advanced Driver Assistance Systems

    Highlight

    Since the 1990s, the Hitachi Group has been developing autonomous driving and advanced driver assistance systems, mainly for automotive equipment. For the functions of autonomous driving and driver assistance to evolve, it is important to develop technology for sensing the driving environment. This article presents the multi-sensor configurations developed by Hitachi and describes a stereo-camera that captures the driving environment in three dimensions, a sensor fusion function that uses multiple sensors, and technology for implementing AI in vehicles. It also describes Hitachi’s approaches to the simulation and verification technologies used to verify increasingly advanced sensing systems.


    Author introduction

    Shoji Muramatsu, Ph.D.

    • Autonomous Driving Technology Development Department, Advanced Mobility Development Unit, Technology Development Functional Division, Hitachi Astemo, Ltd. Current work and research: Advanced development of autonomous driving and advanced driver assistance systems. Society memberships: Society of Automotive Engineers of Japan (JSAE).

    Takeshi Shima

    • Sensing System Department, AD & ADAS Business Unit, Hitachi Astemo, Ltd. Current work and research: Stereo-camera and image recognition/processing development.

    Hitoshi Hayakawa, Ph.D.

    • Edge Intelligence Research Department, Center for Technology Innovation – Digital Platform, Research & Development Group, Hitachi, Ltd. Current work and research: Research and development of processing systems for vehicle information.

    Tadashi Kishimoto

    • E/E Architecture Development Department, Advanced Mobility Development Unit, Technology Development Functional Division, Hitachi Astemo, Ltd. Current work and research: Development of implementation techniques for autonomous driving and advanced driver assistance systems.

    Masayuki Kanai

    • Autonomous Driving Technology Development Department, Advanced Mobility Development Unit, Technology Development Functional Division, Hitachi Astemo, Ltd. Current work and research: Advanced development of autonomous driving and advanced driver assistance systems.

    Introduction

    Technology development and practical application of autonomous driving (AD) and advanced driver assistance systems (ADAS) are progressing at an increasingly rapid pace, with vehicles being equipped with sensors for understanding the driving environment, vehicle controllers, and functions such as collision avoidance and hands-off driving on highways that require no driver operation. To further enhance safety and security, there is a growing need for improved sensing functions, such as the detection of falling objects and vehicles that suddenly cut in, which are difficult to detect with conventional sensors. In addition, as the usable range of autonomous driving functions expands, multi-sensor configurations that combine multiple sensors will become mainstream for enabling stable sensing in a wide range of driving environments.

    This article describes the basic configuration of driver assistance systems and the future configuration of all-around sensing systems, and presents approaches for stereo-cameras, sensor fusion, artificial intelligence (AI), and simulation-based verification of increasingly advanced sensing functions as part of the process of developing sensing technologies for various use cases.

    Sensing System Configuration for AD/ADAS

    Automotive systems for sensing the driving environment have become more sophisticated with advances in sensor devices and microcomputer technology, and with the growing needs both for assessments that standardize safety functions for driver assistance and for autonomous driving that reduces the burden on the driver. Figure 1 shows the configurations of the driving environment sensing systems developed by Hitachi. The basic ADAS with functions such as collision damage mitigation braking, shown in (1) of Figure 1, is characterized by its ability to provide inexpensive forward sensing using a single stereo-camera with a stereoscopic view of a wide area in front of the vehicle. The standard ADAS shown in (2) of Figure 1, which adds a function to support safety at the rear sides of the vehicle, uses multiple medium-range millimeter-wave radars in addition to the stereo-camera to sense the entire area around the vehicle. In the AD and high-performance ADAS shown in (3) of Figure 1, a long-range millimeter-wave radar for detecting distant objects and multiple cameras or light detection and ranging (LiDAR) sensors can be added to enable even more advanced sensing of the entire surroundings. In sensing systems that use these multiple sensors, a sensor fusion function integrates the results of multiple on-board sensors to create a redundant configuration in which sensors complement each other, allowing highly reliable sensing of the driving environment even in situations where certain sensors are impaired. The next chapter describes the features of Hitachi’s sensing system.

    Figure 1 — Hitachi’s Lineup of Driving Environment Sensing Configurations
    The basic ADAS is implemented using a stereo-camera. A camera and radar can also be added to achieve multi-functionality and improved reliability.

    Sensing Technologies for High Reliability and High Functionality

    Among the sensor configurations mentioned above, this chapter presents sensor fusion technology that combines a stereo-camera for front sensing with multiple cameras and radars, as well as AI implementation technology for advanced recognition and decision-making in complex driving environments.

    Three-dimensional Sensing Using Stereo-cameras

    1. Overview of stereo-cameras
      Since 2008, Hitachi has been developing practical applications for stereo-cameras that simultaneously acquire three-dimensional information and image information using two cameras as sensors for detecting the external environment in driver assistance systems. Like the human eye, a stereo-camera can calculate the distance to an object from the disparity between the left and right cameras (see Figure 2; a simplified sketch of this calculation follows this list). The camera generates parallax images (distance images) that carry three-dimensional information by calculating the disparity (distance) at each point of the image. From this three-dimensional information, it detects clusters of a certain size as three-dimensional objects, and then uses image processing to identify each detected object. Currently, Hitachi is developing a next-generation stereo-camera as an all-in-one system with a wide detection area that can detect pedestrians running into the street when the car turns right or left at an intersection and can detect the distant vehicles needed for the adaptive cruise function(1).
    2. Advantages of stereo-cameras
      A feature of Hitachi’s stereo-cameras is that they use both stereoscopic vision and AI-based object identification processing to achieve sensing that has a low processing load and is robust against environmental changes. In stereoscopic vision, an object is captured in three dimensions by using the two cameras to measure distance. Even objects with unknown shapes or patterns can be detected, making it possible to detect objects and measure distances even when the entire object is not visible. By utilizing this characteristic of stereoscopic vision, for example, as shown in (1) of Figure 2, even obstacles whose shape cannot be identified beforehand, such as a person lying on the road, can be detected without the prior learning process required by monocular cameras. As another example, as shown in (2) of Figure 2, the shape and distance of a vehicle can be identified even when a vehicle cutting in ahead is not entirely visible. These features of Hitachi’s stereo-cameras enable stable sensing even in complex driving environments.
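
    The measurement principle above can be summarized by the standard stereo relation Z = f × B / d (distance equals focal length times baseline divided by disparity). The following is a minimal sketch of that calculation, not Hitachi’s production code; the focal length and baseline values are illustrative assumptions.

```python
import numpy as np

def depth_from_disparity(disparity_px, focal_length_px, baseline_m):
    """Convert disparity (pixels) to depth (meters) using Z = f * B / d.

    focal_length_px: focal length expressed in pixels (assumed value below).
    baseline_m: distance between the left and right cameras in meters.
    Zero or negative disparity is treated as "at infinity".
    """
    d = np.asarray(disparity_px, dtype=np.float64)
    return np.where(d > 0, focal_length_px * baseline_m / np.maximum(d, 1e-9), np.inf)

# Illustrative optics: 1,200 px focal length, 0.35 m baseline.
# A disparity of 8 px then corresponds to 1200 * 0.35 / 8 = 52.5 m.
print(depth_from_disparity([8.0, 42.0], focal_length_px=1200.0, baseline_m=0.35))
```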

    Figure 2 — Measurement Principle of Stereo-cameras
    Parallax images are generated from the disparity between the views of the left and right cameras, and arbitrarily shaped three-dimensional objects (1) and (2) are detected from the parallax images.

    Multi-sensor Fusion

    To accurately recognize the situation around the vehicle, which is necessary for making driving decisions, Hitachi is developing a multi-sensor fusion function that integrates the detection results from multiple sensors such as cameras and radar (see Figure 3).

    The multi-sensor fusion function allows multiple sensors to complement each other in recognizing the situation around the vehicle, enabling reliable driving decisions even in difficult situations such as backlighting, where some sensors have impaired capability, or sensor failures. This function consists of a sensor adapter that absorbs the differences in the information detected by each sensor and converts it into common representations, object fusion for three-dimensional objects such as vehicles and pedestrians, lane fusion for road features such as lane markers and road edges, and map fusion for combining the recognition results from sensor detection with map information.

    The function features “optimal detection synthesis,” which uses the detection error model of each sensor to improve accuracy over individual sensor detection by synthesizing the detection results for which each sensor excels, and a “sensing map,” which reconstructs map-like information from sensor data by extracting the road structure as lanes estimated from the positional relationships of the lane markers and road edges detected by the sensors and their connection relationships. Optimal detection synthesis enables sensors to be added so that the position and speed of three-dimensional objects can be obtained more reliably and accurately, and the sensing map enables responses to road construction and other unexpected situations that are not included in the map information, thereby contributing to safe driving through continuous autonomous driving and driver assistance.
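
    As a rough illustration of how detection results can be combined according to sensor error models, the sketch below fuses one-dimensional position estimates by inverse-variance weighting, so that the sensor that is more accurate for a given quantity dominates the result. The sensor names and variances are illustrative assumptions, not Hitachi’s actual error models or fusion algorithm.

```python
import numpy as np

def fuse_measurements(measurements):
    """Fuse scalar estimates by inverse-variance weighting.

    measurements: list of (value, variance) pairs, one per sensor.
    Returns (fused_value, fused_variance); the fused variance is never
    larger than that of the best individual sensor.
    """
    values = np.array([v for v, _ in measurements])
    variances = np.array([var for _, var in measurements])
    weights = 1.0 / variances
    fused_value = float(np.sum(weights * values) / np.sum(weights))
    fused_variance = float(1.0 / np.sum(weights))
    return fused_value, fused_variance

# Assumed example: radar measures longitudinal distance more accurately
# than the camera, so the fused distance stays close to the radar value.
radar = (49.2, 0.25)    # distance [m], variance [m^2]
camera = (51.0, 1.00)
print(fuse_measurements([radar, camera]))   # approximately (49.56, 0.2)
```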

    Figure 3 — Multi-sensor Fusion Function
    Using the detection results of three-dimensional objects and lane markers from each sensor, three-dimensional objects are integrated by object fusion, lane information such as lane markers is integrated by lane fusion, and these integrated results are combined with map information by map fusion to output the situation around the vehicle with high reliability and accuracy. In object fusion, detection accuracy is improved by utilizing the advantages of each sensor to reduce errors through optimal detection synthesis technology. Lane fusion uses sensing map technology to extract road structures such as lane connection relationships and boundaries to deal with construction and other unexpected situations that do not appear in the map.

    Automotive Installation of AI Functions

    1. More advanced sensing technology through AI
      When detecting objects or driving areas from sensing data captured by cameras or LiDAR, the features of the detection target have conventionally been defined as rules. However, it is difficult to define rules for all situations, and AI technology is expected to be used to deal with complex driving environments. Growing attention is being focused on AI as an elemental technology for more advanced sensing functions because it can detect more diverse objects, even in complex driving environments, by learning from training data, eliminating the need for developers to define rules. On the other hand, AI is generally difficult to process in real time on devices used for automotive equipment because of the heavy amount of computation involved. To address this issue, Hitachi is working on technology to reduce the amount of AI computation to a level where it can be used in automotive devices.
    2. AI compression and implementation technique
      AI operation consists of extracting features from input data through multiple layers and finally outputting detection results for objects, driving areas, and so on (see Figure 4). Each layer consists of multiple nodes, and each node is a calculation unit that responds to a specific feature in the input data and calculates a feature value.
      To reduce the amount of AI computation, the number of nodes must be reduced, but simply removing nodes greatly degrades recognition performance. Therefore, the impact (sensitivity) of each node on recognition accuracy is analyzed beforehand and aggregated for each layer to calculate the impact (sensitivity) of that layer. This makes it possible to reduce the amount of computation without significantly degrading recognition accuracy by adjusting the removal so that many of the nodes in layers with a low impact on recognition accuracy are removed while most of the nodes in layers with a high impact are retained (a simplified sketch of this approach follows this list). In a test case using the sensitivity analysis and compression rate determination technologies(2) developed by Hitachi, it was confirmed that the amount of AI computation could be reduced by 88% with only a 1.6% degradation in accuracy compared to before compression.
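
    The following sketch illustrates the idea of turning per-layer sensitivities into per-layer node-removal ratios: layers with little impact on recognition accuracy are pruned aggressively, while highly sensitive layers are left mostly intact. The sensitivity values and the simple linear mapping are illustrative assumptions, not the actual algorithm of reference (2).

```python
def assign_pruning_ratios(layer_sensitivity, max_prune=0.9, min_prune=0.1):
    """Map per-layer sensitivity scores to per-layer node-removal ratios.

    layer_sensitivity: dict of layer name -> aggregated sensitivity
    (impact on recognition accuracy). The least sensitive layer gets a
    removal ratio of max_prune, the most sensitive one min_prune, and
    the others are interpolated linearly in between.
    """
    lo, hi = min(layer_sensitivity.values()), max(layer_sensitivity.values())
    span = (hi - lo) or 1.0  # avoid division by zero if all layers are equal
    return {
        layer: max_prune - (s - lo) / span * (max_prune - min_prune)
        for layer, s in layer_sensitivity.items()
    }

# Assumed sensitivities aggregated per layer:
sensitivity = {"conv1": 0.9, "conv2": 0.4, "conv3": 0.1, "fc": 0.7}
print(assign_pruning_ratios(sensitivity))
# conv3 (low impact) is pruned at roughly 90%, conv1 (high impact) at only about 10%.
```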

    Figure 4 — Overview of AI Compression and Implementation Technology
    Before compression, the impact of each node on recognition accuracy is analyzed and collected by layer. Compression is performed by setting a large node removal ratio for low-impact layers and a small node removal ratio for high-impact layers.

    Verification Technology for More Advanced Sensing Functions

    As sensing functions become more advanced, the number of functional verification items has increased significantly, and there is a limit to the testing that can be conducted using actual vehicles alone. In addition, cameras that recognize the area around a vehicle need to operate in a wide range of weather conditions and other traffic environments, and these environments must be recreated to verify the cameras’ functionality. For sensing function verification, Hitachi is working to recreate the driving environment more faithfully in simulation without using an actual vehicle, and has built a hardware-in-the-loop simulation (HILS) system for stereo-camera development that uses high-definition computer graphics (CG) (see Figure 5).

    Figure 5 — Configuration of High-definition Camera HILS System
    The left and right images generated by the simulator are input to the camera to evaluate scenarios that are difficult to recreate using a real vehicle.

    The HILS that was developed uses high-definition images created by computer to keep pace with improvements in image sensor performance, such as higher resolution. To present these high-definition images to the camera as if they were image data obtained from the image sensor, Hitachi developed a CG-injection unit that processes the images and transfers them to the stereo-camera’s memory in real time. The developed HILS environment enables simulation of various traffic environments and verification of the cameras’ sensing functions.
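
    Conceptually, such a camera HILS can be driven by a verification loop like the following sketch: the simulator renders left and right CG frames with known ground truth, the frames are injected into the camera, and the camera’s detection results are compared against the ground truth. The function names and interfaces here are placeholders for illustration, not the actual HILS software.

```python
def run_hils_scenario(scenario, render_cg_frames, camera_detect_distance, tolerance_m=1.0):
    """Sketch of a camera-HILS verification loop.

    render_cg_frames(scenario, t) -> (left_image, right_image, true_distance_m)
    camera_detect_distance(left_image, right_image) -> detected distance in
    meters, or None if nothing was detected. Both callables stand in for the
    CG simulator and the stereo-camera under test.
    """
    failures = []
    for t in scenario["timestamps"]:
        left, right, true_distance = render_cg_frames(scenario, t)
        detected = camera_detect_distance(left, right)
        if detected is None or abs(detected - true_distance) > tolerance_m:
            failures.append({"time": t, "expected": true_distance, "detected": detected})
    return failures  # an empty list means the scenario passed within tolerance
```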

    In functional verification, it is important to use simulations to evaluate rain, backlighting, and other conditions in which cameras have difficulty performing their sensing functions. This requires CG that realistically recreates the real world. Hitachi is participating in “Building a Safety Evaluation Environment in Virtual Space”(3) under the Cabinet Office’s “Strategic Innovation Promotion Program (SIP) Phase Two: Automated Driving (Expansion of Systems and Services)” and is developing simulation technology.

    In this program, physical phenomena are modeled virtually based on sensor detection principles, and a simulation model that reproduces the inside of the image sensor more accurately is being developed to build a simulation environment having high consistency with the real world for use in functional verification of cameras (see Figure 6).

    The developed simulation environment will enable evaluation of scenarios that are difficult to recognize or are dangerous, and will reduce the number of tests that used to rely on actual vehicles, thereby contributing to more advanced sensing functions.

    Figure 6 — Virtual Modeling Based on Physical Phenomena
    A simulation model with high consistency with the real world is developed, and its performance limits under rain and other conditions that impede recognition are evaluated.

    Conclusions

    This article described Hitachi’s approach to sensing technology in AD/ADAS, which is expected to make further advances going forward. Hitachi will continue to contribute to a safer and more secure automotive society by integrating and fusing multiple sensors, such as stereo-cameras and millimeter-wave radar, to develop advanced sensing functions that will help reduce accidents and provide greater convenience.

    REFERENCES

    1)
    K. Terui et al., “Sensing and Vehicle Control Technologies that Provide Safety and Comfort with Improved QoL,” Hitachi Review, 70, pp. 56–61 (Jan. 2021).
    2)
    D. Murata et al., “Automatic CNN Compression System for Autonomous Driving,” 2019 18th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 838–843 (Dec. 2019).
    3)
    “Strategic Innovation Promotion Program (SIP) Phase Two: Automated Driving (Expansion of Systems and Services): Building a Safety Evaluation Environment in Virtual Space.”