
    Highlight

    The labor shortages and rising freight volumes of recent years have been accompanied by advances in the use of automated transportation at warehouses and other facilities primarily operated by the manufacturing and distribution industries. Most of the automatic guided vehicles used for this purpose require guides such as magnetic tape or other markers to be installed at the operation site to identify their position, such that installation costs and maintenance work have become an obstacle to the adoption of these vehicles. In response, Hitachi has developed the ICHIDAS position estimation component, which works by using a laser scanner to acquire data on the geometry of the surroundings and then matching this to a map of the site. When installed in an autonomous vehicle, the component allows the vehicle to operate without the need for guides (magnetic tape or markers). This article describes the functions of ICHIDAS and presents an example of its use.


    Author introduction

    Kohsei Matsumoto, Ph.D.

    • Center for Exploratory Research, Research & Development Group, Hitachi, Ltd. Current work and research: Research and development of artificial intelligence and robots. Society memberships: The Robotics Society of Japan (RSJ), The Japanese Society for Artificial Intelligence (JSAI), and The Institute of Electronics, Information and Communication Engineers (IEICE).

    Shuichi Maki, Ph.D.

    • Internet of Things Component Design Department, Drive Systems Division, Hitachi Industrial Equipment Systems Co., Ltd. Current work and research: Research and development of transportation robots and position estimation components. Society memberships: The Society of Instrument and Control Engineers (SICE).

    1. Introduction

The labor shortages and rising freight volumes of recent years have been accompanied by demand for the automation of transportation work at warehouses and other facilities. This transportation work has conventionally been automated using autonomous vehicles that rely on guides such as magnetic tape. These vehicles identify their position and orientation (hereinafter, positioning) by measuring magnetic tape attached to the floor or markers installed at the operation site. This technique, however, requires 1) installation work at the time of vehicle introduction and whenever the facility layout changes, and 2) maintenance such as mending peeled magnetic tape. These requirements can disrupt shop floor operations and prevent the introduction of vehicles at operation sites.

In response, techniques that require no guides (hereinafter "guideless") have been proposed, primarily in the field of robotics, that work by matching data on the geometry of the surroundings acquired using a laser scanner to a map containing a representation of this geometry(1). While algorithms for this type of positioning are in some cases available as open-source software libraries, autonomous vehicle manufacturers often find it difficult to adopt them in their products because of issues such as licensing and the evaluation and extension of operational functions.

In consideration of these problems, Hitachi developed the ICHIDAS component, which combines a positioning function with related operational functions such as the map generation required for positioning, with the aim of offering guideless positioning in the form of a module that autonomous vehicle manufacturers can easily incorporate into their products(2).

    2. ICHIDAS Position Estimation Component

    2.1 ICHIDAS Overview

Table 1—ICHIDAS (ICHIDAS2-AX) Specifications
Laser scanners, map generation, and other functions can be selected depending on how ICHIDAS is to be used, thereby making it suitable for a wide variety of autonomous vehicles.

Figure 1—Configuration for Autonomous Vehicles Using ICHIDAS
When ICHIDAS is introduced, the surroundings are measured using an autonomous vehicle or similar, and a map is created from the collected data. Positioning is then performed using this map.

ICHIDAS provides functions such as laser-scanner-based positioning in the form of modules intended to allow autonomous vehicles to operate without the need for guides. ICHIDAS's main specifications are shown in Table 1 and the configuration of its use in autonomous vehicles in Figure 1. ICHIDAS is made up of a main component that handles the actual positioning calculations, map generation software, and other elements. The main component is a small industrial computer that, together with a laser scanner, is connected to the autonomous vehicle's cruise controller via a wired local area network. The laser scanner here refers to a sensor that produces point cloud data representing the geometry of the surroundings (hereinafter "scan data"). This point cloud is generated by pointing a laser beam in different directions and measuring the distance to any object in each direction from the time taken for the laser light to reflect off the object and return to the sensor.
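As a concrete illustration of what scan data looks like, the following sketch converts one laser-scanner sweep (a distance measurement per beam direction) into a two-dimensional point cloud in the sensor frame. The function and parameter names are illustrative assumptions for this article, not part of the ICHIDAS interface.

```python
import numpy as np

def ranges_to_scan_points(ranges, angle_min, angle_increment, range_max):
    """Convert one laser-scanner sweep into 2-D scan data (sensor frame).

    ranges          : measured distance for each beam direction
    angle_min       : bearing of the first beam [rad]
    angle_increment : angular step between beams [rad]
    range_max       : readings at or beyond this value count as "no return"
    """
    ranges = np.asarray(ranges, dtype=float)
    angles = angle_min + angle_increment * np.arange(len(ranges))

    # Keep only beams that actually hit an object.
    valid = (ranges > 0.0) & (ranges < range_max)
    r, a = ranges[valid], angles[valid]

    # Polar -> Cartesian: each remaining beam becomes one (x, y) point.
    return np.stack([r * np.cos(a), r * np.sin(a)], axis=1)
```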

The component performs positioning by matching this scan data to a map of the surroundings on each laser scanner measurement cycle and passes the result to the autonomous vehicle's cruise controller. Based on the positioning results it receives, the autonomous vehicle can plan its route and control its travel accordingly.
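The per-cycle flow described above can be pictured as the simple loop below. The scanner, matcher, and send_pose interfaces are hypothetical placeholders used only to show the structure; they are not the actual ICHIDAS interfaces.

```python
def positioning_loop(scanner, matcher, site_map, send_pose):
    """Illustrative per-cycle positioning loop (assumed interfaces).

    scanner.read()           -> one frame of scan data
    matcher(scan, site_map)  -> estimated pose (x, y, theta) for that frame
    send_pose(pose)          -> hands the result to the cruise controller,
                                e.g. over the wired LAN mentioned above
    """
    while True:
        scan_xy = scanner.read()            # one laser-scanner measurement cycle
        pose = matcher(scan_xy, site_map)   # match scan data against the map
        send_pose(pose)                     # cruise controller plans and controls travel
```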

As described above, by providing the positioning function as a component that can be incorporated into autonomous vehicles, ICHIDAS not only eliminates the need for guides but also reduces the processing load on the vehicle's cruise controller and allows autonomous vehicle manufacturers to develop the cruise controller and the component separately.

Although techniques that also use the odometry of the autonomous vehicle have been proposed for positioning, Hitachi chose to perform positioning using only a laser scanner because relying on odometry limits the range of autonomous vehicles to which the component can be applied. Meanwhile, the positioning calculations used in the component require a map of the surroundings to be available in advance. This map is produced using map generation software that automatically generates it from scan data obtained by measuring the surroundings.

    The following sections describe the positioning and the map generation functions as well as their associated features.

2.2 Positioning Function

Positioning is performed by matching scan data to a map of the surroundings. "Matching" here refers to processing that, when the measured geometry of the surroundings is overlaid on the map expressed as an image, searches for the position and orientation that result in the highest proportional overlap of geometric features.
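As a rough illustration of this kind of matching, the sketch below scores each candidate position and orientation by summing the occupancy values of the map cells that the transformed scan points land in, and keeps the best-scoring pose. It assumes the map is an occupancy grid held as a NumPy array; the actual matching algorithm inside ICHIDAS is not detailed in this article.

```python
import numpy as np

def score_pose(scan_xy, grid, resolution, origin, x, y, theta):
    """Overlap score of the scan with the map for an assumed pose (x, y, theta)."""
    c, s = np.cos(theta), np.sin(theta)
    # Transform the scan points from the sensor frame into the map frame.
    px = x + c * scan_xy[:, 0] - s * scan_xy[:, 1]
    py = y + s * scan_xy[:, 0] + c * scan_xy[:, 1]
    # Convert map-frame coordinates into grid cell indices.
    ix = ((px - origin[0]) / resolution).astype(int)
    iy = ((py - origin[1]) / resolution).astype(int)
    inside = (ix >= 0) & (ix < grid.shape[1]) & (iy >= 0) & (iy < grid.shape[0])
    # Sum the occupancy probabilities under the transformed scan points.
    return grid[iy[inside], ix[inside]].sum()

def match_scan(scan_xy, grid, resolution, origin, pose_candidates):
    """Return the candidate pose with the highest proportional overlap."""
    return max(pose_candidates,
               key=lambda p: score_pose(scan_xy, grid, resolution, origin, *p))
```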

When this matching-based positioning is used on autonomous vehicles, two requirements are particularly important: robustness against changes in the surroundings, and computation fast enough to keep up with the measurement cycle of the laser scanner.

Firstly, robustness against changes in the surroundings means limiting the degradation of positioning accuracy even when differences arise between the map and the actual surroundings due to changes in the facilities at the operation site. This is needed because, in environments where autonomous vehicles operate, the actual surroundings often differ from the map due to carts being temporarily left out or other reasons. Such change-induced positioning errors are particularly prone to occur when the change is in an area close to the laser scanner. This is because point clouds measured close to the laser scanner have a higher density, a consequence of the scanner's structure, which performs measurements by emitting laser beams in many different directions from a rotating sensor head. When performing matching, these high-density point clouds therefore concentrate in grid cells close to the laser scanner (see Figure 2). Accordingly, ICHIDAS reduces matching errors by deleting some of the points in the point cloud (scan data) that lie in regions close to the laser scanner, lowering the density in these regions and giving more weight to matching points on the map in regions farther from the scanner. Moreover, the larger the grid spacing of the map (that is, the lower its resolution), the more the point cloud will be concentrated in particular grid cells, with the consequence that matching errors are more likely to occur. To deal with this, more points are deleted from the point cloud the lower the resolution of the map.
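One simple way to realize this density adjustment is sketched below: points within a given radius of the scanner are thinned out, and more thinning would be applied when a lower-resolution map is used. The near_radius and keep_every parameters are illustrative assumptions rather than actual ICHIDAS settings.

```python
import numpy as np

def thin_near_points(scan_xy, near_radius, keep_every):
    """Reduce the density of scan points close to the scanner.

    Points within `near_radius` of the sensor are subsampled so that only
    every `keep_every`-th point is kept; points farther away are kept as-is.
    For a lower-resolution map, a larger `keep_every` would be chosen.
    """
    dist = np.linalg.norm(scan_xy, axis=1)
    near = scan_xy[dist < near_radius]
    far = scan_xy[dist >= near_radius]
    return np.vstack([near[::keep_every], far])
```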

Next, with regard to rapidity of processing, the aim is to make the positioning processing fast enough to keep up with the scanning cycle of the laser scanner used on the autonomous vehicle. Specifically, this speedup was achieved using a coarse-to-fine processing framework. This is a technique commonly used in image processing whereby matching is initially performed using low-resolution images, after which further matching using high-resolution images is performed only in the vicinity of this initial result, making the processing faster by narrowing the search scope. In this case, separate high- and low-resolution maps are prepared for use in the positioning matching process, with an initial rough match obtained using the low-resolution map and a detailed match then obtained using the high-resolution map. When matching against low-resolution maps in this way, matching errors occur easily because the lower resolution causes geometric features to be lost. For this reason, image processing that preserves geometric features such as edges is applied when the low-resolution maps are created, reducing such errors.
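A minimal sketch of this coarse-to-fine flow, under stated assumptions, is shown below. It assumes a match_fn that returns the best pose among a set of candidates (for example, the match_scan sketch earlier) and a refine_candidates helper that generates fine-grained candidate poses around a coarse result; both are assumptions introduced for illustration.

```python
def coarse_to_fine_match(scan_xy, low_res_map, high_res_map,
                         coarse_candidates, refine_candidates, match_fn):
    """Two-stage matching: a rough search on the low-resolution map, then a
    refined search on the high-resolution map restricted to the neighbourhood
    of the rough result (narrowing the search scope speeds up processing).
    """
    rough = match_fn(scan_xy, low_res_map, coarse_candidates)
    return match_fn(scan_xy, high_res_map, refine_candidates(rough))
```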

Figure 2—Positioning Performed by Matching Scan Data to a Map
Matching is performed to align the scan data with the map in a way that results in the highest ratio of overlap. When there are differences between the map and the actual surroundings in the region close to the measurement position (left), this may cause an error in the matching result (right). ICHIDAS reduces this error by adjusting the density of the measurement point cloud that constitutes the scan data.

    2.3 Map Generation Function

Positioning is performed by matching scan data to a map of the surroundings, which takes the form of an occupancy grid map. An occupancy grid is created by splitting the space into a grid of cells and represents the geometric features of walls and facilities by recording, for each cell, the probability that an object exists there. The map of the surroundings therefore takes the form of a bitmap image whose pixel values record the probability of an object's existence, based on whether a laser reflection was obtained when the surroundings were scanned.
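The sketch below shows one common way to hold such a map: a grid of cells whose occupancy evidence is accumulated in log-odds form and converted back to probabilities (the pixel values) when needed. The log-odds update is a standard technique from the probabilistic robotics literature(1) and is used here purely for illustration; the internal representation used by ICHIDAS may differ.

```python
import numpy as np

class OccupancyGrid:
    """Minimal occupancy grid: a bitmap whose cells hold the probability
    that an object exists there, updated from laser reflections."""

    def __init__(self, width, height, resolution, origin=(0.0, 0.0)):
        self.resolution = resolution               # cell size [m]
        self.origin = origin                       # map-frame position of cell (0, 0)
        self.log_odds = np.zeros((height, width))  # 0.0 corresponds to probability 0.5

    def cell_of(self, x, y):
        """Grid cell indices (row, column) containing the map-frame point (x, y)."""
        ix = int((x - self.origin[0]) / self.resolution)
        iy = int((y - self.origin[1]) / self.resolution)
        return iy, ix

    def mark_hit(self, x, y, delta=0.85):
        """A laser reflection observed at (x, y) raises that cell's occupancy evidence."""
        iy, ix = self.cell_of(x, y)
        self.log_odds[iy, ix] += delta

    def probabilities(self):
        """Convert log-odds back into occupancy probabilities (the pixel values)."""
        return 1.0 - 1.0 / (1.0 + np.exp(self.log_odds))
```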

This map needs to be created prior to positioning. It is produced in two steps: (1) the surroundings are measured, and (2) a map is automatically generated from the measurements using the map generation software.

First, measuring the surroundings involves a user operating an autonomous vehicle fitted with a laser scanner by remote control to collect scan data of the surroundings. Automatic map generation is then performed on the collected scan data by repeating two steps: (1) estimating the sensor's movement, and (2) mapping the scan data onto the map based on the estimated movement. Estimating the sensor's movement in (1) is performed in the same way as the positioning calculation described above. However, whereas positioning matches the scan data against a completed map, step (1) matches each new frame of scan data against the map built up from the frames already mapped, thereby obtaining the position and orientation at which that frame was measured. Next, in step (2), the scan data are coordinate-transformed using the position and orientation obtained in (1), and the points making up the scan data are recorded (mapped) onto the bitmap image. By repeating this sequence of steps, the scan data are integrated on the bitmap image, automatically creating a map of the surroundings.
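The loop below sketches this incremental procedure under the same assumptions as the earlier sketches: match_fn stands in for the movement estimation, candidate_poses generates candidate poses around the previous estimate, and the map object provides a mark_hit method like the occupancy grid sketch above. It is a simplified outline, not the actual map generation software.

```python
import numpy as np

def build_map(scans, occupancy_map, match_fn, candidate_poses):
    """Incremental map-generation sketch.

    For each frame of scan data:
      (1) estimate the sensor pose by matching the frame against the map
          built so far (the first frame is simply recorded at the origin);
      (2) transform the frame into the map frame with that pose and record
          its points into the occupancy map.
    Assumes all transformed points fall inside the allocated grid.
    """
    pose = (0.0, 0.0, 0.0)
    for i, scan_xy in enumerate(scans):
        if i > 0:
            pose = match_fn(scan_xy, occupancy_map, candidate_poses(pose))
        x, y, theta = pose
        c, s = np.cos(theta), np.sin(theta)
        # Coordinate transformation of the scan data into the map frame.
        mx = x + c * scan_xy[:, 0] - s * scan_xy[:, 1]
        my = y + s * scan_xy[:, 0] + c * scan_xy[:, 1]
        for px, py in zip(mx, my):
            occupancy_map.mark_hit(px, py)   # record (map) the point on the bitmap
    return occupancy_map
```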

In actual operation, the surroundings are first measured at the time of vehicle deployment to create a base map, which the autonomous vehicles then use during operation. However, at sites subject to frequent changes during operation, such as the repositioning of pallets, positioning may be affected by inconsistencies between the actual surroundings and the map. For this reason, locations on the map that are especially likely to see environmental changes are designated in advance, and when differences from the base map are detected while an autonomous vehicle is operating, these differences are reflected in the map online so as to prevent positioning errors.
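The idea of restricting online updates to pre-designated regions can be pictured as below: only cells inside the designated masks whose observed occupancy differs sufficiently from the base map are overwritten. The masks and threshold are illustrative assumptions; the actual update logic used by ICHIDAS is not described in this article.

```python
import numpy as np

def update_changeable_regions(base_map, observed_map, region_masks, threshold=0.3):
    """Overwrite the base map only inside designated change-prone regions.

    base_map, observed_map : occupancy probability arrays of the same shape
    region_masks           : boolean arrays marking areas expected to change
                             often (e.g. pallet storage bays)
    threshold              : minimum probability difference treated as a change
    """
    updated = base_map.copy()
    diff = np.abs(observed_map - base_map) > threshold
    for mask in region_masks:
        cells = mask & diff
        updated[cells] = observed_map[cells]
    return updated
```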

    3. Use in Logistics Support Robot Lapi

Figure 3—Logistics Support Robot “Lapi”
The photographs show the plant where Lapi robots are operated (left) and a Lapi robot’s exterior (right). Lapi robots operate without any guides thanks to ICHIDAS.

This section presents an example in which ICHIDAS was installed in Lapi logistics support robots to automate material transportation at a factory (see Figure 3).

Lapi robots are equipped with a drive mechanism based on two independently driven wheels, a laser scanner, ICHIDAS, and other components, and work alongside people in places such as plants where basket carts are widely used. In response to strong demand from operation sites for transportation that does not require loads to be transferred between carts, Lapi robots can tow a wide variety of carts and can detect and automatically connect to carts at empty-cart bays.

Figure 4 shows a map of the factory where the Lapi robots have been deployed, covering an operation site roughly 110 m long. At the time of initial deployment, the Lapi robots were driven around the site by remote control to collect scan data, and the map generation software was then used to automatically produce a map. Using this map, the Lapi robots carried out unmanned transportation at the operation site. This deployment demonstrated that ICHIDAS can be introduced smoothly at operation sites thanks to its guideless operation, and that material transportation in plants is possible even when there are differences between the actual surroundings and the map, such as carts left out for loading or people cutting in front of the robots.

Figure 4—Map Created by ICHIDAS
A map of an approximately 110-m-long plant automatically created with ICHIDAS software. It provides a detailed representation of the site, including the layout of equipment.

    4. Conclusions

This article has described the ICHIDAS position estimation component for autonomous vehicles, its positioning function that works by matching scan data against a map, and the guideless operation that this makes possible.

Hitachi believes that this guideless operation will expand the potential range of applications for autonomous vehicles by eliminating the need to lay down guides such as magnetic tape or other markers. In the future, Hitachi intends to continue adding new functions to suit the ways in which autonomous vehicles are operated.

    REFERENCES

    1)
    S. Thrun et al., “Probabilistic Robotics,” The MIT Press (Aug. 2005).
    2)
K. Matsumoto et al., “Development of Map-building and Pose Estimation Component and Evaluation in Actual Environment,” The Journal of the Institute of Image Information and Television Engineers, Vol. 68, No. 8, pp. J329–J334 (2014) (in Japanese).