KIMURA Nobutaka (Center for Technology Innovation – Digital Technology / Intelligent Information Research Dept. / Senior Researcher)
SAKAI Ryo (Center for Technology Innovation – Digital Technology / Intelligent Information Research Dept.)
Distribution warehouses have seen a dramatic increase in picking operations as e-commerce continues to expand, and there is a pressing need to optimize operations to increase efficiency. Hitachi, Ltd. worked with the University of Edinburgh, U.K. to develop a multiple AI coordination control technology that integrates the control of picking robots and automated guided vehicles (AGV) to smoothly pick up specified products from goods carried by AGVs.
With conventional technology, AGVs were required to stop for 5 seconds for products to be removed ("picked") from the AGV's load. With the new multiple AI coordination control technology, products can be picked without stopping, resulting in a 38% reduction in operation time. The following is an interview with Dr. KIMURA Nobutaka and Mr. SAKAI Ryo, two core members who were involved in the development of this technology.
(Publication: September 6, 2018)
KIMURA: Up until now, Hitachi has responded to the challenge of optimizing the efficiency of distribution warehouses with products such as Racrew, a compact AGV. While this gave us technology for "carrying" goods, we had yet to automate the picking process, where people still had to grab goods with their own hands and arms.
Automating picking is difficult because robot movements are neither flexible nor quick. Robots can repeat a fixed set of tasks, but speeding them up is hard because the varied products handled at distribution warehouses each require different movements.
Until now, the AGV would stop, the robot arm would plan its motion, and only then would it grab the product. Multiple AI coordination control takes this process one step further and enables picking without stopping the AGVs.
A key point of this technology is that we combined three AIs: the AGV control AI, the robot control AI, and the picking method determination AI. Each AI operates autonomously, but the picking method determination AI is particularly important: it makes decisions comprehensively while keeping an eye on the movements of both the robot control AI and the AGV control AI. This picking method determination AI, which acts as the control tower, is the new element introduced with this technology.
The picking method determination AI photographs products inside cases that are carried by AGVs, determines in real time which products to pick and at what speed to pick them, and sends this information to the robot control AI and the AGV control AI. When each control AI autonomously executes these instructions, we realize a high level of coordinated control.
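As a rough illustration, the coordination loop described above can be sketched as follows. All class names, the dictionary-based "image" input, and the decision rule are hypothetical stand-ins for illustration, not Hitachi's actual implementation:

```python
# Hypothetical sketch of the three-AI coordination loop described above.
# Class names and the decision rule are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class PickDecision:
    target_id: int    # which product in the case to pick
    agv_speed: float  # AGV speed during the pick, in m/s

class PickingMethodAI:
    """The 'control tower': decides what to pick and how fast the
    AGV should move, based on an image of the case contents."""
    def decide(self, case_image: dict) -> PickDecision:
        # In the real system this is a trained model; here we stand
        # in a trivial rule purely for illustration.
        upright = case_image.get("upright", False)
        return PickDecision(target_id=case_image.get("best", 0),
                            agv_speed=0.5 if upright else 0.3)

class AGVControlAI:
    def set_speed(self, speed: float):
        self.speed = speed  # command the AGV drive controller

class RobotControlAI:
    def pick(self, target_id: int, agv_speed: float) -> str:
        # plan an arm trajectory that intercepts the moving target
        return f"picking item {target_id} at {agv_speed} m/s"

def coordination_step(image, picker, agv, robot):
    decision = picker.decide(image)        # 1. decide target and speed
    agv.set_speed(decision.agv_speed)      # 2. instruct the AGV control AI
    return robot.pick(decision.target_id,  # 3. instruct the robot control AI
                      decision.agv_speed)
```

The key structural point is that the two control AIs never talk to each other directly; the picking method determination AI observes both and issues instructions to each.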
A process that took 13 seconds with the conventional stop-and-pick method was reduced to about 8 seconds with the new technology. As we are still at the experimental stage, the AGVs move at a modest speed of about 0.5 m per second, but as camera performance and image-recognition accuracy improve, we will be able to pick goods from AGVs moving at higher speeds.
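The two timing figures are consistent with the 38% reduction quoted at the top of the article, as a quick check shows:

```python
# Sanity check on the figures quoted above: the conventional
# stop-and-pick cycle takes 13 s, the new method about 8 s,
# which matches the ~38% reduction in operation time cited earlier.
conventional_s = 13.0
new_s = 8.0
reduction = (conventional_s - new_s) / conventional_s  # 5/13 ≈ 0.385
```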
Even at this stage, we are able to pick one product while the AGV moves, but in the end, we want to realize a system where a robot arm can pick multiple products from moving AGVs.
SAKAI: Like ours, there are many R&D efforts dealing with the task of "grabbing an object and placing it elsewhere." However, with a method where the AGV comes to a full stop before a single product is picked, and the next AGV must also stop before the next pick, replacing the workers with robots would still leave the time losses in place; the system would remain inefficient.
Our multiple AI coordinated control technology is unique in that it picks products while the AGVs move, without losing time. In addition, our AGVs move considerably fast. Moreover, the backend computer that runs the AI can be an ordinary desktop with a GPU costing a few tens of thousands of yen. There is no need for a dedicated server, and the system scales with usage.
KIMURA: Hitachi announced an autonomous mobile dual-arm robot control technology in August 2015. About five months earlier, the British Embassy had set up an event where a large number of researchers from the U.K. came to observe Japan's robotics technology, including Hitachi's. Many robotics professors from universities across the UK visited the Hitachi Research Laboratory, and my research leader, Mr. Takashi Watanabe, explained our research to them. Following this event, we have been in regular contact with professors from the University of Edinburgh who were among those who visited Japan.
The professors from the University of Edinburgh were conducting research on robot control in dynamic environments, that is, control where the surrounding environment is constantly changing: for example, pouring water into a cup that a human is holding, or tracking the smallest movements of a human hand. They had technology that could respond flexibly to changing environments and could plan the optimal movements of a robot arm with a single algorithm, given both a moving element in the environment (for example, an AGV) and an object to be picked.
We thought that if we could utilize this technology well and mix it with our longstanding recognition technology which seeks to understand surrounding situations, we could solve some of the problems currently seen at distribution warehouses. In the end, we were able to integrate the technologies of both the University of Edinburgh and Hitachi, and I take pride in having been a part of creating a system that does not exist anywhere else in the world.
SAKAI: Utilizing the University of Edinburgh's technology, we could plan an optimal route for the robot arm to a specified location, but we could not determine which goods to pick when objects were moving around. It worked with goods that could be picked with ease, but it could not decide which goods to pick if they were disorganized in any way. We developed the necessary recognition technology at Hitachi to solve this issue.
Specifically, we prepared 42,000 images depicting the insides of cases packed in various ways and simulated the picking process several hundred thousand times on the corresponding 3D models. Pairing the simulation results with the images, we generated training data and used it to establish the recognition process through deep learning.
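The data-generation loop described above, simulating picks on 3D models of packed cases and labeling the corresponding images, might look like the following sketch. The simulator, the grasp representation, and the success threshold are all hypothetical stand-ins, not the actual Hitachi pipeline:

```python
# Hypothetical sketch of generating training data by simulating picks
# on 3D case models, as described above. The fake "simulator" below is
# a stand-in for a real physics simulation.
import random

def simulate_pick(case_model, grasp_point, trials=10):
    """Stand-in physics simulation: returns the fraction of trials
    in which the simulated grasp succeeded."""
    rng = random.Random((str(case_model), grasp_point).__hash__())
    return sum(rng.random() < 0.6 for _ in range(trials)) / trials

def build_training_set(images_and_models, grasp_candidates):
    """Pair each case image with simulated success labels for every
    candidate grasp point, producing (image, grasp, label) examples
    suitable for training a deep-learning recognizer."""
    dataset = []
    for image, model in images_and_models:
        for grasp in grasp_candidates:
            success_rate = simulate_pick(model, grasp)
            # label the grasp "pickable" if it succeeds often enough
            dataset.append((image, grasp, success_rate >= 0.5))
    return dataset
```

With roughly a dozen objects per image and 42,000 images, this kind of loop is how a few hundred thousand labeled examples accumulate from a comparatively small set of photographs.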
By pairing this technology with that of the University of Edinburgh, we created a system that can decide which objects to pick from among items in complex, moving environments. Moreover, upon repeating experiments at two speeds, 0.3 m per second and 0.5 m per second, the AI learned to move the AGV at 0.5 m per second when picking objects that are upright and close, and at 0.3 m per second when picking objects that are tilted or otherwise difficult to pick.
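The learned behavior described above amounts to a mapping from object state to AGV speed. A hand-coded stand-in for that policy (the real system learns it from data rather than using fixed rules like this) could look like:

```python
def choose_agv_speed(upright: bool, close: bool) -> float:
    """Hand-coded stand-in for the learned speed policy described
    above: fast for upright, nearby objects; slow for tilted or
    otherwise hard-to-pick ones. The real policy is learned."""
    if upright and close:
        return 0.5  # m/s: easy target, keep the AGV moving quickly
    return 0.3      # m/s: slow the AGV down for difficult targets
```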
In collaborating with the University of Edinburgh, we often deepened our understanding of their technology by reading papers written by the Ph.D. students and postdocs who studied directly under the aforementioned professors. They, in turn, were knowledgeable about machine learning, and once we each understood the other's technologies, we discussed what type of system to create and continued communicating after we had returned home.
However, technologies developed by universities are academic in nature and cannot be dropped straight into prototype systems for industrial applications. We communicated our requests, and they kept making improvements. When we made a request, they would respond immediately: "We hadn't thought of that at all. We will fix it right away, so please wait." When the improved versions were sent to us, we verified and integrated them, and by repeating this process we developed the system.
KIMURA: For days on end, we pondered how to make this research groundbreaking. Research had already been done on simply picking moving objects. In mulling over how to distinguish our research from this, and how to create value, we hit upon the idea of having the machine learn both robot arm movements and appropriate AGV speeds. This idea became the crux of multiple AI coordination control technology.
While we were reviewing the results of the operation plan after incorporating the University of Edinburgh's technology, and discussing why the arm could not pick a certain item, Mr. Sakai asked, "What would happen if we decreased the speed of the AGV?" This led to a breakthrough. When we ran the experiment again with the AGV slowed down, we confirmed that the picking success rate changed. From this, we began to suspect an underlying principle: to pick items in certain states, the AGV must move at around a given speed.
SAKAI: Throughout the development process, two things in particular proved difficult. The first was that objects move inside the cases, including the object to be picked. Related research has explored various methods of picking objects, but it assumes the objects are stationary and considers only the tip of the robot arm. Moving objects, however, become obstacles for the robot, and even with the tip correctly positioned, the targeted object may not be picked if the arm bumps into its surroundings. We therefore needed new technology for learning how to pick properly, that is, technology that lets the robot arm compute a trajectory plan it can reliably execute amid moving obstacles.
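A minimal sketch of the kind of check such planning involves is validating a candidate arm trajectory against obstacle positions predicted at each timestep. The geometry here is deliberately simplified to 2D points and a clearance distance; a real planner works in the arm's full configuration space:

```python
# Simplified illustration of checking an arm trajectory against
# moving obstacles, as described above. 2D points stand in for the
# arm's actual geometry; the clearance value is an assumption.
import math

def collides(p, q, clearance=0.05):
    """True if two 2D points come within the clearance distance (m)."""
    return math.dist(p, q) < clearance

def trajectory_is_safe(arm_waypoints, obstacle_paths, clearance=0.05):
    """Check a time-indexed arm trajectory against obstacles whose
    positions are predicted at the same timesteps. arm_waypoints and
    each obstacle path are lists of (x, y) points, one per timestep."""
    for t, arm_pos in enumerate(arm_waypoints):
        for path in obstacle_paths:
            if collides(arm_pos, path[t], clearance):
                return False  # arm would bump an obstacle at time t
    return True
```

The point the interview makes is that checking only the final tip position is not enough: the whole trajectory, over time, must stay clear of objects that are themselves moving.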
Multiple AI coordination control technology is about realizing something that is easy for humans but difficult for robots. To find grasp points that allow picking to go smoothly, machine learning needed massive quantities of training data covering the various items to be picked. The second problem was how to gather this data. In academic research, performance can be evaluated using only public datasets, but for technology destined for the field, we had to work out how to collect the data ourselves.
We also had no references on how to create such a dataset, so we were often groping blindly for improvements. While we prepared 42,000 images for this experiment, each image contained around a dozen objects inside the case, so we ended up with several hundred thousand data points, an enormous quantity. To get there, we had to photograph the cases over and over, swapping out the objects each time.
Even if these robots let us automate a single job, we would just be creating more work if we could not reduce the effort of dataset creation. There should be a way to automate the creation of training data, simplifying and shortening the process.
KIMURA: My university major was mechanical engineering, and I did research in a lab that focused on the sounds objects make. Specifically, I analyzed why brake squeal (the high-pitched noise that occurs when you brake on a motorcycle) occurs, but after graduating I hoped to switch fields and become involved in new technologies and products.
Since joining Hitachi, I have been in departments related to robots. At first I researched autonomous mobile robots, and building on that, around 2009 I started the productization of conveyance robots for factories. In 2014 we productized Racrew, which carries shelves, and I felt we had more or less succeeded at carrying objects. Around that time I set myself the goal of creating a robot with an arm that can complete a task, more precisely, a robot that can grab objects. In 2015 we developed a mobile dual-arm robot that works inside warehouses and published a press release.
Broadly speaking, the technology developed on this occasion falls under environment recognition: research in which a robot observes its surroundings with sensors and recognizes objects through cameras, one of the main fields Hitachi has focused on within robotics. By combining it with AI research, in which machine learning draws answers from massive quantities of data, we found the fertile ground from which multiple AI coordination control technology emerged.
While this experiment presupposed distribution warehouses, we hope to increase the domains where industrial robots that think and act intelligently can flourish, even in industrial processes such as assembling, cutting and drilling.
SAKAI: Ever since I was a child I have been interested in intelligent machines, and within robotics I studied biomimetic robots, specifically bipedal robots, at university. "Mimetic" means "to copy" or "to imitate"; these robots are designed based on hints gathered from living organisms.
In 2016, I was placed in the Intelligent Information Research Department and have been working to realize technology that allows robots to grab objects. We believe that we can make robots more intelligent through machine learning, and for this technology I was in charge of developing machine learning techniques.
At present, the domains where robotics can be applied are still limited, which also means results come relatively easily within them. Once we accomplish automation in those domains, I hope to create likable humanoid robots that ordinary people can enjoy, and robots that can move flexibly.
KIMURA: While the ultimate goal of R&D in robotics is to create robots similar to humans, there are several approaches to working toward it. For example, even if we could create a robot that looks exactly like a human, or one that can communicate like a human, that robot may not be able to accomplish any tasks.
On the other hand, though they may not look anything like humans, there are robots that can do many tasks such as dish washing, and this is one aspect of robotics. My hope is to be involved in increasing the accuracy and breadth of tasks that "working robots" can do to the level of humans.
SAKAI: My university colleagues have moved on to fields ranging from heavy industry to healthcare, so AI technology and knowledge can be developed in a wide variety of fields. Globally, industrial robot technology is advancing rapidly, while progress in the biology and biological robotics I studied at university is relatively slow. However, by incorporating machine learning into that field, I believe we could create new muscle models and even actuators based on them.
KIMURA: As a team leader, I try to visit various places in the company, and at times I join other projects as a member. This gives me many opportunities to meet counterparts from other research centers and exchange opinions profitably. Sometimes we run into issues that other researchers had already tackled in their university years. We have built give-and-take relationships with teams at various centers, offering certain software in exchange for things we lack.
SAKAI: I often attend external study groups and connect with people from other industries who share my interests. Some discuss technologies over social media, and sometimes the conversation reaches a point where I end up saying, "If that's what you're struggling with, do you want me to help out?" Conversely, there are people who tell me about interesting technologies I had not yet heard of. At present I feel I am in a position where new ideas come easily, as I can gather information from outside the company and from the internet.