Hitachi Review

Toward a Future of Living with AI

AI Ethics for “Trust”

In recent years, innovations have been made in various fields to cope with the social changes and challenges that have emerged on a global scale, such as the COVID-19 pandemic and climate change. Among these innovations, the use of AI is expected to bring about major breakthroughs. How will our world change as rapidly evolving AI spreads to every corner of our society and lives? What ethics, and what approaches based on them, are essential for reducing the risks posed by AI and for utilizing it while maintaining safety and security? Kay Firth-Butterfield, Head of Artificial Intelligence at the World Economic Forum, and Norihiro Suzuki, General Manager of the Research & Development Group of Hitachi, Ltd., discuss how to strengthen governance when introducing new technologies, under the keyword of "trust."

Increasing Expectations for Breakthroughs in AI

Kay Firth-Butterfield
Head of Artificial Intelligence, World Economic Forum
One of the world's foremost experts on the governance of AI, a barrister, former judge, and professor, Kay serves on the Lord Chief Justice's Advisory Panel on AI and Law and is a co-founder of AI Global. She serves on the Advisory Boards of the United Nations Educational, Scientific and Cultural Organization (UNESCO) International Research Centre on AI and of AI4All. She holds advanced degrees in law and international relations and regularly speaks to international audiences on many aspects of the beneficial and challenging technical, economic, and social changes arising from the use of AI. She has also been featured in the New York Times as one of 10 Women Changing the Landscape of Leadership.

These days, the use of artificial intelligence (AI) is becoming more widespread in society against a backdrop of global uncertainty and societal challenges such as COVID-19 and climate change. What are your expectations for expanding the use of AI?

Firth-Butterfield: My expectation of AI is that, if we deploy it responsibly, we are going to be able to tackle some of the world's major challenges with it. But what we are seeing is that increasing numbers of people are actually quite worried about AI. A European Parliament study showed that 87% of people in Europe were actually quite afraid of AI, and a similar study done in the USA put the figure at 78%. That is still a large number of people that we need to take with us on our AI journey. I think the uncertainty around governance is playing out in the number of companies deploying AI in more than one vertical; I do not think we are seeing the uptake of AI that we had perhaps expected to see. There are a number of issues around that we could discuss. But I also often find myself saying that AI is not a magic wand. We should not think that we are going to be able to solve all of our problems with AI and therefore stop looking for alternative solutions.

Suzuki: Hitachi's Social Innovation Business aims to improve three values: social value, environmental value, and economic value, realizing Society 5.0 and raising both people's quality of life (QoL) and the corporate value of our customers. Hitachi is using digital technology to transform society and industry into more intelligent systems through open innovation with customers and partners in both the operational technology (OT) and IT domains. In recent years, extremely large amounts of data have become available, and AI can use this data to support human decision-making and to transform business and society. As a result, the importance of AI as a source of innovation has been increasing.

AI is a core technology of Lumada, Hitachi's platform, which is essential for collaborative creation to resolve challenges in society. The industrial application of AI is expanding into a wide range of fields, such as operation management in the railway sector, optimization of power transmission and distribution planning in the energy sector, equipment maintenance in the medical and manufacturing sectors, and loan assessment in the financial sector. In fact, we have worked on several hundred projects that use AI to resolve such issues. The challenges that society needs to address, such as environmental issues, improving societal resilience, and enhancing QoL in the face of diversifying values, are becoming more sophisticated and complex, and AI will play a significant role in solving them. In the future, AI will be used across a wide range of areas of society. Hitachi, in particular, sees the environment, resilience, and safety and security as its business domains, and AI will be a driver of change for business and society in these domains as well.

Risks Posed by AI, and the Establishment of a New Ethics

Norihiro Suzuki
Vice President and Executive Officer, CTO, General Manager of Research & Development Group, and General Manager of Corporate Venturing Office, Hitachi, Ltd.
Joined Hitachi in 1986 after graduating with a master’s degree from the School of Engineering at the University of Tokyo. After working on research and development in fields such as digital image processing and embedded systems, he was appointed Senior Vice President and CTO of Hitachi America, Ltd. in 2012, General Manager of the Central Research Laboratory in 2014, and General Manager of the Global Center for Social Innovation in the Research & Development Group in 2015. He was appointed to his current position in 2016. Ph.D. in Engineering. He is a member of the Institute of Image Information and Television Engineers and the Institute of Electronics, Information and Communication Engineers, and a senior member of IEEE.

There are high expectations for AI but, on the other hand, its negative aspects are also coming into focus. What are your thoughts on this?

Firth-Butterfield: I think it was Japan that was actually the first country to come up with AI ethical principles and, since then, we have seen countries and companies all around the world following suit. What we do know is that about ten of those principles are common to all countries. In other words, I think these ten items are extremely important from the perspective of the responsible deployment and the beneficial and fair use of AI. We think diversity and inclusion need to be considered when developing algorithms, and, as you mentioned, robustness, safety, and security are also important.

AI is going to be everywhere. As I often say, every company will have to be an AI company to sustain itself.

In recent years, one of the areas in which we at the World Economic Forum have been working is the use of AI to support older adults, and for them it is very important that we ensure AI is safe and protects their dignity as human beings. We have also been working on the "Smart Toy Awards" as an approach to governing the use of AI for children, who are also socially vulnerable. I think we need to capture the benefits that AI brings while solving the problems it poses.

Suzuki: Hitachi wants to be an AI company. Many of Hitachi's Social Innovation Businesses are related to societal infrastructure and public services, where abnormal behavior of AI or malicious acts from outside can have a serious impact on society as a whole, and on human lives. In addition, if AI develops incorrectly, it could promote discrimination, prejudice, and disparity. To realize a fair and trustworthy society, it is essential to correctly understand how AI behaves and the risks it entails.

Historically, Hitachi has had a long association with social infrastructure and public services. Even before AI began to be widely used, we were highly sensitive to the impact of technology on society and have been providing ethics education for engineers. However, with new technologies such as AI and machine learning, it will be necessary to understand and adapt to their particular characteristics. The evolution and application of AI is very fast, and the speed of change in society will increase accordingly. In fact, we are beginning to see examples of drastic changes in the nature of society, such as automated trading in finance and the use of social media to influence public opinion. So we need to think carefully about the development and application of AI to ensure that society does not move in the wrong direction.

What approaches are needed to reduce the risks of AI and to ensure its safe and secure use?

Firth-Butterfield: At the World Economic Forum, we try to make sure that we create those approaches in a multi-stakeholder way. We have to help governments spread accurate awareness of the merits and demerits of utilizing AI, as well as of its safe and secure uses.

Deploying AI in your company is a company-wide effort, so you have to think about your organizational structure in order to deploy AI successfully and wisely. You have to think about any products in which you are actually using AI. In AI research, you have to think about how to create diverse teams, and about how those products are actually going to be used. You also need education and training around AI for your employees, to make them understand that it is not taking their jobs but helping them do their jobs.

In the nonprofit sector, we are increasingly seeing AI for good, with foundations putting large amounts of money into how well we could use AI in education or in healthcare. We have been doing some work in India, where we have been trying to use a chatbot to provide triage in a country where there are 27,000 people for every doctor. Chatbots must be deployed responsibly, so we brought medical ethics people together with AI ethics people to create a framework, and I am pleased to say that it is now being used in countries with a shortage of doctors.

Suzuki: Various government research institutions and other organizations have issued guidelines, and Hitachi, referring to these, is working on initiatives for AI ethics and governance in several ways. First, dialogue and consensus-building with stakeholders in society. Second, educating personnel to develop and use AI correctly. Third, internal mechanisms to analyze, evaluate, and manage the risks of AI-related research and business applications, and to assure the quality of AI-based services and products. And lastly, grounding all of this in Hitachi's basic philosophy: its mission to contribute to society through the development of superior, original technology and products. Hitachi believes that ensuring AI ethics is of great importance in the research and development of AI as a processing and management technology, and is actively pursuing this.

For this, it is necessary to establish corporate policies for AI development and utilization, which are the basis for all of these initiatives, and to ensure awareness of them throughout the company. Hitachi has therefore established its own principles guiding the ethical use of AI, tailored to Hitachi's Social Innovation Business, and has made them available to the public. They are characterized by setting out standards of conduct for three stages, planning, societal implementation, and maintenance and management, with a view to applying AI to societal and industrial infrastructure.

In terms of education, the Lumada Data Science Laboratory, a core organization that uses AI and analytics to accelerate digital innovation through Lumada, was established to gather top data science talent and sharpen knowledge and skills in AI and data science. Hitachi is now training 3,000 data scientists, and is also conducting in-house education on AI ethics, in the form of discussion groups and classroom lectures, for a wide range of positions and departments. With regard to risk management, a checklist for risk assessment was formulated based on Hitachi's principles guiding the ethical use of AI. Risk management using this checklist is conducted when starting research, at the time of customer acceptance, and at each phase of a proof of concept (PoC). We also established an AI Ethics Advisory Board consisting of external experts to provide advice, and guidelines for development and certification based on the unique characteristics of AI.

Governance that Instills “Trust” in New Technologies

It is becoming a common understanding around the world that technology governance needs to be strengthened in order to implement new technology in society. What kinds of initiatives are necessary for this?

Firth-Butterfield: Different areas of the world are approaching this in different ways. In Europe, for example, they have just issued the AI Act, proposed regulations for AI organized on a risk basis. One example of a high-risk activity under the AI Act is the use of facial recognition; the use of AI to enable gas to flow more appropriately would not be a high-risk activity. A whole system is burgeoning out there to address the utilization of AI depending on different risks. I have always been concerned about legislation for AI: the deployment of AI across different sectors is moving very fast, yet it takes a long time for governments to legislate. I think the Europeans have done it very sensibly by creating categories and then thinking about certification. In other countries we are not seeing anything like legislation, and the approach is much more laissez-faire. And yet even in the USA, where the mantra is that governance and legislation impede innovation, we are beginning to see the start of people thinking about legislation, particularly around procurement throughout the federal government: the AI caucus of the Senate is just starting to talk to us about the procurement of AI, and we have been talking with the Department of Defense about its AI procurement strategy. The Equal Employment Opportunity Commission in the USA is also looking very carefully at the deployment of AI in talent and human resources management.

Suzuki: I believe it is right to view technology and governance as the two wheels of a vehicle. When a governance system is in place, trust in technology is created and the technology can be used with peace of mind. I believe that proper consideration of governance from the design stage will accelerate the application of technology and innovation to society.

In terms of technology governance, we need to think not only about privacy, safety, and security, but also about transparency, inclusiveness, and accountability. Also, AI technologies such as machine learning learn from data, so we need to be careful to ensure that fairness is not undermined by data bias. Hitachi's principles guiding the ethical use of AI list seven points: (1) safety; (2) privacy; (3) fairness, equality, and prevention of discrimination; (4) proper and responsible development and use; (5) transparency, explainability, and accountability; (6) security; and (7) compliance. Building technology governance requires the perspectives of multiple stakeholders, and maintaining it requires the flexibility and agility to review it constantly. Such practices need to be incorporated into the corporate framework.

Firth-Butterfield: One thing I am seeing is environmental, social, and governance (ESG) mechanisms starting to include the AI ethics work that we are doing. We have had investors and venture capital (VC) companies ask, "What should we look for when we want to invest in an ethical AI company or startup?" I think governance is a big frame that we all have to think about in our different ways, but I do see those pressures, particularly from investors, on public companies in the future.

Now, trust is again attracting attention as a mechanism for making society function smoothly. What do you think about trust in technology?

Suzuki: In April 2021, I joined the Global Technology Governance Summit and spoke about trust in technology as a panelist. The Fourth Industrial Revolution, especially the introduction of digital technology, has dramatically changed society. Unfortunately, uncertainty has increased, the ways in which we built up trust in the past are becoming dysfunctional, and we are led to seek new ways to build trust. The digital transformation (DX) of recent years has been more rapid than in the past, and the trustworthiness of technology is becoming increasingly relevant to the achievement of the Sustainable Development Goals (SDGs), including human rights and the environment. For Hitachi, ensuring the trustworthiness of technology is the heart of our business. In collaboration with the World Economic Forum Centre for the Fourth Industrial Revolution (C4IR) and the Ministry of Economy, Trade and Industry (METI), Hitachi has published a white paper on a trust governance framework, the purpose of which is to provide a basis for discussion on a new way of building trust through governance. The white paper proposes that trust-building requires accumulating objective facts and evidence that technology is trustworthy, making them available for anyone to verify, and expanding public awareness.

As a tech company, Hitachi is striving not only to develop and use AI correctly, but also to build trust in Hitachi's AI by publishing its guiding principles and disclosing the fact that its AI is implemented according to these ethical guidelines.

Firth-Butterfield: Trust in technology is a major issue wherever you live in the world. And we are seeing that technology is often driving people further apart from one another, because of what they trust in it. We want people to trust technology that is actually trustworthy. We do not want them to trust some of the things that are said on social media, because some of that is spreading disinformation. It is hard for us to have conversations with the general public about technology.

I think what we are all struggling with in terms of trust is that people tend to automatically trust computers or machines. We are not having a good enough conversation about technology to unseat some of the theories that people have built up about AI. If a judge in a bail hearing decides to trust what the computer is telling him or her, or if a bank decides whether to provide a loan based on an algorithm, there will be no problems as long as the algorithm has been designed in a trustworthy fashion. What is important to achieve this is AI ethics that can be trusted. Seeing that the companies designing AI are actually doing it with that ethical lens, and are making the things we use trustworthy, will go an enormous way.

The Key Is Co-creation with Stakeholders

AI ethics and technology governance are not matters that concern a single organization alone; we believe collaboration among various stakeholders is very important. Please share your thoughts on this.

Suzuki: The speed of technological evolution is so fast that the traditional way of governance, where the government establishes rules and we follow them, is no longer sufficient. Various stakeholders need to gather their knowledge, constantly check whether governance is working, and keep updating it to make it more effective. There is no doubt that tech companies such as Hitachi will play an important role, but it is difficult for tech companies to build trust alone. It is extremely important that government agencies, third-party organizations with neutral academic and scientific perspectives, non-expert citizens, and their collective communities also participate in governance.

Hitachi will continue to participate in World Economic Forum activities through regular meetings and workshops, and to pursue collaboration among multiple stakeholders through international cooperation.

Firth-Butterfield: We are thinking in the same way. One of the reasons I went to the World Economic Forum to set up the AI team at C4IR back in 2017 was that nowhere else was I seeing multi-stakeholder collaboration in the way the Forum could do it. Because we have the pleasure of business partners like yourselves, we have many ongoing relationships with governments, academia, and the civil society sector. That is so important for us: to be able to capture it on paper and then disperse it around the world for companies that are not as far along on the journey.

I would say you need to adopt the responsible use of technology work that we have been doing, and to create a framework, for example a chief AI ethics officer position. There might be many people in the C-suite who see this as an important matter to address in the process of developing a company into an AI company, and I believe you actually do have some technologists on the board. Everything we have discussed today is just the sort of thing I would want to see from a for-profit company like yourselves. One of the things we talk about at the Forum is lighthouse factories, advanced factories that adopt leading-edge technologies including AI; I think maybe I should take away an idea for lighthouse work on ethical AI.

Suzuki: I too am keenly aware of the strong need for responsible AI. As a company engaged in Social Innovation Business, Hitachi is committed to providing the world with responsible AI in terms of human resources, systems, and technology, and to building a better future society. The goal is to create a world where people trust AI and society is able to entrust AI with a wide variety of tasks. If AI is developed and utilized correctly, entrusting tasks to it may make society more transparent and fairer. The AI demanded by society will differ depending on usage and domain: in some cases we may rely heavily on AI, while in other situations tasks should not be left to AI alone but should keep humans in the loop. We will be able to increase the value of AI by continuing to hold onto the vision of humans and AI coexisting. Thank you for your time today.
