
    Highlight

    Products and services that involve the use of personal data bring benefits such as improved customer convenience, a safer and more secure society, and infection control suited to lifestyles in the post-COVID-19 era. On the other hand, depending on what data is acquired and how it is analyzed, the same data can also lead to personal privacy breaches and ethical issues. To address these issues, it is important that a specialized organization participates from the development stage of a product or service, identifying the risks that are likely to occur and implementing countermeasures. This article describes Hitachi’s initiatives on privacy protection and AI ethics as risk management initiatives in its digital businesses.


    Author introduction

    Tatsuki Yoshizumi

    • Hitachi Security Technical Center, Cyber Security Technology Operations, IoT & Cloud Services Business Division, Services & Platforms Business Unit, Hitachi, Ltd. Current work and research: Planning and management of privacy protection.

    Seiya Kudoh, Ph.D.

    • Hitachi Security Technical Center, Cyber Security Technology Operations, IoT & Cloud Services Business Division, Services & Platforms Business Unit, Hitachi, Ltd. Current work and research: Planning and management of privacy protection.

    Fumito Iwashita

    • Artificial Intelligence Business Development, Lumada CoE, Services & Platforms Business Unit, Hitachi, Ltd. Current work and research: Process design of AI business framework for internal use.

    Keita Chida

    • Smart Infrastructure Consulting Division (2nd), Hitachi Consulting Co., Ltd. Current work and research: Consulting work in privacy protection and data anonymization.

    Fumiaki Satake

    • Smart Infrastructure Consulting Division (2nd), Hitachi Consulting Co., Ltd. Current work and research: Consulting work in privacy protection and AI ethics.

    Yasuhiro Miyazawa

    • Hitachi Security Technical Center, Cyber Security Technology Operations, IoT & Cloud Services Business Division, Services & Platforms Business Unit, Hitachi, Ltd. Current work and research: Planning and management of privacy protection.

    Introduction

    Japan is striving to realize a human-centered society called “Society 5.0,” which will achieve both economic development and the resolution of social issues through a cyber-physical system (CPS) that integrates cyberspace and physical space using technologies such as the Internet of Things (IoT), big data, artificial intelligence (AI), and fifth-generation (5G) communications.

    To realize Society 5.0, Hitachi is also aiming to create new value by utilizing data in accordance with the Lumada concept, and the scope of this data utilization is continuing to expand. However, while digital businesses involving data usage are growing dramatically, the risks posed by the use of personal data and AI data analysis are becoming more apparent.

    For example, the use of personal data entails a risk of privacy breaches. In addition, when AI is used to analyze data, ethical risks arise from the characteristics of AI, such as the fact that processing is automated without human intervention and that the processing can become a black box. Addressing these risks has become increasingly important when promoting digital businesses.

    Managing Risks in Data Utilization in Digital Businesses

    Digitalization has led to the emergence of new risks that were previously unforeseen. For example, businesses using personal data are expanding due to the increase in the types and amount of data that can be handled. Personal data is “information about an individual,” which includes both personal information and privacy-related information. The Act on the Protection of Personal Information of Japan regulates the handling of personal information; however, there are no clear legal provisions in Japan for privacy-related information. Furthermore, data in a gray zone, where the boundary with personal information is ambiguous, is also emerging. Businesses using personal data therefore face privacy risks, in that inadequate consideration may be paid to the privacy of the data subject. Meanwhile, as AI has become capable of making classifications and judgments and is being deployed in society, there are also ethical risks that AI may discriminate against individuals by presenting biased results. In promoting digital businesses, it is necessary to deal with diverse and complex risks spanning legal compliance, information security, privacy, and ethics.

    Hitachi has long had internal rules for dealing with each of these risks, and departments overseeing each of them. In implementing digital businesses, however, it is necessary to address new, previously unforeseen risks such as privacy and AI ethics, and it is difficult for a single department to address such diverse and complex risks by itself.

    For digital businesses that utilize data in ways that raise privacy or AI ethics concerns, a specialized organization works as a center of excellence (CoE) in collaboration with various departments, accumulating knowledge and expertise and supporting flexible risk responses as a risk management initiative that enables smooth business operations with due consideration of risks. As well as preparing the basic rules and regulations, the CoE works alongside the business units to provide flexible support for projects judged to be high risk. The CoE also reviews its actions when necessary, taking into account the external environment and other factors.

    The following two sections present specific examples of risk management initiatives for Hitachi’s digital businesses.

    Actions for Privacy Protection

    Privacy protection initiatives have been conducted since FY2014 with the aim of preventing personal privacy breaches and other issues arising from businesses that handle personal data, and they have two features: business support by a specialized organization and continuous improvement of governance.

    The business support by a specialized organization is provided by the privacy protection advisory committee (referred to as the “advisory committee” below). Under Hitachi’s privacy protection system, the business units responsible for developing a product or service take the lead in implementing privacy protection measures, and all businesses that handle personal data, the general term for data related to individuals, are subject to examination under these measures. Even information that is not personal information capable of easily identifying a specific individual includes a great deal of information whose use individuals may perceive as a privacy breach, and to take this into proper consideration, Hitachi’s privacy protection system broadens its scope to personal data. The relationship among the various types of data is shown in Figure 1.
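    The following is a minimal, purely illustrative sketch (not part of Hitachi’s published materials) of how this broader personal data scope could be represented when screening a project’s data items; the field names and classification rules are assumptions made only for illustration.

```python
# Hypothetical sketch: classifying data items under the broad "personal data" scope.
# Category names follow Figure 1; field names and rules are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class DataItem:
    name: str
    identifies_individual: bool   # can easily identify a specific individual
    privacy_sensitive: bool       # individuals may perceive its use as intrusive


def classify(item: DataItem) -> set[str]:
    """Return the categories an item belongs to (it may belong to several)."""
    categories = set()
    if item.identifies_individual:
        categories.add("personal information")        # regulated by the Act
    if item.privacy_sensitive:
        categories.add("privacy-related information")  # no clear legal provisions
    if categories:
        categories.add("personal data")               # broad scope used for review
    return categories


# Example: location history alone may not identify a person,
# but it still falls within the scope of a privacy review.
print(classify(DataItem("location history", identifies_individual=False, privacy_sensitive=True)))
```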

    The advisory committee is an organization designed to assist business units in implementing privacy protection measures, and is composed of members with advanced knowledge in areas such as personal information management, legal affairs, procurement, and consulting. Also, members of the executive office serve as a direct point of contact with the business units and are responsible for support and other practical matters.

    One type of support is a risk assessment for considering privacy protection measures. As a CoE, the advisory committee has accumulated knowledge by studying cases inside and outside the Hitachi Group as well as domestic and foreign legal systems regarding privacy protection. It conducts risk assessments based on this knowledge. Also, as part of the accumulation of knowledge, the advisory committee also conducts consumer opinion surveys. The most recent survey was conducted in October 2020(1), and it provided insights into attitudes toward the use of personal data to prevent the spread of COVID-19.

    Risk assessments are usually conducted in response to a request from a business unit. After hearing about the personal data to be acquired by the product or service planned for development and the analyses to be performed, members of the advisory committee identify possible risks and provide advice on the privacy measures to be taken. They report the details of the risk assessment to the advisory committee meeting, which is held approximately once a month, and the results of the committee’s discussion are fed back to the business unit as necessary (see Figure 2).

    Figure 1 — Relationship between Personal Data, Privacy-related Information, and Personal Information: Since personal information and privacy-related information do not necessarily coincide, Hitachi tries to consider privacy protection in the broader framework of personal data.

    Figure 2 — Establishment of the Privacy Protection Advisory Committee: Hitachi believes that an effective approach is for the privacy protection advisory committee, which has expert knowledge, to use this knowledge to support business units in their risk management.
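    As a hypothetical illustration only, the sketch below models the lifecycle of a risk assessment as described above, from the business unit’s request through committee feedback; the stage names mirror the text, while the data structures and example values are assumptions.

```python
# Hypothetical sketch of the risk assessment workflow described above.
# Stage names mirror the text; the data structures and example are illustrative.
from dataclasses import dataclass, field
from enum import Enum, auto


class Stage(Enum):
    REQUESTED = auto()          # business unit requests an assessment
    HEARING = auto()            # acquired data and planned analyses are reviewed
    RISKS_IDENTIFIED = auto()   # possible privacy risks and advice are recorded
    COMMITTEE_REVIEW = auto()   # reported to the monthly advisory committee meeting
    FEEDBACK_SENT = auto()      # discussion results fed back to the business unit


@dataclass
class Assessment:
    business_unit: str
    personal_data: list[str]
    analyses: list[str]
    risks: list[str] = field(default_factory=list)
    advice: list[str] = field(default_factory=list)
    stage: Stage = Stage.REQUESTED

    def advance(self, next_stage: Stage) -> None:
        """Move the assessment to the next stage of the workflow."""
        self.stage = next_stage


# Example lifecycle (illustrative values only)
a = Assessment("mobility service unit", ["location history"], ["commuting-pattern analysis"])
a.advance(Stage.HEARING)
a.risks.append("Movement patterns may reveal home and workplace locations")
a.advance(Stage.RISKS_IDENTIFIED)
```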

    Continuous improvement in governance is the responsibility of company executives and the advisory committee. This initiative is being implemented with the active involvement of the Chief Information Officer (CIO), Chief Strategy Officer (CSO), and other executives. The executives hold regular discussions with the advisory committee about the adoption of the system by many business units and the medium- to long-term vision for privacy protection measures, and the advisory committee formulates plans and actions to put the discussed content into practice. Specifically, in addition to the above-mentioned risk assessments for business units, this includes the planning and implementation of an educational course aimed at improving understanding of the system and boosting awareness of privacy protection. The course is held four times a year to give more people the opportunity to take it, and members of the executive office prepare the textbooks and serve as instructors. In addition to risk assessments and educational courses, various actions are performed to maintain governance, such as periodic reviews of the system and maintenance of reference documents, and these specific actions are reviewed based on the knowledge accumulated as a CoE. The advisory committee sets key performance indicators (KPIs), evaluates its own actions against them, reports the results to company executives on a regular basis, and considers the specific actions for the next and subsequent years.

    This initiative, which entered its eighth year in 2021, is becoming increasingly important to Hitachi against the backdrop of the growing number of Lumada businesses that handle personal data and growing social concern about personal privacy. The number of cases where support such as risk assessments was provided to business units has increased every year. This initiative was also presented in the “Guidebook on Corporate Governance for Privacy in Digital Transformation (DX)” published by the Ministry of Internal Affairs and Communications and the Ministry of Economy, Trade and Industry in August 2020, contributing to the social implementation of privacy protection.

    AI Ethics Initiatives

    In February 2021, Hitachi published the “Principles guiding the ethical use of AI in Social Innovation Business” (referred to as the “AI Ethical Principles” below). In this document, Hitachi sets forth standards of conduct for the three stages of the AI lifecycle (planning, social implementation, and maintenance and management), as well as seven items common to all of these stages, and declares that Hitachi will utilize AI based on these principles. To put this into practice, Hitachi is launching an initiative to comply with the AI Ethical Principles starting in 2021. Like the privacy protection initiative, this initiative has two features: business support by a specialized organization and continuous improvement in governance.

    As business support, an AI ethics professional team was established in the Lumada Data Science Lab. (LDSL), Hitachi’s core organization for AI and analytics, and since 2021 it has conducted risk assessments of operations that utilize AI, mainly within the LDSL. The team was built as a CoE for AI ethics at Hitachi and consists of data scientists and privacy protection experts.

    The AI ethics professional team has created a checklist corresponding to the items set forth in the AI Ethical Principles, such as safety, fairness, transparency, explainability, and accountability, so that risks in operations that use AI can be identified efficiently. Based on the information in the checklist, the team works to ensure compliance with the AI Ethical Principles, discussing and advising on risk countermeasures where necessary. The team has accumulated knowledge on AI ethics through the preparation of the AI Ethical Principles and its research on domestic and foreign trends and issues, and has a system in place to continuously enhance this knowledge through AI ethics risk assessments and support for the necessary risk countermeasures.
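    As a purely illustrative sketch (the actual checklist is internal to Hitachi and only some of its items are named here), a checklist keyed to the AI Ethical Principles items could be represented as follows; the questions and helper function are assumptions made for illustration.

```python
# Hypothetical sketch of a checklist keyed to AI Ethical Principles items.
# Only items named in the text are listed; entries and wording are illustrative.
from dataclasses import dataclass

PRINCIPLE_ITEMS = ["safety", "fairness", "transparency", "explainability", "accountability"]


@dataclass
class ChecklistEntry:
    item: str                   # AI Ethical Principles item the question relates to
    question: str               # prompt used to surface a potential risk
    risk_identified: bool = False
    countermeasure: str = ""    # advice or mitigation agreed with the project team


def open_items(entries: list[ChecklistEntry]) -> list[ChecklistEntry]:
    """Entries where a risk was identified but no countermeasure is recorded yet."""
    return [e for e in entries if e.risk_identified and not e.countermeasure]


# Example usage (illustrative values only)
checklist = [
    ChecklistEntry("fairness", "Could the training data under-represent any group?"),
    ChecklistEntry("explainability", "Can model outputs be explained to affected users?"),
]
checklist[0].risk_identified = True
print([e.item for e in open_items(checklist)])   # -> ['fairness']
```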

    The AI ethics professional team is also responsible for establishing and continuously improving the governance of AI ethics. As a specific initiative to build governance, the team prepares documents to ensure that the above-mentioned risk assessments are conducted properly, and it is working to make the risk assessment an in-process part of the existing business review process in the LDSL. At the same time, the team conducts awareness and education campaigns within the company through training and other means in order to heighten awareness of these processes and understanding of AI ethics. To objectively evaluate and improve these AI ethics initiatives, the team has established an AI Ethics Advisory Board consisting of external experts and takes advice from the board. Hitachi is expanding its knowledge as a CoE through this advice and is continuously improving its governance by applying the knowledge to internal rules and systems, awareness and education campaigns, and risk assessments.

    Through this initiative, Hitachi aims to contribute to the realization of a comfortable and sustainable society based on human dignity and to the improvement of the quality of life (QoL) of people around the world, while further enhancing social, environmental, and economic value through the appropriate use of AI in its Social Innovation Business.

    Conclusions

    This article presented Hitachi’s initiatives in privacy protection and AI ethics as risk management actions for digital businesses.

    These initiatives are led by specialized organizations that are similar in that they consolidate knowledge as CoEs to address risks flexibly and quickly and are responsible for continuously improving Hitachi’s overall governance. In this era of diversified and rapidly changing business operations and risks, rather than a rule-based approach in which rules and regulations are applied and adapted to the business, it is more effective to take a risk-based approach in which a specialized organization identifies possible risks and considers countermeasures in light of the nature of each business. At the same time, it is essential that these initiatives are not confined to the specialized organizations, but that they enhance the governance of Hitachi as a whole.

    Looking ahead, Hitachi will continue to work to create added value in digital businesses, including Lumada.