
Research & Development

Artificial intelligence (AI) continues to expand its presence in our day-to-day lives, but how has COVID-19 affected AI’s applications? And how can AI technology help address the kinds of challenges presented by the pandemic?

A panel discussion on AI in post–COVID-19 India was held at the Hitachi Social Innovation Forum 2021 - India Online, bringing together experts in the field: Ms. Anna Roy, Ms. Vijaya Deepti, and Professor Arnab Laha. Ms. Roy is a Senior Advisor to the National Institution for Transforming India (NITI) Aayog. Ms. Deepti is the CEO of Tata Insights and Quants (iQ) and a Board Member of Tata AIG. Professor Laha is Professor of Production and Quantitative Methods at the Indian Institute of Management (IIM) Ahmedabad. Kingshuk Banerjee, Senior Vice President of Hitachi India R&D, moderated the discussion.

(Published 26 April 2021)

(From top left clockwise: Kingshuk Banerjee, Ms. Anna Roy, Ms. Vijaya Deepti, and Professor Arnab Laha)

Artificial Intelligence: What does it mean for human society?

Professor Laha, in layman's terms, would you please share how you see AI and what it brings to society?

Over the last decade or so, we have seen a lot of hype around AI. I say “hype” because people generally start thinking about machines with a capability for general intelligence. However, we're still some distance from that. Our current systems are not capable of emotion, love, or any of the other things that we associate with general intelligence. Even so, AI has been one of the greatest advancements for humankind since the development of nuclear energy.

This technology has certain characteristics. For example, it can mimic human perception and can be used in face recognition, which we now use in our phones, and voice recognition, which also has interesting applications. We have seen machines that are able to write paragraphs and compose prose, and sometimes it is very difficult to tell whether it was generated by a machine or a human.

Even back in the 1960s, we had systems capable of automated decision-making. With the growth of computing power, today we see full-fledged systems that can do things like diagnostic imaging. You can give an X-ray to a properly trained machine (AI system), and it can figure out the abnormalities. These techniques have applications in various disciplines such as medicine, education, industrial automation and many other areas.

Another important capability is object identification in images. There are various applications in industry, particularly in determining whether a product is good and will pass standard inspections. That's important because if we can determine that a product is defective early in the process, it will be much easier to remedy. We can save a lot of money by weeding out those defective items from the process.

Also, we have seen advancements in navigation and forecasting. India, for example, has been able to weather large storms much better than it could even a decade earlier, largely because of the new information that we can get from multiple sources and the enhanced capability of processing this information to obtain better forecasts. AI has dramatically revolutionized navigation through GPS. Any of us who have used Google Maps knows how effective it is to have not only the route suggestion, but also the likely time that it will take to reach the destination. All these predictive AI capabilities have been of enormous benefit to us.

A challenge in the social sector is the circulation of news items that are not true — generally referred to as “fake news.” One of the challenges for an AI system would be to figure out which news items are accurate, and which are fake. And it is important that the fake news is weeded out before it causes damage. In a fragile society, fake news can create social unrest of a kind that we may not want.

The benefits of interactive AI systems have led to their wide applicability. One example is the chatbot, which has been widely adopted by various organizations to interface seamlessly with their customers.

There also have been interesting developments in academic research, like testing a molecule for toxicity. Previously, you required extensive laboratory tests to do that. Now, many molecules can be weeded out through automatic reasoning — that they are likely to be poisonous or lethal to humans — and no actual laboratory test is required. That brings down the cost of drug development substantially.

All of these things are features of different AI systems that are currently available. And we may well see the integration of these various kinds of systems to build large overarching systems, possibly within just a few years from now.

Post–COVID-19 AI trends and use cases

Ms. Deepti, what are some of the post-COVID patterns and use cases that you are coming across in the adoption of AI in industry?

One of the things we’ve looked at is our ability to comply with procedures to ensure that workers are safe at the factories. We started out trying to do contact tracing to understand whom our employees had come into contact with and the types of exposure they had.

The next challenge was making sure that workers are protected when they come into the factories — that they are wearing their PPE, for example. For that, we started using computer vision to see whether they were wearing vests, gloves, goggles, helmets and so on. That was an extension of some of the capabilities that we had built up for security at sites. We extended them into the post-COVID area and started using them for health and safety at the factories.
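The PPE check described above can be reduced to a simple rule once a vision model has produced its detections. The sketch below is a hypothetical illustration, assuming the detection step yields a list of label strings per worker; the label names and the `missing_ppe` helper are assumptions for illustration, not the actual Tata iQ system.

```python
# Hypothetical sketch: given the labels a computer-vision model detects on a
# worker (e.g. "vest", "helmet"), flag any required PPE items that are missing.
REQUIRED_PPE = {"vest", "gloves", "goggles", "helmet"}

def missing_ppe(detected_labels):
    """Return the set of required PPE items not found among the detections."""
    return REQUIRED_PPE - set(detected_labels)

# A worker detected wearing only a vest and a helmet:
print(sorted(missing_ppe(["vest", "helmet", "person"])))  # ['gloves', 'goggles']
```

In practice the hard part is the detection model itself; the compliance rule on top of it stays this simple, which is why the same detectors built for site security could be repurposed for safety.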

The other challenge was social distancing. We started using computer vision models for identifying how close people were, being able to raise alerts very quickly, and so on. In our factories, a lot of work was done to make sure that we could use computer vision for managing the safety of the workers on-site.
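The proximity-alert idea can be sketched in a few lines once the vision model has mapped each detected person to a position. This is a minimal illustration, assuming the model outputs one (x, y) ground-plane coordinate per person in metres; the `too_close` helper and the 2-metre threshold are hypothetical choices for the example.

```python
import math
from itertools import combinations

def too_close(positions, min_distance=2.0):
    """Return index pairs of people closer together than min_distance metres."""
    alerts = []
    for (i, p), (j, q) in combinations(enumerate(positions), 2):
        if math.dist(p, q) < min_distance:
            alerts.append((i, j))
    return alerts

people = [(0.0, 0.0), (1.0, 0.0), (5.0, 5.0)]
print(too_close(people))  # [(0, 1)]: persons 0 and 1 are 1 m apart
```

A real deployment would also need camera calibration to convert pixel coordinates into ground-plane distances, which is where most of the engineering effort goes.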

Beyond this, another challenge that organizations have been grappling with is understanding supply chains and what impact the pandemic is likely to have on them. It is something the group has been looking at as a whole. Professor Laha mentioned risk management and the management processes around that. There, too, a lot of work has been happening, such as forecasting where hot spots are likely to be, and if a black swan event occurs, what the likely impact of the event will be.

Forecasting has been the bread and butter for the procurement department. They are now deepening and enriching their models by looking not just at the structured data but also at some of the unstructured data that is coming through. A lot of work has been happening with natural language processing, digital analytics and the types of news feeds that are coming from specific sources.

What’s next in AI?

Professor Laha, what is next in AI? What are some of the research areas in AI that are getting a lot of attention today?

It is often seen that science progresses when we attempt to solve problems encountered by society. One of the current problems AI technology faces is explainability. Even when an AI system is highly useful, it is often still a black box. So, I think a lot of research will be directed toward making some of these systems more explainable. Explainable statistical models have been around for a long time, but the newer, more complicated neural network–based systems do not have a direct way of being explained. We need innovative research in that space.
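One common post-hoc explainability technique is permutation importance: shuffle one input column at a time and measure how much a black-box model's accuracy drops. The sketch below is a toy illustration of that idea only; the "model" is a stand-in function, not a real trained network.

```python
import random

def black_box(row):
    # Pretend this is an opaque model; it actually relies only on feature 0.
    return 1 if row[0] > 0.5 else 0

def accuracy(model, X, y):
    return sum(model(row) == label for row, label in zip(X, y)) / len(y)

def permutation_importance(model, X, y, feature, seed=0):
    """Accuracy drop when one feature column is randomly shuffled."""
    rng = random.Random(seed)
    shuffled_col = [row[feature] for row in X]
    rng.shuffle(shuffled_col)
    X_perm = [row[:feature] + [v] + row[feature + 1:]
              for row, v in zip(X, shuffled_col)]
    return accuracy(model, X, y) - accuracy(model, X_perm, y)

X = [[0.9, 0.1], [0.2, 0.8], [0.7, 0.3], [0.1, 0.9]]
y = [1, 0, 1, 0]
# Shuffling feature 1 never changes this model's output, so its importance is 0;
# feature 0 is the only one that can matter.
print(permutation_importance(black_box, X, y, 1))  # 0.0
```

The appeal of this approach is that it treats the model purely as a black box, which is exactly the setting Professor Laha describes for neural network–based systems.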

Another area where I'm looking for a lot of research to happen is the use of image processing and audio signal processing. Looking at how humans communicate, we see (image processing), we hear (audio signal processing) and we speak (natural language text generation), and I expect a lot of exciting work at the interface of these fields. For example, can we use a camera to do the work that we're doing with our eyes?

If we can use a camera to detect anomalies, then we can detect situations in which we need to act. This can be useful in preventing unwanted events such as violations of physical distancing norms. If an AI system can warn a person that going closer than a certain point poses a health danger, the person may not go closer. Proactively preventing such situations using image processing can have huge benefits. I look forward to seeing a lot of industrial and social applications of the new capabilities for analyzing images and video.

Another area is audio signal processing. For example, I came across a recent document from an organization that has tried to use audio signal processing for predictive maintenance of its machines. Basically, the idea is that they record the audio signals a machine produces when it is running fine. They can then analyze the sound coming from the machine to determine whether it needs maintenance: when the sound emanating from the machine no longer matches what was recorded when it was running fine, an alarm is raised.
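The baseline-comparison idea Professor Laha describes can be sketched very simply. The example below is a hypothetical illustration using RMS loudness as the only feature, chosen for simplicity; a production system would compare richer spectral features, and the `needs_maintenance` helper and its tolerance are assumptions for the sketch.

```python
import math

def rms(samples):
    """Root-mean-square energy of an audio sample sequence."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def needs_maintenance(baseline, current, tolerance=0.25):
    """Alarm if current RMS deviates from the baseline RMS by more than tolerance."""
    base = rms(baseline)
    return abs(rms(current) - base) / base > tolerance

healthy = [0.1 * math.sin(0.3 * t) for t in range(1000)]
noisy = [0.3 * math.sin(0.3 * t) for t in range(1000)]  # much louder vibration
print(needs_maintenance(healthy, healthy))  # False
print(needs_maintenance(healthy, noisy))    # True
```

The design choice mirrors the description in the text: no model of failure modes is needed, only a recording of the machine "running fine" and a measure of deviation from it.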

In a post-COVID world, I look forward to more focus on the use of AI in disease surveillance. It is sad that, despite warnings, we missed the possibility of a large pandemic such as COVID-19 happening. So, I think there'll be a lot of emphasis on using AI in disease surveillance in the coming years, because we cannot afford to have such a major disruption very often. More generally, if we want to provide a billion Indians with good quality healthcare, we need AI-based systems to be deployed widely.

There are two other areas where I think AI will get great traction. One is autonomous cars, particularly in developing countries like India, because many people are killed or severely injured in vehicle accidents. Autonomous cars have great potential to eliminate a large proportion of these accidents.

The second area is education. We know students are not alike. Students learn in different ways, at different times and from different inputs. All this is known, but it is difficult to bring it all together. AI systems give us a good opportunity to make that happen.

Encouraging confidence in AI: Measuring its security and accuracy

Ms. Deepti, how do we measure the efficiency and accuracy of AI systems? How do we convince people to adopt AI more in their daily lives?

This is one of the toughest problems that AI faces. For example, three or four years ago, we started talking with clients about AI and using it in models for improving product performance or predicting the likelihood of failure. The challenge back to us was “How do you prove this? How do you make sure that your model detects all likely failures? We can't afford a failure.” We showed them the results from the different statistical models and what could be achieved. It took us about a year or a year and a half to gain their confidence to start looking at AI as a capability.

The other challenge for us was getting buy-in from operators. They would say, “Look, we have been at this plant for 30 years. From the noise, we know what we should be adding — coal, lime or whatever.” That is the type of feel they have for what's happening in their plants. But our models can't be that sensitive to sound. So, for us it had to be simulate and simulate and simulate and simulate. And the simulations helped us gain their confidence.

We're very careful to tell people that when the models give lower accuracy, they should not be used alone for corrections, but for looking at the range of actions people can take. That enables a more informed decision. Confidence has been built up step by step, and because we come in with an open mindset: we say that AI is not going to solve every problem, but it will help you perform better.

We also can reduce injuries and fatalities at the site if we can predict the likelihood of failures. Identifying problems faster and giving operators time to react is important because a failure can be very costly, not just in terms of the machinery but also in terms of the operators who are there. Health, wellness, and safety are the other drivers encouraging companies to adopt AI.

How is AI ethics being approached in India today?

Ms. Roy, what is your take on AI ethics? What are some of the important movements happening today?

That is one of the most important questions we face today. At NITI Aayog, we have emphasized from the beginning that the national strategy must acknowledge the potential downsides of AI, such as bias and privacy issues, among other things. However, we have also emphasized that we should find the solutions to overcome these issues through use of technology.

At that time, many big tech companies like Google were already undertaking this kind of research. For example, Google was working on addressing bias in models. Our recommendation was to work with the technology because AI was an evolving technology, and too many restrictions at such an early stage would stifle innovation and not allow us to reap the benefits.

In hindsight, I think that was a good approach because today we see more and more technology coming to the fore. NITI Aayog has been piloting many of these technologies, and we have released a consent-based data sharing architecture on our website. We regularly come out with research papers seeking comment. That is a good way of increasing the government’s chances of developing policies that are relevant to the needs of the time.

The second thing I’ll stress is policy around responsibility. Last year we came out with the first two parts of an ongoing series on Responsible AI.* The first part focused on the guidelines that should be followed in any ecosystem development by AI adopters and the research community. The second part was on proposed enforcement mechanisms.

The series is being done by way of an on-going dialogue with the stakeholders, and I'm happy to note that we have completed the entire consultation on part one, including an inter-ministerial consultation. The enforcement part is not yet complete because, while there is little disagreement on the principles, there are certain divergent views. We need to strike a proper balance, so we're still working to complete the stakeholder consultation on that.

So, the policy framework is in the making. People can join in by sharing their comments so the policy can evolve over time. This is not a one-time kind of exercise, because the technology itself is emerging, and the debate around responsible AI will also emerge with time.

*NITI Aayog, Responsible AI: Approach Document for India, February 2021 (PDF).

You make an important point about striking a balance. On one hand, we need data privacy, standards, and frameworks, and on the other, we should not stifle the spirit of innovation.

AI technology has provided the means to address some of the important challenges posed by the pandemic. In India, AI is offering ways to improve workplace safety, the provision of healthcare, future capabilities for disease surveillance, and other capabilities related to COVID-19 and other important social and economic needs. Industry also is working to establish trust in AI, and the government and other stakeholders are developing standards and enforcement mechanisms for proper use of the technology. We shall continue to share developments, inventions, and innovations during these unprecedented times.

*If you would like to find out more about activities at Hitachi India Research & Development, please visit our website.


(As at the time of publication)


Vijaya DEEPTI

CEO of Tata Insights and Quants (iQ), Board Member of Tata AIG

Ms. Deepti is the CEO of Tata Insights and Quants, a division of Tata Industries Ltd. The organization is focused on working with Tata Group companies to leverage advanced analytics to address business challenges. She is a Member of the Board of Directors at Tata AIG General Insurance Co. Ltd. Prior to this, she was with Tata Consultancy Services (TCS) for over three decades, where she held several roles at both the corporate and business-unit levels.

Anna ROY

Senior Adviser, NITI Aayog

Ms. Roy heads the Emerging Technology vertical in NITI Aayog and has led teams to come out with several important policy initiatives in this area.

Arnab LAHA, Ph.D.

Associate Professor at IIM - Ahmedabad | Business Analytics, Quality Management and Risk Management

Prof. Arnab K. Laha is a member of the faculty of the Indian Institute of Management Ahmedabad. He takes a keen interest in understanding how analytics, machine learning, and artificial intelligence can be leveraged to solve complex problems of business and society. His areas of research and teaching interest include Advanced Data Analytics, Quality Management, and Risk Modelling. He has published more than 25 papers in reputed peer-reviewed national and international journals, and his two book volumes have been published by renowned international publishers.

He is currently an Associate Editor of a leading journal of the American Statistical Association. He has been named as one of the "20 Most Prominent Analytics and Data Science Academicians in India" by Analytics India Magazine in 2018. He is a member of the governing council / advisory boards of several well-known organizations. He has conducted a large number of executive education programmes and undertaken consultancy work in the fields of business analytics, quality management, and risk management for organizations.

Kingshuk BANERJEE, Ph.D.

Senior Vice President, Research & Development Centre, Hitachi India Pvt. Ltd.

He was formerly a Partner and Service Line Leader for Cognitive Computing at IBM GBS worldwide. He has advised Rabobank, Resona, Barclays and JPMC on digital transformation, and architected Watson Wealth Management at DBS Singapore. He holds a Ph.D. in Engineering Management from George Washington University and Executive Leadership certifications from Harvard and Cornell.