
A Chief AI Officer on Explainable AI: Addressing Trust, Utility, Liability


Dimitry Fisher is Chief AI Officer at Analytics Ventures, a venture fund, and Dynam.ai, a provider of tailored, end-to-end AI solutions. He has over 20 years of R&D experience in academia, government, and the private sector, spanning multiple areas of physics, neuroscience, ML, and AI. He holds 12 granted US patents and has over 40 publications in scientific journals. He submitted this contribution in the form of a Q&A. — The Editors

What is explainable AI?

AI is intelligence demonstrated by machines. That is, AI-enabled machines are capable of making decisions (taking actions, providing answers) at near-human, human, or superhuman levels. However, that doesn’t mean a priori that machines can explain or elucidate their decisions. Explainable AI is the ability of machines to explain their decisions to humans, or more generally to provide insights into the machine decision-making process: what input data features were most important for reaching the decision, what other options were considered and why they were rejected, and so on. This is in contrast to the standard (so-called “black-box”) approach, where the AI is used as-is, without any insight into why or how it does what it does.
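
As a rough illustration of this kind of insight, the following Python sketch (the synthetic dataset, random-forest model, and generic feature indices are illustrative assumptions, not anything from the interview) trains a black-box model and then asks which input features mattered most, using permutation importance:

```python
# A minimal sketch of the difference between black-box use of a model and an
# explainability step that scores each input feature's importance.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Placeholder data: any trained model and real feature set could be substituted.
X, y = make_classification(n_samples=1000, n_features=8, n_informative=3,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Black-box usage stops here: model.predict(X_test).
# The explainability step goes one level deeper and scores each feature by how
# much shuffling it degrades accuracy on held-out data.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for i in result.importances_mean.argsort()[::-1]:
    print(f"feature_{i}: importance {result.importances_mean[i]:.3f}")
```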

Dimitry Fisher, Chief AI Officer of Analytics Ventures and Dynam.ai

Why is explainable AI important, especially in business contexts?

The three key issues addressed by Explainable AI are trust, utility, and liability.

Let’s start with trust. Any sufficiently complex algorithm or device only works until it doesn’t. When a standard, black-box AI makes a wrong decision, there is no way to find out why, or to mitigate it. This problem is more acute for AI than for humans. Adult humans have decades of learning and aeons of evolution behind them, so they know without thinking how the fundamental mechanics and social norms of this world work. That makes humans less likely to make catastrophically incorrect decisions when they encounter something new. Not so for AI: all the AI knows is what was hard-coded into it and/or what it has learned from its training dataset. In most cases, when an AI encounters something it hasn’t encountered before, there’s no telling what it’s going to decide.

This is known as the “generalization problem”, and graceful generalization is something of a Holy Grail of AI. Explainable AI is a substantial step in that direction, as it elucidates the “reasoning” process behind AI decisions. There may also be situations where we believe the AI made a wrong decision, as with the famous move in the AlphaGo matches that initially looked like an error yet later proved to be an extraordinarily intelligent move beyond human comprehension. In some cases, then, it is impossible for a human to judge an AI-made decision on its face. Explainable AI, where possible, increases trust in such decisions by providing insight into the process that produced them.

The second aspect is utility. When a member of a team proposes a solution, others can discuss with her or him the details, the improvements, the caveats, etc. Ideally, an AI would have the same ability. Obviously, we’re not there yet, but wouldn’t it be nice to ask (or, better yet, to discuss with) a recommender algorithm why it recommends what it recommends? Simply put, an AI has to be a member of the team. An AI that can explain its reasoning can teach its human colleagues new insights. An AI that can provide insights into how its decisions come about may be trusted. More generally, what you want for your business is an AI that can be managed, that can be integrated as a productive member of the team, that you can both teach and learn from. What you don’t want is a black box that cannot be communicated with.

The third aspect is liability. Under the EU GDPR, Recital 71, “The data subject should have the right not to be subject to a decision … and the right to obtain human intervention, to express his or her point of view, to obtain an explanation of the decision reached after such assessment and to challenge the decision.” In other words, when a human is subject to a decision made by an AI, the human has the right to receive an explanation, to challenge the decision, and to be protected from profiling as guaranteed by EU law. Equally important, in the tragic event that the AI of a self-driving car or passenger plane makes a mistake, there ought to be a way to communicate with it before it is too late, or at least to understand explicitly why it made the mistake so that the mistake is not repeated. AIs offer a clear advantage over humans in this respect, as (1) they don’t lie, (2) they don’t drink and drive, (3) they often have far lower response latencies than humans, and (4) they are more likely to survive the impact and provide evidence later on.

What are some potential business use cases for explainable AI applications/systems?

Explainable AI systems shine whenever human and AI experts work in close collaboration: advertising, finance, anomaly detection, fraud detection, and so on. More generally, explainable AI has the potential to speed up AI system development and deployment, and to improve performance, across the board. For example, explainable AI allows developers to detect and rectify idiosyncratic “superstitions,” or false associations, that an AI may acquire from limited or biased training datasets. Consider an explainable AI for anomaly detection. It can tell you what, where, and when it thinks the anomaly is, and which features contributed most to the decision. You, in turn, can tell it whether it was right or wrong, what is important and what is not, what to pay attention to and what to ignore, what it got right and what it missed.
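
For concreteness, here is a minimal, self-contained sketch of such a self-explaining anomaly detector; the sensor features, the z-score heuristic, and the 5-sigma threshold are illustrative assumptions, not the approach used in any particular product. It flags anomalous rows and reports which features drove each flag:

```python
# Toy anomaly detector that explains itself: it reports which rows look
# anomalous and which features pushed each row over the threshold.
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical sensor readings: 500 normal rows, plus one injected anomaly.
feature_names = ["temperature", "pressure", "vibration", "current"]
data = rng.normal(loc=[70.0, 30.0, 0.2, 5.0], scale=[2.0, 1.0, 0.05, 0.3],
                  size=(500, 4))
data[123] = [70.5, 30.2, 0.9, 5.1]   # vibration spike: the anomaly

# Score each row by its per-feature deviation from the overall distribution.
mean, std = data.mean(axis=0), data.std(axis=0)
z = np.abs(data - mean) / std        # per-feature contributions
scores = z.max(axis=1)               # row-level anomaly score

threshold = 5.0                      # "5 sigma" rule, an assumed setting
for row in np.where(scores > threshold)[0]:
    top = np.argsort(z[row])[::-1][:2]
    drivers = ", ".join(f"{feature_names[i]} (z={z[row][i]:.1f})" for i in top)
    print(f"row {row}: anomaly score {scores[row]:.1f}; main drivers: {drivers}")
```

The per-feature z-scores are what make the toy detector “explainable”: the same numbers that produce the alert also name the features behind it, which is the conversation between human and AI described above.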

You can even learn interesting insights from it that hadn’t occurred to you before. As I already said, when AI can learn from humans and humans can learn from AI, both win. A specific example may be helpful. AI is finding wide adoption in the medical imaging space. One of the major problems in mammography is the relatively large number of unnecessary biopsies: only about 1 in 5 biopsies turns out to be cancer.

Algorithms have become more accurate than physicians in this application, and some studies have shown that the number of unnecessary biopsies can be reduced by 60% using AI. In such an application, it would be extremely helpful if the model were explainable, so that radiologists could understand, and in turn learn, the specific features the algorithm is keying on to make its decisions. Such an understanding would open up a new and quantitative channel of learning for radiologists, whereas currently the training of radiologists is based on heuristic methods in handbooks.
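
As a rough sketch of how such feature-level insight might be surfaced from an imaging model, the following computes a simple gradient-based saliency map in PyTorch; the toy untrained network, the random input, and the class labels are purely hypothetical stand-ins for a real mammography model and scan:

```python
# Vanilla gradient saliency: highlight which pixels most influence the
# predicted class of an image classifier.
import torch
import torch.nn as nn

model = nn.Sequential(                 # stand-in for a trained classifier
    nn.Conv2d(1, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(8, 2),                   # hypothetical classes: benign vs. suspicious
)
model.eval()

image = torch.rand(1, 1, 128, 128, requires_grad=True)  # placeholder scan

logits = model(image)
predicted = logits.argmax(dim=1).item()
# Backpropagate the predicted class score to the input pixels.
logits[0, predicted].backward()

# Saliency: magnitude of the gradient at each pixel; large values mark the
# regions the model is "keying on" for this prediction.
saliency = image.grad.abs().squeeze()
print(saliency.shape)                  # torch.Size([128, 128])
```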

Learn more at Analytics Ventures and Dynam.ai.

