Data Science

Black Box Deep Learning Models Need to Explain Themselves for AI to Work for Policymakers


Deep learning models operate inside a black box that hides the inner workings of their algorithms from users and sometimes from their creators. This becomes a problem when policymakers or business executives need to explain how the AI arrived at a recommendation or decision.

These systems can contain hundreds of millions of parameters, which makes them both effective and difficult to understand, said Raj Minhas, vice president and director of the interaction and analytics laboratory at PARC, during a keynote at the AI World Government conference held recently in Washington, DC.


“It’s great when it works. But when it doesn’t work, it’s completely inscrutable,” said Minhas, according to an account in SearchEnterpriseAI.

PARC, a Xerox company, is a research and development firm based in Palo Alto with a rich history in the computer industry. Among its projects is an effort to make AI explainable, which will be necessary for adoption in regulated industries such as healthcare and finance.
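For readers unfamiliar with what "explaining" an opaque model can look like in practice, the sketch below shows one generic, model-agnostic technique: permutation importance, which measures how much a model's accuracy drops when each input feature is shuffled. It is purely illustrative, uses a hypothetical dataset and a stand-in model, and does not represent PARC's methods or any system discussed at the conference.

```python
# Illustrative sketch only: a model-agnostic way to peek inside a "black box"
# model by measuring how much each input feature contributes to its predictions.
# The dataset and model here are hypothetical stand-ins.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Hypothetical tabular data standing in for, say, a loan-approval model's inputs.
X, y = make_classification(n_samples=2000, n_features=8, n_informative=4, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train an opaque ensemble model (a stand-in for a large, hard-to-inspect model).
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Permutation importance: shuffle one feature at a time and record how much the
# model's test accuracy drops -- larger drops mean the model leans on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature_{i}: importance = {score:.3f}")
```

The output is a simple ranking of which inputs drive the model's decisions, which is one way a data scientist might begin to answer "why did the AI recommend this?" for a non-technical audience.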

An AI system that a data scientist can understand might still be too complex for a business user, said Lindsey Sheppard, associate fellow at the International Security Program at the Center for Strategic and International Studies in Washington, DC, in another session at the conference.

“No one size fits all,” she said, adding, “What is the appropriate level of trust that has to be met, or the appropriate level of understanding that has to be met, across your organization?”

Read the source article in SearchEnterpriseAI.



