Ask your Machine Learning/AI system why it made the decisions it did, to satisfy GDPR and other legislation.
XAI (Explainable AI) is needed to establish trust in the system.
NOTE: Content here reflects my personal opinions, and is not intended to represent any employer (past or present). “PROTIP:” highlights information I haven’t seen elsewhere on the internet: hard-won, little-known, but significant facts based on my personal research and experience.
Legal reasons include:
Article 22 of the EU GDPR (General Data Protection Regulation), often called the “right to explanation”, which protects individuals in the EU, states: “The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her.” Where such processing is permitted, “the data controller shall implement suitable measures to safeguard the data subject’s rights and freedoms and legitimate interests, at least the right to obtain human intervention on the part of the controller, to express his or her point of view and to contest the decision.”
U.S. Equal Credit Opportunity Act of 1974, which requires creditors to notify applicants of the specific reasons for adverse credit decisions
France - Digital Republic Act of 2016
Current XAI techniques include:
post-hoc (after the decision is made) analysis, which perturbs inputs to identify their impact on outputs.
LIME (Local Interpretable Model-Agnostic Explanations), which works with any model.
Identify the input data that led to the prediction.
RETAIN (Reverse Time Attention) model was developed at the Georgia Institute of Technology to identify which clinical data led to the prediction of heart failure.
Work backwards through the neural net to find the most relevant input values.
LRP (Layer-wise Relevance Propagation)
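The perturbation idea above can be sketched in a few lines of plain Python. Note that this is only the core sensitivity idea behind model-agnostic explainers like LIME; the real LIME fits a weighted local surrogate model around the instance, while this sketch merely ranks features by how much nudging each one moves the output. The model and feature meanings here are hypothetical illustrations, not a real scoring system.

```python
def model(features):
    """A stand-in 'black box': a weighted sum the explainer cannot see into."""
    weights = [0.7, 0.1, 0.2]  # hidden from the explainer
    return sum(w * x for w, x in zip(weights, features))

def explain(model, instance, delta=1.0):
    """Perturb each input in turn and measure the change in the output."""
    baseline = model(instance)
    impacts = []
    for i in range(len(instance)):
        perturbed = list(instance)
        perturbed[i] += delta
        impacts.append(abs(model(perturbed) - baseline))
    return impacts

instance = [5.0, 3.0, 1.0]  # e.g. [income, age, debt] (hypothetical)
impacts = explain(model, instance)
ranked = sorted(range(len(impacts)), key=lambda i: -impacts[i])
print(ranked)  # feature indices, most influential first
```

Because the explainer only calls the model as a function, the same loop works whether the model is a linear scorer, a random forest, or a deep network, which is exactly what "model-agnostic" means.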
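The backward pass that LRP performs can also be sketched on a tiny hand-built two-layer network, using the basic z-rule. All weights and inputs below are hypothetical and chosen positive so the arithmetic stays simple; real LRP implementations add numerical stabilizers and rule variants (epsilon-rule, alpha-beta rule).

```python
def lrp_zrule(x, W1, w2):
    """Forward pass through a 2-layer ReLU net, then z-rule relevance backward."""
    # Forward pass: hidden pre-activations, ReLU, and output score.
    z = [sum(x[i] * W1[i][j] for i in range(len(x))) for j in range(len(w2))]
    h = [max(0.0, zj) for zj in z]
    y = sum(h[j] * w2[j] for j in range(len(w2)))

    # Backward pass: each hidden unit receives relevance proportional
    # to its contribution h_j * w2_j to the output...
    r_hidden = [h[j] * w2[j] for j in range(len(w2))]
    # ...then redistributes it to the inputs in proportion to each
    # input's share x_i * W1[i][j] of that unit's pre-activation z_j.
    r_input = [0.0] * len(x)
    for j in range(len(w2)):
        if z[j] == 0:
            continue  # dead ReLU unit: nothing to redistribute
        for i in range(len(x)):
            r_input[i] += x[i] * W1[i][j] / z[j] * r_hidden[j]
    return r_input, y

x = [1.0, 2.0, 0.5]
W1 = [[0.5, 0.1], [0.2, 0.4], [0.3, 0.3]]
w2 = [1.0, 0.5]
relevance, y = lrp_zrule(x, W1, w2)
print(relevance, y)
```

The key property to check is conservation: the input relevances sum to the output score, so the explanation fully accounts for the decision rather than pointing at features arbitrarily.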