Ethical Considerations and Model Interpretation in AI: Ensuring Trust and Transparency
Table of Contents
- Considerations
- Tools
Whether humans are directly using machine learning classifiers as tools, or are deploying models within other products, a vital concern remains: if the users do not trust a model or a prediction, they will not use it. It is important to differentiate between two different (but related) definitions of trust: (1) trusting a prediction, i.e. whether a user trusts an individual prediction sufficiently to take some action based on it, and (2) trusting a model, i.e. whether the user trusts a model to behave in reasonable ways if deployed.
- Ribeiro et al., "Why Should I Trust You?": Explaining the Predictions of Any Classifier
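To make "trusting a prediction" concrete, here is a minimal sketch that uses LIME (linked under Tools below) to explain a single prediction of a scikit-learn classifier. The dataset and model are illustrative assumptions, not choices made by the quoted authors.

```python
# Minimal sketch: explaining one prediction with LIME.
# Assumes scikit-learn and the `lime` package are installed; the dataset
# and classifier are illustrative assumptions.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from lime.lime_tabular import LimeTabularExplainer

data = load_iris()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

explainer = LimeTabularExplainer(
    X_train,
    feature_names=data.feature_names,
    class_names=list(data.target_names),
    mode="classification",
)

# Explain one test instance: which features pushed the model toward its prediction?
explanation = explainer.explain_instance(X_test[0], model.predict_proba, num_features=4)
print(explanation.as_list())  # (feature condition, weight) pairs for this one prediction
```

An explanation like this supports the first kind of trust: a user can see which features drove a single decision before acting on it.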
There are many steps involved, from prepping data and choosing algorithms to building, training, and deploying models … and iterating over and over again.
- Enterprise AI Guide, AWS
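That iterative loop is easier to manage when the steps are captured in a single object that can be retrained and redeployed as a unit. Below is a minimal sketch using scikit-learn's Pipeline; the dataset, scaler, and classifier are illustrative assumptions.

```python
# Minimal sketch: data prep + algorithm choice + training in one pipeline,
# so the whole chain can be refit and redeployed as a unit.
# The dataset, scaler, and classifier are illustrative assumptions.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

pipeline = Pipeline([
    ("scale", StandardScaler()),                 # data prep
    ("clf", LogisticRegression(max_iter=1000)),  # chosen algorithm
])

pipeline.fit(X_train, y_train)  # build and train in one call
print(f"held-out accuracy: {pipeline.score(X_test, y_test):.3f}")
```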
However, the harsh reality is that without a reasonable understanding of how machine learning models or the data science pipeline works, real-world projects rarely succeed.
In this paper, we argue that ML systems have a special capacity for incurring technical debt, because they have all of the maintenance problems of traditional code plus an additional set of ML-specific issues. This debt may be difficult to detect because it exists at the system level rather than the code level. Traditional abstractions and boundaries may be subtly corrupted or invalidated by the fact that data influences ML system behavior. Typical methods for paying down code level technical debt are not sufficient to address ML-specific technical debt at the system level.
- Sculley et al., Hidden Technical Debt in Machine Learning Systems
Quality depends not just on code, but also on data, tuning, regular updates, and retraining.
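One way to act on that observation is to monitor incoming data and trigger retraining when its distribution drifts away from the training data. The sketch below runs a two-sample Kolmogorov–Smirnov test per feature; the significance threshold and the retraining hook are illustrative assumptions.

```python
# Minimal sketch: a per-feature drift check that gates retraining.
# The 0.01 threshold and the "schedule retraining" action are
# illustrative assumptions, not a prescription from this document.
import numpy as np
from scipy.stats import ks_2samp

def needs_retraining(reference: np.ndarray, live: np.ndarray, alpha: float = 0.01) -> bool:
    """Return True if any feature's live distribution drifted from the reference."""
    for col in range(reference.shape[1]):
        _, p_value = ks_2samp(reference[:, col], live[:, col])
        if p_value < alpha:  # distributions differ more than chance would explain
            return True
    return False

rng = np.random.default_rng(0)
reference = rng.normal(size=(1000, 3))        # data the model was trained on
live = rng.normal(size=(1000, 3)) + np.array([0.0, 0.0, 0.5])  # one feature shifted

if needs_retraining(reference, live):
    print("drift detected: schedule retraining")
```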
The idea that you can audit and understand decision-making in existing systems or organisations is true in theory but flawed in practice: it is not at all easy to audit how a decision is taken in a large organisation.
Considerations
- Model Analysis
- ML Pipelines
- Interfaces: Preconditions and Postconditions (see the sketch after this list)
- Workflow Management
- Baselines and QA
- Deployments and Versioning
- Causality
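To make the "Interfaces" item concrete: pipeline stages can assert preconditions on their inputs and postconditions on their outputs, so contract violations fail loudly instead of silently corrupting downstream stages. A minimal sketch follows; the specific checks (finite inputs, valid probability rows out) are illustrative assumptions.

```python
# Minimal sketch: precondition/postcondition checks around a prediction step.
# The specific contracts chosen here are illustrative assumptions.
import numpy as np

def predict_proba_checked(model, X: np.ndarray) -> np.ndarray:
    # Preconditions: the input must be a 2-D, finite feature matrix.
    assert X.ndim == 2, f"expected 2-D input, got shape {X.shape}"
    assert np.isfinite(X).all(), "input contains NaN or inf"

    proba = model.predict_proba(X)

    # Postconditions: one row per input, each row a valid probability distribution.
    assert proba.shape[0] == X.shape[0], "output rows must match input rows"
    assert np.allclose(proba.sum(axis=1), 1.0), "each row must sum to 1"
    assert ((proba >= 0) & (proba <= 1)).all(), "probabilities must lie in [0, 1]"
    return proba
```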
Tools
- Seaborn
- LIME https://homes.cs.washington.edu/~marcotcr/aaai18.pdf
- Yellowbrick http://joss.theoj.org/papers/10.21105/joss.01075
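As a quick taste of these tools, here is a minimal sketch that evaluates a classifier with Yellowbrick's ClassificationReport visualizer; the dataset and model are illustrative assumptions.

```python
# Minimal sketch: visual model evaluation with Yellowbrick.
# The dataset and classifier are illustrative assumptions.
from sklearn.datasets import load_wine
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from yellowbrick.classifier import ClassificationReport

X, y = load_wine(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

visualizer = ClassificationReport(RandomForestClassifier(random_state=0), support=True)
visualizer.fit(X_train, y_train)  # fit the wrapped model
visualizer.score(X_test, y_test)  # populate the precision/recall/F1 heatmap
visualizer.show()                 # render the report
```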