Generate Interpretation
Synopsis
This operator allows you to use an interpretation algorithm to explain your model's predictions
Description
There are several algorithms which help you to understand which properties of a given example were most 'impactful' for the model's decision. This operator allows you to choose between Explain Predictions, LIME, (kernel)SHAP and Shapley. For more information about these algorithms we recommend Interpretable Machine Learning by Christoph Molnar (https://christophm.github.io/interpretable-ml-book/).
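As a rough illustration of the Shapley approach mentioned above, the sketch below estimates per-attribute contributions by sampling random feature orderings and accumulating each feature's marginal effect on the prediction. This is a minimal, hypothetical Python sketch of the general technique, not the operator's actual implementation; the function and parameter names are invented for illustration.

```python
import random

def shapley_values(predict, instance, baseline, n_permutations=100):
    """Estimate per-feature Shapley values by permutation sampling.

    predict        -- scoring function (hypothetical stand-in for the model)
    instance       -- feature values of the example to explain
    baseline       -- reference values used for 'absent' features
    n_permutations -- number of sampled feature orderings
    """
    n = len(instance)
    phi = [0.0] * n
    for _ in range(n_permutations):
        order = random.sample(range(n), n)  # a random feature ordering
        current = list(baseline)
        prev = predict(current)
        for j in order:
            current[j] = instance[j]        # add feature j to the coalition
            new = predict(current)
            phi[j] += new - prev            # its marginal contribution
            prev = new
    return [p / n_permutations for p in phi]
```

For a purely linear model the estimate is exact regardless of the sampled orderings, which makes the idea easy to verify by hand: with predict(x) = 2*x[0] + x[1], instance [1, 1] and baseline [0, 0], the contributions come out as 2 and 1.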
Input
mod
The model whose predictions are to be explained.
training
The training data.
testing
The testing data is the data you want to get an interpretation for.
Output
example set
The input testing data set with an additional 'interpretation' attribute containing the explanation for each example.
importance
A data set with the detailed interpretations for each row.
global weights
The global weights of the model. This is the average of the absolute local weights.
mod
The original model passed through.
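The global weights port described above delivers the average of the absolute local weights. That aggregation can be sketched in a few lines; this is a hypothetical illustration of the stated formula, not the operator's code.

```python
def global_weights(local_weights):
    """Average of the absolute local weights, one value per attribute.

    local_weights -- one row of per-attribute local weights per explained
                     example (e.g. the values delivered at the importance port)
    """
    n_examples = len(local_weights)
    n_attributes = len(local_weights[0])
    return [
        sum(abs(row[j]) for row in local_weights) / n_examples
        for j in range(n_attributes)
    ]
```

Taking absolute values first means attributes whose local weights flip sign between examples still show up as globally important, instead of cancelling out.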
Parameters
Algorithm
The algorithm which is used to interpret your predictions.
Sample size
Most algorithms use some form of sampling internally. For LIME and Explain Predictions this is the number of random samples drawn. In the case of Shapley and kernelSHAP this is the number of permutations.
Redraw local samples
Only available for LIME. If checked, the operator draws new random samples in each iteration. If unchecked, a set of examples is generated once and reused for the interpretation of every example.
Explanation algorithm
Only available for LIME. Defines which algorithm is used on the local neighbourhood to generate your interpretation. The original LIME paper uses linear regression models. Explain Predictions uses (besides some other adaptations) a correlation measure to determine the interpretation.
Maximal explaining attributes
Defines how many attributes are shown in the interpretation column of the result. The full information is always shown at the importance port.
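To make the LIME parameters above more concrete, the sketch below fits a distance-weighted linear surrogate on random samples drawn around the example to explain, in the spirit of the original LIME paper. It is a minimal, hypothetical sketch under simplifying assumptions (Gaussian perturbations, an exponential proximity kernel, numeric attributes only), not the operator's implementation.

```python
import numpy as np

def lime_explain(predict, x, n_samples=500, scale=0.5, seed=0):
    """Fit a weighted linear surrogate around x (a minimal LIME-style sketch).

    predict   -- scoring function (hypothetical stand-in for the model)
    x         -- the example to explain
    n_samples -- number of random samples drawn in the local neighbourhood
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    # draw random samples around x (redrawn per call, cf. 'redraw local samples')
    X = x + rng.normal(0.0, scale, size=(n_samples, len(x)))
    y = np.array([predict(row) for row in X])
    # weight samples by proximity to x
    d = np.linalg.norm(X - x, axis=1)
    w = np.exp(-(d ** 2) / (2 * scale ** 2))
    # weighted least squares with an intercept column
    Xb = np.hstack([X, np.ones((n_samples, 1))])
    W = np.diag(w)
    beta = np.linalg.solve(Xb.T @ W @ Xb, Xb.T @ W @ y)
    return beta[:-1]  # local coefficients, one per attribute
```

The returned coefficients play the role of the local weights: sorting them by absolute value and keeping the top entries corresponds to the 'maximal explaining attributes' cutoff for the interpretation column.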