【Datawhale Interpretable Machine Learning Notes】LIME
## Introduction
LIME (Local Interpretable Model-agnostic Explanations) is an algorithm for explaining individual predictions with a local surrogate model. It was introduced by Marco Tulio Ribeiro et al. in the 2016 paper "Why Should I Trust You?": Explaining the Predictions of Any Classifier, and is mainly applied to text and image models.
Paper:
["Why Should I Trust You?": Explaining the Predictions of Any Classifier](https://arxiv.org/abs/1602.04938)
## Key Properties
* Interpretability
* Local fidelity (formalized in the objective below)
* Model-agnostic
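Local fidelity and interpretability are traded off explicitly in the objective LIME optimizes, as given in the paper: the explanation of an instance $x$ is an interpretable model $g$ that approximates the black-box model $f$ well in the neighborhood defined by a proximity kernel $\pi_x$, while a complexity penalty $\Omega(g)$ keeps $g$ simple.

```latex
\xi(x) = \operatorname*{arg\,min}_{g \in G} \; \mathcal{L}(f, g, \pi_x) + \Omega(g),
\qquad
\mathcal{L}(f, g, \pi_x) = \sum_{z, z' \in \mathcal{Z}} \pi_x(z)\,\bigl(f(z) - g(z')\bigr)^2
```

Here $G$ is a class of interpretable models such as sparse linear models, the $z$ are perturbed samples around $x$, and $z'$ are their interpretable (e.g. binary) representations. Only the predictions $f(z)$ are needed, which is what makes the method model-agnostic.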
## Pros and Cons
1. LIME is highly general: it can explain any classifier and in practice produces good explanations.
2. LIME is slow, because every explanation requires sampling many perturbed inputs and querying the model on each of them (see the sketch after this list).
3. LIME still leaves room for extensions, for example around how the neighborhood of the explained instance is sampled.
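To make point 2 concrete, below is a minimal, illustrative sketch of the perturb-and-fit loop behind a single tabular explanation. It is not the library's implementation; the function name, `num_samples`, `kernel_width`, and the Gaussian sampling scheme are simplifications chosen for this sketch. The cost is dominated by step 2, the `num_samples` calls to the black-box model for every explained instance.

```python
import numpy as np
from sklearn.linear_model import Ridge

def lime_tabular_sketch(predict_proba, x, training_std, num_samples=5000,
                        kernel_width=0.75, target_class=1):
    """Illustrative perturb-and-fit loop behind a single LIME explanation."""
    rng = np.random.default_rng(0)
    # 1. Sample perturbations around the instance (Gaussian noise scaled by
    #    the per-feature standard deviation of the training data).
    perturbed = x + rng.normal(size=(num_samples, x.shape[0])) * training_std
    # 2. Query the black-box model on every perturbed sample -- this is the
    #    expensive step that makes LIME slow.
    preds = predict_proba(perturbed)[:, target_class]
    # 3. Weight samples by their proximity to x with an exponential kernel.
    distances = np.linalg.norm((perturbed - x) / training_std, axis=1)
    weights = np.exp(-(distances ** 2) / (kernel_width ** 2))
    # 4. Fit a weighted linear surrogate; its coefficients are the explanation.
    surrogate = Ridge(alpha=1.0)
    surrogate.fit(perturbed - x, preds, sample_weight=weights)
    return surrogate.coef_
```

The coefficients of the weighted linear surrogate play the same role as the per-feature contributions that the library plots.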
## Code Example
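The example below explains a gradient-boosting classifier on a tabular credit-default dataset with `LimeTabularExplainer`. The notes do not include the dataset itself, so the `credit.csv` path and the assumption of 8 feature columns followed by a binary label are placeholders to replace with your own data.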
```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.ensemble import GradientBoostingClassifier
from lime.lime_tabular import LimeTabularExplainer

# Load the tabular data.
# NOTE: 'credit.csv' is a placeholder -- the code assumes a DataFrame with
# 8 feature columns followed by a binary default label in column 8.
data = pd.read_csv('credit.csv')
feature_names = data.columns[:8].tolist()
# Class names must match the label encoding (here 0 = non-default, 1 = default).
class_names = ['non-default', 'default']

# Split into feature matrix and label vector
# (.values replaces the removed DataFrame.as_matrix()).
x = data.iloc[:, :8].values
y = data.iloc[:, 8].values

# Train a GBDT classifier to predict whether a customer defaults.
gbdt = GradientBoostingClassifier()
gbdt = gbdt.fit(x, y)

# Evaluate on the training data (accuracy on the same data used for fitting).
pred = gbdt.score(x, y)

# Configure a Chinese font so feature names render correctly in plots.
plt.rc('font', family='SimHei', size=13)

# Build the explainer on the training data.
explainer = LimeTabularExplainer(x, feature_names=feature_names,
                                 class_names=class_names)

# Explain the prediction for the sample at index 81.
exp = explainer.explain_instance(x[81], gbdt.predict_proba)

# Plot the explanation as a matplotlib figure.
fig = exp.as_pyplot_figure()

# Show the interactive explanation inside a Jupyter notebook.
exp.show_in_notebook(show_table=True, show_all=False)
```
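Since the introduction notes that LIME is mainly used with text and image models, here is an additional sketch of the text workflow, loosely following the LIME project's 20 newsgroups tutorial. The TF-IDF + random forest pipeline and the document index are just one reasonable choice for illustration, not the only way to set this up.

```python
import sklearn.ensemble
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from lime.lime_text import LimeTextExplainer

# Load two newsgroups as a binary text-classification task.
categories = ['alt.atheism', 'soc.religion.christian']
newsgroups_train = fetch_20newsgroups(subset='train', categories=categories)
newsgroups_test = fetch_20newsgroups(subset='test', categories=categories)
class_names = ['atheism', 'christian']

# TF-IDF features + random forest, wrapped in a single pipeline so that
# LIME can call it directly on raw text.
vectorizer = TfidfVectorizer(lowercase=False)
rf = sklearn.ensemble.RandomForestClassifier(n_estimators=500)
pipeline = make_pipeline(vectorizer, rf)
pipeline.fit(newsgroups_train.data, newsgroups_train.target)

# Explain one (arbitrarily chosen) test document: which words push the
# prediction towards 'atheism' or 'christian'?
explainer = LimeTextExplainer(class_names=class_names)
idx = 83
exp = explainer.explain_instance(newsgroups_test.data[idx],
                                 pipeline.predict_proba, num_features=6)
print(exp.as_list())           # (word, weight) pairs for the explained class
exp.show_in_notebook(text=True)
```

`LimeTextExplainer` perturbs the document by removing words, so the model passed to it must accept raw text, which is why the vectorizer and the classifier are wrapped together with `make_pipeline`.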