predict_proba returns an n-row, k-column array, where the value in row i, column j is the model's predicted probability that sample i has label j. Each row should therefore sum to 1.
For example:
>>> from sklearn.linear_model import LogisticRegression
>>> import numpy as np
>>> x_train = np.array([[1, 2, 3],
...                     [1, 3, 4],
...                     [2, 1, 2],
...                     [4, 5, 6],
...                     [3, 5, 3],
...                     [1, 7, 2]])
>>> y_train = np.array([0, 0, 0, 1, 1, 1])
>>> x_test = np.array([[2, 2, 2],
...                    [3, 2, 6],
...                    [1, 7, 4]])
>>> clf = LogisticRegression()
>>> clf.fit(x_train, y_train)
# returns the predicted labels
>>> clf.predict(x_test)
array([1, 0, 1])
# returns the predicted probability of each label
>>> clf.predict_proba(x_test)
array([[ 0.43348191,  0.56651809],
       [ 0.84401838,  0.15598162],
       [ 0.13147498,  0.86852502]])
# In other words, the model predicts:
# for [2,2,2]: label 0 with probability 0.43348191, label 1 with probability 0.56651809
# for [3,2,6]: label 0 with probability 0.84401838, label 1 with probability 0.15598162
# for [1,7,4]: label 0 with probability 0.13147498, label 1 with probability 0.86852502
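As a sanity check (a sketch added here, not part of the original post), the following script re-runs the same data and verifies the two properties described above: every row of predict_proba sums to 1, and predict simply picks the column with the highest probability.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

x_train = np.array([[1, 2, 3], [1, 3, 4], [2, 1, 2],
                    [4, 5, 6], [3, 5, 3], [1, 7, 2]])
y_train = np.array([0, 0, 0, 1, 1, 1])
x_test = np.array([[2, 2, 2], [3, 2, 6], [1, 7, 4]])

clf = LogisticRegression().fit(x_train, y_train)
proba = clf.predict_proba(x_test)

# Each row of the probability matrix sums to 1 (up to floating-point error).
print(np.allclose(proba.sum(axis=1), 1.0))  # True

# predict() returns the argmax of each row of predict_proba().
print((proba.argmax(axis=1) == clf.predict(x_test)).all())  # True
```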
>>> import numpy as np
>>> X = np.array([[-1, -1], [-2, -1], [1, 1], [2, 1]])  # feature matrix
>>> y = np.array([1, 1, 2, 2])  # corresponding labels
>>> from sklearn.svm import SVC  # import the SVC class (support vector classification)
>>> clf = SVC()  # create the classifier object
>>> clf.fit(X, y)  # fit the classifier on the training data
SVC(C=1.0, cache_size=200, class_weight=None, coef0=0.0,
decision_function_shape=None, degree=3, gamma='auto', kernel='rbf',
max_iter=-1, probability=False, random_state=None, shrinking=True,
tol=0.001, verbose=False)
>>> clf.predict([[-0.8, -1]])  # predict the label of [-0.8, -1] with the trained classifier
array([1])
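Note that the SVC above was constructed with the default probability=False (visible in its repr), so calling predict_proba on it raises an AttributeError. A minimal sketch (added here, not from the original post) showing how to enable probability estimates at construction time:

```python
import numpy as np
from sklearn.svm import SVC

X = np.array([[-1, -1], [-2, -1], [1, 1], [2, 1]])
y = np.array([1, 1, 2, 2])

# probability=True enables predict_proba; it fits an extra internal
# calibration step, so training is slower than with the default.
clf = SVC(probability=True, random_state=0)
clf.fit(X, y)

proba = clf.predict_proba([[-0.8, -1]])
print(proba.shape)  # (1, 2): one test sample, two classes
print(np.allclose(proba.sum(axis=1), 1.0))  # True
```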
Source: http://www.taocms.org/1096.html