scikit-learn

时间:2024-10-20 07:56:51编辑:揭秘君

如何用python和scikit learn实现神经网络

1:神经网络算法简介2:Backpropagation算法详细介绍3:非线性转化方程举例4:自己实现神经网络算法NeuralNetwork5:基于NeuralNetwork的XOR实例6:基于NeuralNetwork的手写数字识别实例7:scikit-learn中BernoulliRBM使用实例8:scikit-learn中的手写数字识别实例一:神经网络算法简介1:背景以人脑神经网络为启发,历史上出现过很多版本,但最著名的是backpropagation2:多层向前神经网络(Multilayer Feed-Forward Neural Network)多层向前神经网络组成部分输入层(input layer),隐藏层(hiddenlayer),输出层(output layer)每层由单元(units)组成输入层(input layer)是由训练集的实例特征向量传入经过连接结点的权重(weight)传入下一层,一层的输出是下一层的输入隐藏层的个数是任意的,输出层和输入层只有一个每个单元(unit)也可以被称作神经结点,根据生物学来源定义上图称为2层的神经网络(输入层不算)一层中加权的求和,然后根据非线性的方程转化输出作为多层向前神经网络,理论上,如果有足够多的隐藏层(hidden layers)和足够大的训练集,可以模拟出任何方程3:设计神经网络结构3.1使用神经网络训练数据之前,必须确定神经网络层数,以及每层单元个数3.2特征向量在被传入输入层时通常被先标准化(normalize)和0和1之间(为了加强学习过程)3.3离散型变量可以被编码成每一个输入单元对应一个特征可能赋的值比如:特征值A可能取三个值(a0,a1,a2),可以使用三个输入单元来代表A如果A=a0,那么代表a0的单元值就取1,其他取0如果A=a1,那么代表a1的单元值就取1,其他取0,以此类推3.4神经网络即可以用来做分类(classification)问题,也可以解决回归(regression)问题3.4.1对于分类问题,如果是2类,可以用一个输入单元表示(0和1分别代表2类)如果多于两类,每一个类别用一个输出单元表示所以输入层的单元数量通常等于类别的数量 3.4.2没有明确的规则来设计最好有多少个隐藏层3.4.2.1根据实验测试和误差,以及准确度来实验并改进4:算法验证——交叉验证法(Cross- Validation)解读: 有一组输入集A,B,可以分成三组,第一次以第一组为训练集,求出一个准确度,第二次以第二组作为训练集,求出一个准确度,求出准确度,第三次以第三组作为训练集,求出一个准确度,然后对三个准确度求平均值二:Backpropagation算法详细介绍1:通过迭代性来处理训练集中的实例2:输入层输入数经过权重计算得到第一层的数据,第一层的数据作为第二层的输入,再次经过权重计算得到结果,结果和真实值之间是存在误差的,然后根据误差,反向的更新每两个连接之间的权重3:算法详细介绍输入:D : 数据集,| 学习率(learning rate),一个多层前向神经网络输出:一个训练好的神经网络(a trained neural network)3.1初始化权重(weights)和偏向(bias):随机初始化在-1到1之间,或者-0.5到0.5之间,每个单元有一个偏向3.2对于每一个训练实例X,执行以下步骤:3.2.1:由输入层向前传送,输入->输出对应的计算为:计算得到一个数据,经过f 函数转化作为下一层的输入,f函数为:3.2.2:根据误差(error)反向传送对于输出层(误差计算): Tj:真实值,Qj表示预测值对于隐藏层(误差计算): Errk 表示前一层的误差, Wjk表示前一层与当前点的连接权重权重更新: l:指学习比率(变化率),手工指定,优化方法是,随着数据的迭代逐渐减小偏向更新: l:同上3.3:终止条件3.3.1权重的更新低于某个阀值3.3.2预测的错误率低于某个阀值3.3.3达到预设一定的循环次数4:结合实例讲解算法0.9对用的是L,学习率测试代码如下:1.NeutralNetwork.py文件代码#coding:utf-8import numpy as np#定义双曲函数和他们的导数def tanh(x):return np.tanh(x)def tanh_deriv(x):return 1.0 - np.tanh(x)**2def logistic(x):return 1/(1 + np.exp(-x))def logistic_derivative(x):return logistic(x)*(1-logistic(x))#定义NeuralNetwork 神经网络算法class NeuralNetwork:#初始化,layes表示的是一个list,eg[10,10,3]表示第一层10个神经元,第二层10个神经元,第三层3个神经元def __init__(self, layers, activation='tanh'):""":param layers: A list containing the number of units in each layer.Should be at least two values:param activation: The activation function to be used. Can be"logistic" or "tanh""""if activation == 'logistic':self.activation = logisticself.activation_deriv = logistic_derivativeelif activation == 'tanh':self.activation = tanhself.activation_deriv = tanh_derivself.weights = []#循环从1开始,相当于以第二层为基准,进行权重的初始化for i in range(1, len(layers) - 1):#对当前神经节点的前驱赋值self.weights.append((2*np.random.random((layers[i - 1] + 1, layers[i] + 1))-1)*0.25)#对当前神经节点的后继赋值self.weights.append((2*np.random.random((layers[i] + 1, layers[i + 1]))-1)*0.25)#训练函数 ,X矩阵,每行是一个实例 ,y是每个实例对应的结果,learning_rate 学习率,# epochs,表示抽样的方法对神经网络进行更新的最大次数def fit(self, X, y, learning_rate=0.2, epochs=10000):X = np.atleast_2d(X) #确定X至少是二维的数据temp = np.ones([X.shape[0], X.shape[1]+1]) #初始化矩阵temp[:, 0:-1] = X # adding the bias unit to the input layerX = tempy = np.array(y) #把list转换成array的形式for k in range(epochs):#随机选取一行,对神经网络进行更新i = np.random.randint(X.shape[0])a = [X[i]]#完成所有正向的更新for l in range(len(self.weights)):a.append(self.activation(np.dot(a[l], self.weights[l])))#error = y[i] - a[-1]deltas = [error * self.activation_deriv(a[-1])]#开始反向计算误差,更新权重for l in range(len(a) - 2, 0, -1): # we need to begin at the second to last layerdeltas.append(deltas[-1].dot(self.weights[l].T)*self.activation_deriv(a[l]))deltas.reverse()for i in range(len(self.weights)):layer = np.atleast_2d(a[i])delta = np.atleast_2d(deltas[i])self.weights[i] += learning_rate * layer.T.dot(delta)#预测函数def predict(self, x):x = np.array(x)temp = np.ones(x.shape[0]+1)temp[0:-1] = xa = tempfor l in range(0, len(self.weights)):a = self.activation(np.dot(a, self.weights[l]))return a2、测试代码#coding:utf-8'''#基于NeuralNetwork的XOR(异或)示例import numpy as npfrom NeuralNetwork import NeuralNetworknn = NeuralNetwork([2,2,1], 'tanh')X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])y = np.array([0, 1, 1, 0])nn.fit(X, y)for i in [[0, 0], [0, 1], [1, 0], [1,1]]:print(i,nn.predict(i))''''''#基于NeuralNetwork的手写数字识别示例import numpy as npfrom sklearn.datasets import load_digitsfrom sklearn.metrics import confusion_matrix,classification_reportfrom sklearn.preprocessing import LabelBinarizerfrom sklearn.cross_validation import train_test_splitfrom NeuralNetwork import NeuralNetworkdigits = load_digits()X = digits.datay = digits.targetX -= X.min()X /= X.max()nn =NeuralNetwork([64,100,10],'logistic')X_train, X_test, y_train, y_test = train_test_split(X, y)labels_train = LabelBinarizer().fit_transform(y_train)labels_test = LabelBinarizer().fit_transform(y_test)print "start fitting"nn.fit(X_train,labels_train,epochs=3000)predictions = []for i in range(X_test.shape[0]):o = nn.predict(X_test[i])predictions.append(np.argmax(o))print confusion_matrix(y_test, predictions)print classification_report(y_test, predictions)'''#scikit-learn中的手写数字识别实例import numpy as npimport matplotlib.pyplot as pltfrom scipy.ndimage import convolvefrom sklearn import linear_model, datasets, metricsfrom sklearn.cross_validation import train_test_splitfrom sklearn.neural_network import BernoulliRBMfrom sklearn.pipeline import Pipeline################################################################################ Setting updef nudge_dataset(X, Y):direction_vectors = [[[0, 1, 0],[0, 0, 0],[0, 0, 0]],[[0, 0, 0],[1, 0, 0],[0, 0, 0]],[[0, 0, 0],[0, 0, 1],[0, 0, 0]],[[0, 0, 0],[0, 0, 0],[0, 1, 0]]]shift = lambda x, w: convolve(x.reshape((8, 8)), mode='constant',weights=w).ravel()X = np.concatenate([X] +[np.apply_along_axis(shift, 1, X, vector)for vector in direction_vectors])Y = np.concatenate([Y for _ in range(5)], axis=0)return X, Y# Load Datadigits = datasets.load_digits()X = np.asarray(digits.data, 'float32')X, Y = nudge_dataset(X, digits.target)X = (X - np.min(X, 0)) / (np.max(X, 0) + 0.0001) # 0-1 scalingX_train, X_test, Y_train, Y_test = train_test_split(X, Y,test_size=0.2,random_state=0)# Models we will uselogistic = linear_model.LogisticRegression()rbm = BernoulliRBM(random_state=0, verbose=True)classifier = Pipeline(steps=[('rbm', rbm), ('logistic', logistic)])################################################################################ Training# Hyper-parameters. These were set by cross-validation,# using a GridSearchCV. Here we are not performing cross-validation to# save time.rbm.learning_rate = 0.06rbm.n_iter = 20# More components tend to give better prediction performance, but larger# fitting timerbm.n_components = 100logistic.C = 6000.0# Training RBM-Logistic Pipelineclassifier.fit(X_train, Y_train)# Training Logistic regressionlogistic_classifier = linear_model.LogisticRegression(C=100.0)logistic_classifier.fit(X_train, Y_train)################################################################################ Evaluationprint()print("Logistic regression using RBM features:\n%s\n" % (metrics.classification_report(Y_test,classifier.predict(X_test))))print("Logistic regression using raw pixel features:\n%s\n" % (metrics.classification_report(Y_test,logistic_classifier.predict(X_test))))################################################################################ Plottingplt.figure(figsize=(4.2, 4))for i, comp in enumerate(rbm.components_):plt.subplot(10, 10, i + 1)plt.imshow(comp.reshape((8, 8)), cmap=plt.cm.gray_r,interpolation='nearest')plt.xticks(())plt.yticks(())plt.suptitle('100 components extracted by RBM', fontsize=16)plt.subplots_adjust(0.08, 0.02, 0.92, 0.85, 0.08, 0.23)plt.show()'''from sklearn.neural_network import BernoulliRBMX = [[0,0],[1,1]]y = [0,1]clf = BernoulliRBM().fit(X,y)print测试结果如下:

python scikit-learn 有什么算法

1,前言很久不发文章,主要是Copy别人的总感觉有些不爽,所以整理些干货,希望相互学习吧。不啰嗦,进入主题吧,本文主要时说的为朴素贝叶斯分类算法。与逻辑回归,决策树一样,是较为广泛使用的有监督分类算法,简单且易于理解(号称十大数据挖掘算法中最简单的算法)。但其在处理文本分类,邮件分类,拼写纠错,中文分词,统计机器翻译等自然语言处理范畴较为广泛使用,或许主要得益于基于概率理论,本文主要为小编从理论理解到实践的过程记录。2,公式推断一些贝叶斯定理预习知识:我们知道当事件A和事件B独立时,P(AB)=P(A)(B),但如果事件不独立,则P(AB)=P(A)P(B|A)。为两件事件同时发生时的一般公式,即无论事件A和B是否独立。当然也可以写成P(AB)=P(B)P(A|B),表示若要两件事同事发生,则需要事件B发生后,事件A也要发生。由上可知,P(A)P(B|A)= P(B)P(A|B)推出P(B|A)=其中P(B)为先验概率,P(B|A)为B的后验概率,P(A|B)为A的后验概率(在这里也为似然值),P(A)为A的先验概率(在这也为归一化常量)。由上推导可知,其实朴素贝叶斯法就是在贝叶斯定理基础上,加上特征条件独立假设,对特定输入的X(样本,包含N个特征),求出后验概率最大值时的类标签Y(如是否为垃圾邮件),理解起来比逻辑回归要简单多,有木有,这也是本算法优点之一,当然运行起来由于得益于特征独立假设,运行速度也更快。. 参数估计3,参数估计由上面推断出的公式,我们知道其实朴素贝叶斯方法的学习就是对概率P(Y=ck)和P(X(j)=x(j)|Y=ck)的估计。我们可以用极大似然估计法估计上述先验概率和条件概率。其中I(x)为指示函数,若括号内成立,则计1,否则为0。李航的课本直接给出了用极大似然(MLE)估计求出的结果,并没给推导过程,我们知道,贝叶斯较为常见的问题为0概率问题。为此,需要平滑处理,主要使用拉普拉斯平滑,如下所示:K是类的个数,Sj是第j维特征的最大取值。实际上平滑因子λ=0即为最大似然估计,这时会出现提到的0概率问题;而λ=1则避免了0概率问题,这种方法被称为拉普拉斯平滑。4,算法流程5,朴素贝叶斯算法优缺点优点:朴素贝叶斯模型发源于古典数学理论,有着坚实的数学基础,以及稳定的分类效率需调参较少,简单高效,尤其是在文本分类/垃圾文本过滤/情感判别等自然语言处理有广泛应用。在样本量较少情况下,也能获得较好效果,计算复杂度较小,即使在多分类问题。无论是类别类输入还是数值型输入(默认符合正态分布)都有相应模型可以运用。缺点:0概率问题,需要平滑处理,通常为拉普拉斯平滑,但加一平滑不一定为效果最好,朴素贝叶斯有分布独立的假设前提,生活中较少完全独立,在属性个数比较多或者属性之间相关性较大时,NBC模型的分类效率比不上决策树模型。而在属性相关性较小时,NBC模型的性能最为良好。模型注意点:1, 大家也知道,很多特征是连续数值型的,一般选择使用朴素贝叶斯高斯模型。2, 为避免0概率事件,记得平滑,简单一点可以用『拉普拉斯平滑』。先处理处理特征,把相关特征去掉,3, 朴素贝叶斯分类器一般可调参数比较少,需集中精力进行数据的预处理等特征工程工作。6,Scikit-learn三大朴素贝叶斯模型Scikit-learn里面有3种不同类型的朴素贝叶斯(:1, 高斯分布型模型:用于classification问题,假定属性/特征是服从正态分布的,一般用在数值型特征。,2, 多项式型模型:用于离散值模型里。比如文本分类问题里面我们提到过,我们不光看词语是否在文本中出现,也得看出现的次数。如果总词数为n,出现词数为m的话,说起来有点像掷骰子n次出现m次这个词的场景。3, 伯努利模型:这种情况下,就如提到的bag ofwords处理方式一样,最后得到的特征只有0(没出现)和1(出现过)。7. Scikit-learn算法实践小编通过实现朴素贝叶斯三种模型以及主要分类算法,对比发现跟SVM,随机森林,融合算法相比,贝叶斯差距明显,但其时间消耗要远低于上述算法,以下为主要算法主要评估指标)。8. Python代码# -*-coding: utf-8 -*-importtimefromsklearn import metricsfromsklearn.naive_bayes import GaussianNBfromsklearn.naive_bayes import MultinomialNBfromsklearn.naive_bayes import BernoulliNBfromsklearn.neighbors import KNeighborsClassifierfromsklearn.linear_model import LogisticRegressionfromsklearn.ensemble import RandomForestClassifierfromsklearn import treefromsklearn.ensemble import GradientBoostingClassifierfromsklearn.svm import SVCimportnumpy as npimporturllib# urlwith dataseturl ="-learning-databases/pima-indians-diabetes/pima-indians-diabetes.data"#download the fileraw_data= urllib.request.urlopen(url)#load the CSV file as a numpy matrixdataset= np.loadtxt(raw_data, delimiter=",")#separate the data from the target attributesX =dataset[:,0:7]#X=preprocessing.MinMaxScaler().fit_transform(x)#print(X)y =dataset[:,8]print("\n调用scikit的朴素贝叶斯算法包GaussianNB ")model= GaussianNB()start_time= time.time()model.fit(X,y)print('training took %fs!' % (time.time() - start_time))print(model)expected= ypredicted= model.predict(X)print(metrics.classification_report(expected,predicted))print(metrics.confusion_matrix(expected,predicted))print("\n调用scikit的朴素贝叶斯算法包MultinomialNB ")model= MultinomialNB(alpha=1)start_time= time.time()model.fit(X,y)print('training took %fs!' % (time.time() - start_time))print(model)expected= ypredicted= model.predict(X)print(metrics.classification_report(expected,predicted))print(metrics.confusion_matrix(expected,predicted))print("\n调用scikit的朴素贝叶斯算法包BernoulliNB ")model= BernoulliNB(alpha=1,binarize=0.0)start_time= time.time()model.fit(X,y)print('training took %fs!' % (time.time() - start_time))print(model)expected= ypredicted= model.predict(X)print(metrics.classification_report(expected,predicted))print(metrics.confusion_matrix(expected,predicted))print("\n调用scikit的KNeighborsClassifier ")model= KNeighborsClassifier()start_time= time.time()model.fit(X,y)print('training took %fs!' % (time.time() - start_time))print(model)expected= ypredicted= model.predict(X)print(metrics.classification_report(expected,predicted))print(metrics.confusion_matrix(expected,predicted))print("\n调用scikit的LogisticRegression(penalty='l2') ")model= LogisticRegression(penalty='l2')start_time= time.time()model.fit(X,y)print('training took %fs!' % (time.time() - start_time))print(model)expected= ypredicted= model.predict(X)print(metrics.classification_report(expected,predicted))print(metrics.confusion_matrix(expected,predicted))print("\n调用scikit的RandomForestClassifier(n_estimators=8) ")model= RandomForestClassifier(n_estimators=8)start_time= time.time()model.fit(X,y)print('training took %fs!' % (time.time() - start_time))print(model)expected= ypredicted= model.predict(X)print(metrics.classification_report(expected,predicted))print(metrics.confusion_matrix(expected,predicted))print("\n调用scikit的tree.DecisionTreeClassifier() ")model= tree.DecisionTreeClassifier()start_time= time.time()model.fit(X,y)print('training took %fs!' % (time.time() - start_time))print(model)expected= ypredicted= model.predict(X)print(metrics.classification_report(expected,predicted))print(metrics.confusion_matrix(expected,predicted))print("\n调用scikit的GradientBoostingClassifier(n_estimators=200) ")model= GradientBoostingClassifier(n_estimators=200) start_time= time.time()model.fit(X,y)print('training took %fs!' % (time.time() - start_time))print(model)expected= ypredicted= model.predict(X)print(metrics.classification_report(expected,predicted))print(metrics.confusion_matrix(expected,predicted))print("\n调用scikit的SVC(kernel='rbf', probability=True) ")model= SVC(kernel='rbf', probability=True)start_time= time.time()model.fit(X,y)print('training took %fs!' % (time.time() - start_time))print(model)expected= ypredicted= model.predict(X)print(metrics.classification_report(expected,predicted))print(metrics.confusion_matrix(expected,predicted))"""# 预处理代码集锦importpandas as pddf=pd.DataFrame(dataset)print(df.head(3))print(df.describe())##描述性分析print(df.corr())##各特征相关性分析##计算每行每列数据的缺失值个数defnum_missing(x):return sum(x.isnull())print("Missing values per column:")print(df.apply(num_missing, axis=0)) #axis=0代表函数应用于每一列print("\nMissing values per row:")print(df.apply(num_missing, axis=1).head()) #axis=1代表函数应用于每一行"""

上一篇:betterman

下一篇:没有了