
Linear regression is one of the most basic and simple models in machine learning, yet it is very important and is often used for prediction, classification, and other tasks. First, here is a picture to give an intuitive feel for linear regression. (In fact, the model learned by linear regression is not necessarily a straight line: it is a straight line only when the input is 1-dimensional; in higher dimensions it is a hyperplane.)

[Figure: an intuitive illustration of linear regression]

Given a sample with $d$ attributes $x = (x_1; x_2; \dots; x_d)$, the linear model tries to learn a prediction function of the form
$$h(x) = \theta_1 x_1 + \theta_2 x_2 + \dots + \theta_d x_d + b,$$
i.e. $h(x) = \theta^T x + b$ with $\theta = (\theta_1; \theta_2; \dots; \theta_d)$. Once $\theta$ and $b$ are learned, the model is determined. To simplify further, we can absorb $b$ into $\theta$ by appending a constant feature $x_0 = 1$, so that finally $h(x) = \theta^T x$.
To determine $\theta$, the key is to measure the difference between $h(x)$ and $y$. The mean squared error is the most commonly used performance measure in regression tasks, so we can try to minimize it. The method of solving the model by minimizing the mean squared error is called the least squares method; therefore, the process of minimizing the mean squared error is also known as least squares "parameter estimation".
$$J(\theta) = \frac{1}{2}\sum_{i=1}^{m}\left(h_\theta(x^{(i)}) - y^{(i)}\right)^2$$
In the formula, $h_\theta(x^{(i)})$ is the predicted value for the $i$-th sample and $y^{(i)}$ is its actual value.
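To make the cost concrete, here is a minimal numpy sketch of $J(\theta)$ (not part of the original post; the function name cost and the toy arrays are purely illustrative):

import numpy as np

def cost(theta, X, y):
    # J(theta) = 1/2 * sum((X.theta - y)^2), the least-squares cost above
    residual = X.dot(theta) - y
    return 0.5 * residual.dot(residual)

# toy example: two samples, a constant feature x_0 = 1 plus one attribute
X = np.array([[1.0, 0.5], [1.0, 1.5]])
y = np.array([1.0, 2.0])
print(cost(np.array([0.5, 1.0]), X, y))   # 0.0, since theta fits these points exactly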
Why minimize the squared error? From a probabilistic point of view, assume the error $\epsilon^{(i)} = y^{(i)} - \theta^T x^{(i)}$ between the prediction and the true value is i.i.d. Gaussian with mean 0 and variance $\sigma^2$, so its density is:
$$p(\epsilon^{(i)}) = \frac{1}{\sqrt{2\pi}\,\sigma}\exp\left(-\frac{(\epsilon^{(i)})^2}{2\sigma^2}\right)$$
Substituting $\epsilon^{(i)} = y^{(i)} - \theta^T x^{(i)}$, it can be written as:
$$p(y^{(i)} \mid x^{(i)};\theta) = \frac{1}{\sqrt{2\pi}\,\sigma}\exp\left(-\frac{(y^{(i)} - \theta^T x^{(i)})^2}{2\sigma^2}\right)$$
Finally, the likelihood function is obtained:
$$L(\theta) = \prod_{i=1}^{m} p(y^{(i)} \mid x^{(i)};\theta) = \prod_{i=1}^{m}\frac{1}{\sqrt{2\pi}\,\sigma}\exp\left(-\frac{(y^{(i)} - \theta^T x^{(i)})^2}{2\sigma^2}\right)$$
To simplify the calculation, we take the logarithm of the previous formula; the log-likelihood function is:
$$\ell(\theta) = \ln L(\theta) = \sum_{i=1}^{m}\ln\frac{1}{\sqrt{2\pi}\,\sigma}\exp\left(-\frac{(y^{(i)} - \theta^T x^{(i)})^2}{2\sigma^2}\right) = m\ln\frac{1}{\sqrt{2\pi}\,\sigma} - \frac{1}{2\sigma^2}\sum_{i=1}^{m}\left(y^{(i)} - \theta^T x^{(i)}\right)^2$$
Maximizing $\ell(\theta)$ is therefore equivalent to minimizing $\frac{1}{2}\sum_{i=1}^{m}(y^{(i)} - \theta^T x^{(i)})^2$, which is exactly the least-squares cost $J(\theta)$, so the best $\theta$ is the one that makes the error smallest. Writing the cost in matrix form, we can compute its gradient (derivative):
$$\nabla_\theta J(\theta) = \nabla_\theta\,\frac{1}{2}(X\theta - y)^T(X\theta - y) = \frac{1}{2}\nabla_\theta\left(\theta^T X^T X\theta - \theta^T X^T y - y^T X\theta + y^T y\right) = \frac{1}{2}\left(X^T X\theta + X^T X\theta - 2X^T y\right) = X^T X\theta - X^T y$$
Setting the preceding formula to 0 gives:
$$\theta = (X^T X)^{-1}X^T y$$
If $X^T X$ is not invertible, then:
$$\theta = (X^T X + \lambda I)^{-1}X^T y$$
The result obtained is the global optimum. Why add an identity matrix here when $X^T X$ is not invertible? $X^T X$ is positive semi-definite: for any vector $u$, $u^T X^T X u = (Xu)^T(Xu) = \lVert Xu\rVert^2 \ge 0$, so adding $\lambda I$ with $\lambda > 0$ makes $X^T X + \lambda I$ positive definite and therefore invertible. In addition, if the dimension of $X^T X$ is very high, computing the inverse is expensive, and gradient descent is preferable.
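As a quick check of the two closed-form formulas above, here is a minimal numpy sketch (the synthetic data, the variable names, and the value of lambda are illustrative, not part of the original post):

import numpy as np

# synthetic data: y = 1 + 2*x plus noise; the column of ones absorbs the offset b
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(50), rng.uniform(0, 1, 50)])
y = 1.0 + 2.0 * X[:, 1] + rng.normal(0, 0.1, 50)

theta_ols = np.linalg.inv(X.T @ X) @ X.T @ y                          # theta = (X^T X)^-1 X^T y
lam = 0.1
theta_ridge = np.linalg.inv(X.T @ X + lam * np.eye(2)) @ X.T @ y      # (X^T X + lambda*I)^-1 X^T y
print(theta_ols, theta_ridge)

Both solutions should land close to the true coefficients (1, 2) used to generate the data. Now let us look at the gradient descent algorithm: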

  • Initialize $\theta$ (e.g. random initialization).
  • Iterate: update $\theta$ so that the new value makes $J(\theta)$ smaller.
  • If $J(\theta)$ can continue to decrease, return to step 2 and repeat until $\theta$ converges. The basic update formula is:
$$\theta_j := \theta_j - \alpha\frac{\partial}{\partial\theta_j}J(\theta)$$
    In the formula above, $\alpha$ is the learning rate (step size). Continuing to work out the derivative for a single sample:
$$\frac{\partial}{\partial\theta_j}J(\theta) = \frac{\partial}{\partial\theta_j}\frac{1}{2}\left(h_\theta(x) - y\right)^2 = 2\cdot\frac{1}{2}\left(h_\theta(x) - y\right)\cdot\frac{\partial}{\partial\theta_j}\left(h_\theta(x) - y\right) = \left(h_\theta(x) - y\right)\cdot\frac{\partial}{\partial\theta_j}\left(\sum_{i=0}^{n}\theta_i x_i - y\right) = \left(h_\theta(x) - y\right)x_j$$
    When the update sums over all training samples, this is called batch gradient descent (BGD), which can be seen more intuitively in the figure below.
    [Figure: batch gradient descent]
    BGD does find the optimal solution, but its disadvantage is that the computational cost is large and each iteration is slow, since BGD uses the whole training set in every update. Stochastic gradient descent (SGD) is therefore more commonly used. SGD randomly chooses a single sample from the training set and updates with it:
$$\theta_j := \theta_j + \alpha\left(y^{(i)} - h_\theta(x^{(i)})\right)x_j^{(i)}$$
    BGD uses all training samples every time, and many of these calculations are redundant because the same sample set is used in every iteration. SGD randomly chooses one sample to update the model parameters, so each update is very fast and the model can be updated online. Because SGD updates on single samples, when the training set is very large it may reach a good solution after seeing only a fraction of the samples. The drawback is that the updates are much noisier than BGD, so each iteration does not necessarily move in the optimal direction, but efficiency improves. The result is not necessarily the optimal solution, but it is certain to be near it. The following figure gives an intuitive picture of SGD:
    [Figure: an intuitive illustration of stochastic gradient descent]
    Mini-batch gradient descent is a compromise between BGD and SGD: instead of updating on every single sample or on the whole training set, each update uses a small batch of samples (a minimal sketch of this variant is given after the list below). So what is the concrete process of gradient descent? See below:
  • Start from $x_k = a$ and move along the negative gradient direction to $x_{k+1} = b$, with:
$$b = a - \alpha\nabla F(a),\qquad f(a) \ge f(b)$$
  • Starting from $x_0$, each time move along the gradient direction at the current point by a step $\alpha_k$, obtaining the sequence $x_0, x_1, \dots, x_n$.
  • The corresponding function values satisfy: $f(x_0) \ge f(x_1) \ge f(x_2) \ge \dots \ge f(x_n)$.
  • When $n$ reaches a certain value, the function $f(x)$ converges to a local minimum; because the cost function here is convex, the local minimum is also the global minimum.
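    The mini-batch variant mentioned above is not implemented later in this post, so here is a minimal sketch of it (the function minibatch_gd, its parameters, and the synthetic data are illustrative assumptions, not part of the original code):

import numpy as np

def minibatch_gd(x, y, alpha=0.01, batch_size=20, epochs=100, seed=0):
    # Fit y = theta_0 + theta_1 * x, updating on small random batches of samples
    rng = np.random.default_rng(seed)
    theta_0, theta_1 = 0.0, 0.0
    m = len(x)
    for _ in range(epochs):
        idx = rng.permutation(m)                              # shuffle the samples each epoch
        for start in range(0, m, batch_size):
            batch = idx[start:start + batch_size]
            err = (theta_0 + theta_1 * x[batch]) - y[batch]   # residuals on this batch
            theta_0 -= alpha * err.mean()
            theta_1 -= alpha * (err * x[batch]).mean()
    return theta_0, theta_1

# usage on synthetic data y = 1 + 2x + noise; the result should land near (1, 2)
rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 200)
y = 1 + 2 * x + rng.normal(0, 0.1, 200)
print(minibatch_gd(x, y, alpha=0.5, epochs=500))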

    The update formulas above all contain $\alpha$, which, as explained earlier, is the learning rate (step size). So how should it be determined and optimized? Several methods are described below. One obvious observation is that a smaller learning rate should be used where the directional derivative is large, and a larger one where it is small. How, then, do we solve for a suitable learning rate? First, denote the current point by $x_k$ and the current search direction by $d_k$ (e.g. the negative gradient). Since the learning rate is the object under examination, regard $f(x_k + \alpha d_k)$ as a function of $\alpha$:
$$h(\alpha) = f(x_k + \alpha d_k),\qquad \alpha > 0$$
    Its derivative is:
$$h'(\alpha) = \nabla f(x_k + \alpha d_k)^T d_k$$
    The criterion is: since gradient descent seeks the minimum of $f(x)$, along the current direction we look for the $\alpha$ that minimizes $f(x_k + \alpha d_k)$:
$$\alpha = \arg\min_{\alpha > 0} h(\alpha) = \arg\min_{\alpha > 0} f(x_k + \alpha d_k)$$
    If $h(\alpha)$ is differentiable, the local minimum, and hence the optimal learning rate, must satisfy:
$$h'(\alpha) = \nabla f(x_k + \alpha d_k)^T d_k = 0$$
    Here is an analysis of this derivative:

  • Setting $\alpha = 0$ gives: $h'(0) = \nabla f(x_k + 0\cdot d_k)^T d_k = \nabla f(x_k)^T d_k$;

  • If the descent direction $d_k$ is chosen as the negative gradient, $d_k = -\nabla f(x_k)$;
  • In this case, $h'(0) = -\lVert\nabla f(x_k)\rVert^2 < 0$;
  • If we can find an $\alpha$ large enough that $h'(\alpha) > 0$, then by continuity there must be some $\alpha^*$ with $h'(\alpha^*) = 0$, and this $\alpha^*$ is the learning rate we are looking for.

    The simplest method for solving this is bisection line search, analogous to the bisection method for finding the root of an equation: repeatedly split the interval $[\alpha_1, \alpha_2]$ in half, keeping the half on which $h'(\alpha)$ changes sign, until the interval is small enough or a zero of $h'(\alpha)$ is found.
    Another method is backtracking line search, which is based on the Armijo criterion. The Armijo criterion bounds how large a step (learning rate) may be taken along the search direction. Its basic idea is to start with a fairly large step estimate along the search direction and shrink it until the function value decreases sufficiently; the condition is:
$$f(x_k + \alpha d_k) \le f(x_k) + c_1\alpha\nabla f(x_k)^T d_k,\qquad c_1\in(0,1)$$
    What are the similarities and differences between the two searches? Bisection line search aims at an accurate approximation of a zero of $h'(\alpha)$, while backtracking line search relaxes the constraint and only requires that the function value decreases sufficiently. The former can reduce the number of descent iterations, but each line search is computationally expensive; the latter is much cheaper per iteration.
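    As an illustration of the backtracking idea, here is a minimal sketch (the function backtracking_line_search, its parameters, and the quadratic test function are illustrative assumptions, not from the original post):

import numpy as np

def backtracking_line_search(f, grad_f, x_k, d_k, alpha=1.0, c1=1e-4, rho=0.5):
    # Shrink alpha until the Armijo condition
    # f(x_k + alpha*d_k) <= f(x_k) + c1*alpha*grad_f(x_k)^T d_k holds
    fx = f(x_k)
    slope = np.dot(grad_f(x_k), d_k)
    while f(x_k + alpha * d_k) > fx + c1 * alpha * slope:
        alpha *= rho                      # backtrack: try a smaller step
    return alpha

# usage on the quadratic f(x) = x^T x, moving along the negative gradient
f = lambda x: np.dot(x, x)
grad_f = lambda x: 2 * x
x_k = np.array([3.0, -2.0])
d_k = -grad_f(x_k)
step = backtracking_line_search(f, grad_f, x_k, d_k)
print(step, f(x_k + step * d_k))

    With a small $c_1$ the condition only asks for a sufficient decrease, which is exactly the relaxation described above.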
    Next, how do we implement a linear regression model in code? The data set used here is a simple one, shown below:

    [Figure: the data set]

We use Python to implement the batch gradient descent method first.

import pandas as pd
import numpy as np

# Read the data set
data_s = pd.DataFrame(pd.read_csv('E:ex0.csv'))
x = np.array(data_s['X'])
y = np.array(data_s['Y'])

threshold = 0.1        # stop when the loss changes by less than this between two iterations
alpha = 0.98           # learning rate
loop_max = 10000       # maximum number of iterations, to avoid an infinite loop
theta_0 = 0            # initialize the parameters
theta_1 = 0
time_lteration = 0     # iteration counter
m = len(x)
error1 = 0             # current loss
error0 = 0             # previous loss

while time_lteration < loop_max:
    time_lteration += 1
    k = 0
    l = 0
    for i in range(m):                      # accumulate the gradient over all samples
        h = theta_0 + theta_1 * x[i]
        k += (y[i] - h) * x[i]
        l += (y[i] - h)
    theta_1 += alpha * k / m
    theta_0 += alpha * l / m
    error1 = 0
    for i in range(m):                      # squared-error loss after this update
        error1 += (y[i] - (theta_0 + theta_1 * x[i])) ** 2 / 2
    if abs(error1 - error0) < threshold:
        break
    else:
        error0 = error1
        print("theta_0: %f, theta_1: %f, error1: %f" % (theta_0, theta_1, error1))

print('Done: theta_0: %f, theta_1: %f' % (theta_0, theta_1))
print(time_lteration)

The run results are:
[Figure: run results]

Then plot the fitted line against the scatter plot of the original data set for comparison:
[Figure: fitted line vs. the scatter plot of the original data]

The fit looks reasonably good. That was the batch gradient descent algorithm; since the data set is not very large, it produces a result quickly. Next is stochastic gradient descent; the Python code is as follows:

import pandas as pd
import numpy as np

# Read the data set
data_s = pd.DataFrame(pd.read_csv('E:ex0.csv'))
x = np.array(data_s['X'])
y = np.array(data_s['Y'])

threshold = 0.00001    # stop when the loss changes by less than this between two iterations
alpha = 0.002          # learning rate
loop_max = 10000       # maximum number of iterations, to avoid an infinite loop
theta_0 = 0            # initialize the parameters
theta_1 = 0
time_lteration = 0     # iteration counter
m = len(x)
diff = [0]             # residual of the current sample
error1 = 0             # current loss
error0 = 0             # previous loss

while time_lteration < loop_max:
    time_lteration += 1
    # update the parameters one sample at a time
    for i in range(m):
        # the fitted function is y = theta_0 * x_0 + theta_1 * x, where x_0 = 1
        diff[0] = (theta_0 + theta_1 * x[i]) - y[i]    # residual of sample i
        theta_0 -= alpha * diff[0]
        theta_1 -= alpha * diff[0] * x[i]
    error1 = 0
    for n in range(len(x)):
        error1 += (y[n] - (theta_0 + theta_1 * x[n])) ** 2 / 2    # squared-error loss
    if abs(error1 - error0) < threshold:
        break
    else:
        error0 = error1
        print("theta_0: %f, theta_1: %f, error1: %f" % (theta_0, theta_1, error1))

print('Done: theta_0: %f, theta_1: %f' % (theta_0, theta_1))
print(time_lteration)

The results are as follows:
[Figure: run results]

Then plot the fitted line against the scatter plot of the original data set for comparison:

[Figure: fitted line vs. the scatter plot of the original data]

Comparing the two results, both are quite good; although the batch gradient descent version above has some minor issues, on the whole the fit is fine. Now that the algorithm is implemented, let us use linear regression to predict the age of abalone. A screenshot of the abalone data is shown below:
[Figure: a screenshot of the abalone data set]
Then we apply linear regression to this data. Because linear regression may have an offset term, a constant column of 1s is added to the data set. The source code, using the closed-form solution derived earlier, is as follows:

import pandas as pd
from numpy import *
import numpy as np
'''
The fitted function can be written as:
    y = theta_1 + theta_2 * X_1 + theta_3 * X_2 + theta_4 * X_3 + theta_5 * X_4
        + theta_6 * X_5 + theta_7 * X_6 + theta_8 * X_7 + theta_9 * X_8
The squared error is the sum of squared differences between the actual and predicted values.
Setting the derivative of the error formula to zero and simplifying gives the optimal theta:
    theta = (X^T X)^(-1) X^T y
so the algorithm is implemented as follows:
'''
# Read the data set
data_s = pd.DataFrame(pd.read_csv('E:abalone.csv'))

# Feature values and target values
X = np.array(data_s[['X_1', 'X_2', 'X_3', 'X_4', 'X_5', 'X_6', 'X_7', 'X_8', 'X_9']])  # X_1 is the added constant 1, used for the offset
Y = np.array(data_s['Y'])

def SoluteCoefficient(X, Y):
    # Compute theta with the closed-form solution
    X_Mat = np.mat(X)        # convert to a matrix
    Y_Mat = np.mat(Y).T      # convert to a matrix and transpose
    xTx = X_Mat.T * X_Mat
    if linalg.det(xTx) == 0.0:   # check whether the determinant is zero
        print("Loooooooose!!!!!!")
        return
    else:
        thetas = xTx.I * (X_Mat.T * Y_Mat)   # compute theta
        return thetas

thetas = SoluteCoefficient(X, Y).T

After running, we use the correlation function provided by the numpy library: np.corrcoef(yEstimate, yActual) computes the correlation between the predicted values and the real values. The results are as follows:
[Figure: correlation matrix output]
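For reference, continuing from the script above (so np, X, Y, and thetas are already defined), such a check could look like the following minimal sketch; the name Y_estimate is illustrative and this snippet is not part of the original post:

# Predicted values from the fitted coefficients; thetas was transposed above, so transpose it back
Y_estimate = np.mat(X) * thetas.T
# 2x2 matrix of correlation coefficients between predictions and actual values
print(np.corrcoef(Y_estimate.T, np.mat(Y)))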
Every entry of the matrix returned by corrcoef is the correlation coefficient of one pair of variables; the diagonal entries are 1 because they are the correlation of y with itself. The correlation between the predictions obtained from the computed thetas and the actual values turns out to be quite low, i.e. the model under-fits, so we turn to locally weighted linear regression (LWLR). LWLR gives every training sample a weight that depends on the point to be predicted, and then performs ordinary least-squares regression on the weighted samples; this has to be done once for every query point. The main steps of the algorithm are as follows:

  • For each point $x$ to be predicted, assign a weight $w_i$ to every training sample $x^{(i)}$ according to its distance from $x$;
  • Solve the weighted least-squares problem on these samples, giving $\theta = (X^T W X)^{-1}X^T W y$;
  • Output $\theta^T x$ (this is the locally weighted prediction for $x$).

For the weights $w_i$, a Gaussian kernel function is typically used:
$$w_i = \exp\left(-\frac{(x^{(i)} - x)^2}{2\tau^2}\right)$$
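To see how fast this weight falls off with distance, here is a tiny numeric sketch (the distances and bandwidth values are arbitrary illustrations, not from the original post):

import numpy as np

# Weight of a training sample at several distances from the query point, for a few bandwidths tau
dist = np.array([0.0, 0.1, 0.5, 1.0])
for tau in (0.1, 1.0, 10.0):
    w = np.exp(-dist ** 2 / (2 * tau ** 2))
    print(tau, np.round(w, 4))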
Here $\tau$ is the bandwidth, which controls how fast the weight decays with the distance between the training sample $x^{(i)}$ and the query point $x$. The choice of $\tau$ is very important: if it is too small, the model may over-fit; if it is too large, it may under-fit, so it needs to be chosen according to the actual situation. The following figure shows the fitting effect of locally weighted linear regression:
[Figure: fitting effect of locally weighted linear regression]
Here is the code implementing the locally weighted linear regression algorithm:

import pandas as pd
from numpy import *
import numpy as np

def LWLR(points, x_array, y_array, k=1.0):
    x_mat = mat(x_array)
    y_mat = mat(y_array).T
    h = shape(x_mat)[0]               # number of rows of the matrix
    weights = mat(eye(h))             # identity matrix of size h, to hold the weights
    for i in range(h):
        qu_mat = points - x_mat[i, :]
        weights[i, i] = exp(qu_mat * qu_mat.T / (-2.0 * k ** 2))   # Gaussian kernel
    xTx = x_mat.T * (weights * x_mat)
    if linalg.det(xTx) == 0.0:        # check whether the determinant is zero
        print('looooose!!!')
        return
    else:
        thetas = xTx.I * (x_mat.T * (weights * y_mat))
        return points * thetas

Now plug the data set in:

import pandas as pd
from numpy import *
import numpy as np

def LWLR(points, x_array, y_array, k=1.0):
    x_mat = mat(x_array)
    y_mat = mat(y_array).T
    h = shape(x_mat)[0]               # number of rows of the matrix
    weights = mat(eye(h))             # identity matrix of size h, to hold the weights
    for i in range(h):
        qu_mat = points - x_mat[i, :]
        weights[i, i] = exp(qu_mat * qu_mat.T / (-2.0 * k ** 2))   # Gaussian kernel
    xTx = x_mat.T * (weights * x_mat)
    if linalg.det(xTx) == 0.0:        # check whether the determinant is zero
        print('looooose!!!')
        return
    else:
        thetas = xTx.I * (x_mat.T * (weights * y_mat))
        return points * thetas

def LWLR_test(test_array, x_array, y_array, k=10.000):
    # Call LWLR for every element of the test set; each call computes its own weights
    n = shape(test_array)[0]
    y_Cal = zeros(n)                  # array of zeros to hold the predictions
    for j in range(n):
        y_Cal[j] = LWLR(test_array[j], x_array, y_array, k)
    return y_Cal

def wucha(Y, Y_jia):
    # Compute the squared error between actual and predicted values
    return sum((Y - Y_jia) ** 2)

# Read the data set
data_s = pd.DataFrame(pd.read_csv('E:abalone.csv'))
# Feature values and target values
X = np.array(data_s[['X_1', 'X_2', 'X_3', 'X_4', 'X_5', 'X_6', 'X_7', 'X_8']])
Y = np.array(data_s['Y'])

Y_jia = LWLR_test(X[100:199], X[0:99], Y[0:99], 10)
a = wucha(Y[100:199], Y_jia.T)
print(a)

Finally, the results obtained are as follows:
[Figure: run results]
As can be seen, the result of the run is still quite good.
Linear regression is a basic but important embodiment of the regression idea, yet it is only one part of it; there are also logistic regression, log-linear models, and so on.



