多元线性回归 梯度下降法

兰色紫精灵 2016-01-08 04:59:23
在用梯度下降法解多元线性回归问题时得到代价函数值关于迭代次数的结果图为:

此时:alpha = 0.001;

无论我怎么减小学习速率,上图走势都不会变
比如改变alpha=0.01,图形变为:

再比如alpha=0.0001,图形变为:


请各位大神指点啊!!!!!!!!!我这是代码错了还是alpha选择有问题呢??????????、

这是我的代码:
第一部分:特征缩放函数(脚本命名为featureNormalize.m)
function [X_norm, mu, sigma] = featureNormalize(X)
format short
X_norm = X;
mu = zeros(1, size(X, 2));
sigma = zeros(1, size(X, 2));
m = size(X_norm,1);
%求特征的平均值
%for i = 1:m
% mu(1) = mu(1) + X_norm(m,1);
% mu(2) = mu(2) + X_norm(m,2);
%end
%mu(1) = mu(1)/m;
%mu(2) = mu(2)/m;

mu = sum(X_norm)/m;&求特征平均值

%求特征的标准差
% for i = 1:m
% sigma(1) = sigma(1) + (X_norm(m,1) - mu(1))^2;
% sigma(2) = sigma(2) + (X_norm(m,2) - mu(2))*^2;
% end

%求特征标准差
X_norm(:,1) = X_norm(:,1) - mu(1);
X_norm(:,2) = X_norm(:,2) - mu(2);
sigma = sum(X_norm.^2);
sigma(1) = sqrt(sigma(1)/m);
sigma(2) = sqrt(sigma(2)/m);

%特征归一化
% for i = 1:m
% X_norm(m,1) = (X_norm(m,1)-mu(1))/sigma(1);
% X_norm(m,2) = (X_norm(m,2)-mu(2))/sigma(2);
% end

%特征归一化
X_norm(:,1) = X_norm(:,1)/sigma(1);
X_norm(:,2) = X_norm(:,2)/sigma(2);
X_norm

%


第二部分:求代价函数J(脚本命名为computeCostMulti.m)

function J = computeCostMulti(X, y, theta)
format short
m = length(y);

% J = ((X*theta-y)')*(X*theta-y)/(2*m);

% sum = 0;
% for i = 1:m
% sum = sum + (X(i,:)*theta - y(i))^2;
% end
% J = sum/(2*m);

X = X*theta - y;
J = sum(X.^2)/(2*m);

end


第三部分:梯度下降发实现(脚本命名为:gradientDescentMulti.m)

function [theta, J_history] = gradientDescentMulti(X, y, theta, alpha, num_iters)

m = length(y); % number of training examples
J_history = zeros(num_iters, 1);
n = size(X,2);
temp = zeros(n,1);
for iter = 1:num_iters
for j = 1:n
for i = 1:m
x = X(i,:);
temp(j) = temp(j) + (x*theta-y(i))*x(j);
end
end
theta = theta - temp* alpha/m;

J_history(iter) = computeCostMulti(X, y, theta);

end

end


第四部分:导入数据以及运行步骤(ex1.m)

clear ; close all; clc

fprintf('Loading data ...\n');

data = load('data.txt');
X = data(:, 1:2);
y = data(:, 3);
m = length(y);

fprintf('前10个训练数据:\n');
fprintf(' x = [%.0f %.0f], y = %.0f \n', [X(1:10,:) y(1:10,:)]');

fprintf('Program paused. Press enter to continue.\n');
pause;
fprintf('Normalizing Features ...\n');

[X_norm mu sigma] = featureNormalize(X);
X_norm = [ones(m, 1) X_norm];

fprintf('Running gradient descent ...\n');

alpha = 0.001;
num_iters = 400;

theta = zeros(3, 1);
J0 = computeCostMulti(X_norm,y,theta)
[theta, J_history] = gradientDescentMulti(X_norm, y, theta, alpha, num_iters);

figure;
plot(1:numel(J_history), J_history, '-', 'LineWidth', 2);
xlabel('Number of iterations');
ylabel('Cost J');

fprintf(' %f \n', theta);
fprintf('\n');




这是数据:data.txt
2104,3,399900
1600,3,329900
2400,3,369000
1416,2,232000
3000,4,539900
1985,4,299900
1534,3,314900
1427,3,198999
1380,3,212000
1494,3,242500
1940,4,239999
2000,3,347000
1890,3,329999
4478,5,699900
1268,3,259900
2300,4,449900
1320,2,299900
1236,3,199900
2609,4,499998
3031,4,599000
1767,3,252900
1888,2,255000
1604,3,242900
1962,4,259900
3890,3,573900
1100,3,249900
1458,3,464500
2526,3,469000
2200,3,475000
2637,3,299900
1839,2,349900
1000,1,169900
2040,4,314900
3137,3,579900
1811,4,285900
1437,3,249900
1239,3,229900
2132,4,345000
4215,4,549000
2162,4,287000
1664,2,368500
2238,3,329900
2567,4,314000
1200,3,299000
852,2,179900
1852,4,299900
1203,3,239500
...全文
416 1 打赏 收藏 转发到动态 举报
写回复
用AI写文章
1 条回复
切换为时间正序
请发表友善的回复…
发表回复

679

社区成员

发帖
与我相关
我的任务
社区描述
智能路由器通常具有独立的操作系统,包括OpenWRT、eCos、VxWorks等,可以由用户自行安装各种应用,实现网络和设备的智能化管理。
linuxpython 技术论坛(原bbs)
社区管理员
  • 智能路由器社区
加入社区
  • 近7日
  • 近30日
  • 至今
社区公告
暂无公告

试试用AI创作助手写篇文章吧