Python RuntimeWarning: overflow encountered in square
I am new to machine learning and numpy, and I have been trying to run gradient descent on the Boston housing dataset from sklearn. My implementation works on small random datasets, but on the Boston dataset it produces these warnings:
<string>:12: RuntimeWarning: overflow encountered in square
<string>:15: RuntimeWarning: invalid value encountered in subtract
Here is my gradient descent code:
import numpy as np
from sklearn.datasets import load_boston
from matplotlib import pyplot as plt

def gradient_descent(x, y, alpha, theta):
    m = y.shape[0]
    xtranspose = x.transpose()
    i = 0
    cost = 488
    while cost > 0.5:
        hyp = np.dot(x, theta)
        loss = hyp - y
        cost = np.sum(loss ** 2) / (2 * m)
        plt.scatter(i, cost)
        gradient = np.dot(xtranspose, loss) / m
        theta = theta - alpha * gradient
        i = i + 1
    plt.show()
    return theta

dataset = load_boston()
m, n = dataset['data'].shape
x = np.ones((m, n + 1))
x[:, :-1] = dataset['data']
y = dataset['target']
alpha = 0.005
theta = np.ones(x.shape[1])
theta = gradient_descent(x, y, alpha, theta)
It is not unusual to see overflow warnings like this when the input has an integer type. The first thing to try is casting it to float. If loss is an array, you can use loss = np.array(loss, dtype=float). If loss is a scalar integer, you can use loss = float(loss).

Comment: I think a better initial guess is needed. loss ** 2 overflows and causes the subsequent problems.

Comment: I have the exact same error, any idea how to fix it?
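To illustrate the answer's advice: with a fixed-width integer dtype, squaring large values wraps around silently instead of raising a warning, while casting to float gives the correct result. A minimal, self-contained sketch (the sample values are made up for illustration):

```python
import numpy as np

# 50000**2 = 2.5e9 exceeds the int32 maximum (~2.147e9), so an int32
# square would silently wrap around. Casting to float, as the answer
# suggests, keeps the arithmetic correct.
loss = np.array([50000, -30000], dtype=np.int32)
loss = np.array(loss, dtype=float)  # the cast suggested in the answer
cost = np.sum(loss ** 2) / (2 * loss.shape[0])
print(cost)  # 850000000.0
```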
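On the comment asking how to actually fix the divergence: the Boston features span very different scales, so with alpha = 0.005 the iterates can blow up until loss ** 2 overflows. A common remedy, sketched below on synthetic data (not part of the original answer, and using random data since load_boston was removed from recent sklearn versions), is to standardize the features before running the same update rule:

```python
import numpy as np

rng = np.random.default_rng(0)
X_raw = rng.uniform(0, 100, size=(100, 3))  # wildly scaled features
true_theta = np.array([2.0, -1.0, 0.5, 10.0])

# z-score each feature, then append the intercept column of ones,
# mirroring the question's x layout (data columns first, ones last).
X = np.ones((100, 4))
X[:, :-1] = (X_raw - X_raw.mean(axis=0)) / X_raw.std(axis=0)
y = X @ true_theta

# Same gradient update as the question; with standardized features
# even a fairly large step size converges instead of overflowing.
theta = np.ones(4)
alpha = 0.1
for _ in range(5000):
    loss = X @ theta - y
    theta -= alpha * (X.T @ loss) / y.shape[0]

cost = np.sum((X @ theta - y) ** 2) / (2 * y.shape[0])
print(cost)  # effectively zero
```

Replacing the `while cost > 0.5` loop with a fixed iteration count also avoids an infinite loop once cost becomes NaN, since NaN comparisons are always False only for `<`-style checks and the original condition never terminates cleanly after an overflow.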