Python: learning rate has no effect on convergence speed?
While implementing the behaviour of a NAND gate with Rosenblatt's perceptron, I was testing the effect of changing the learning rate on how many epochs it actually takes to converge. But changing the value has no effect: the algorithm always converges after 5 epochs. Here is my implementation:
import numpy as np
import matplotlib.pyplot as plt

n_inputs = 2
epochs = 10  # the loop breaks once the algorithm converges, so this is the max number of epochs

# binary inputs (first column is the bias input)
training_inputs = np.array([[1,0,0],[1,0,1],[1,1,0],[1,1,1]])
# bipolar inputs
#training_inputs = np.array([[1,-1,-1],[1,-1,1],[1,1,-1],[1,1,1]])
d = np.array([1,1,1,-1])

l_rate = [0.1,0.2,0.3,0.4,0.5,0.6,0.7,0.8,0.9]

for learning_rate in l_rate:
    print('\n====================================\n')
    print('Learning Rate: ', learning_rate)
    w = np.zeros(n_inputs + 1)
    previous_w = np.zeros(n_inputs + 1)
    conv_epoch = 0
    for i in range(epochs):
        for inputs, targets in zip(training_inputs, d):
            v_i = np.dot(inputs, w)
            # signum activation
            if v_i > 0:
                y_i = 1
            elif v_i == 0:
                y_i = 0
            else:
                y_i = -1
            w = w + learning_rate * (targets - y_i) * inputs
        # log the weights after each epoch and compare them to the previous epoch's weights
        print('------------------------')
        print('end of epoch: ', i + 1)
        print('weights: ', w)
        print('previous weights: ', previous_w)
        conv = np.array_equal(previous_w, w)  # unchanged weights mean convergence
        print('Converged: ', conv)
        if conv:
            if conv_epoch == 0:
                conv_epoch = i  # it converged at the previous epoch
            print('weights: ', w)
            print('Converged at epoch: ', conv_epoch)
            print('------------------------')
            break  # after convergence there is no need to run the remaining epochs
        print('------------------------')
        print()
        previous_w = w
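As an additional check, I also compared the final weights for two different learning rates using a minimal sketch of the same training loop (logging removed). Starting from zero weights, the learned weights differ only by a constant scale factor, which does not change the sign of the dot product:

```python
import numpy as np

# Minimal sketch of the training loop above, with logging stripped out,
# used to compare final weights across learning rates.
training_inputs = np.array([[1, 0, 0], [1, 0, 1], [1, 1, 0], [1, 1, 1]])
d = np.array([1, 1, 1, -1])

def train(learning_rate, epochs=10):
    w = np.zeros(3)
    for _ in range(epochs):
        for x, t in zip(training_inputs, d):
            v = np.dot(x, w)
            y = 1 if v > 0 else (-1 if v < 0 else 0)  # signum activation
            w = w + learning_rate * (t - y) * x
    return w

w1 = train(0.1)
w2 = train(0.5)
print(w2 / w1)  # every component is 5.0: the weights differ only in scale
```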
Is this caused by the nature of the inputs being -1 and 1, or is there a bug in my code? I am testing it with both binary and bipolar inputs.