Python: another sigmoidal regression equation question

Tags: python, statistics, numpy, scipy, scientific-computing

I posted this earlier, but I could not seem to add this version to that post, because it looks like someone closed the post for editing, so here is the new version in a new post.

I have the script below, which does the following:
1.) Plot a best-fit curve through sigmoidal data.
2.) Resize the data according to new maximum and minimum x and y coordinates.
3.) Compute and plot a new best-fit curve for the resized data.

Steps 1 and 2 seem fine, but step 3 does not. If you run the script, you will see that it plots a completely invalid curve for the resized data.

Can anyone show me how to revise the code below so that it creates and plots a true best-fit sigmoidal curve for the resized data? This needs to be reproducible across the full range of possible maximum and minimum values.

I seem to have traced the problem to New_p, which is defined in the following lines of code:

New_p, New_cov, New_infodict, New_mesg, New_ier = scipy.optimize.leastsq( 
    residuals,New_p_guess,args=(NewX,NewY),full_output=1,warning=True)   
But I don't know how to dig any deeper into the problem than that. I think the issue may have something to do with the difference between global and local variables, but it could be something else.
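
One way to dig deeper (a minimal diagnostic sketch built on the same leastsq call as above) is to inspect the extra outputs that full_output=1 already provides: ier is an integer flag (values 1 through 4 mean a solution was found), mesg is a human-readable status message, and infodict['fvec'] holds the residuals at the returned parameters:

New_p, New_cov, New_infodict, New_mesg, New_ier = scipy.optimize.leastsq( 
    residuals,New_p_guess,args=(NewX,NewY),full_output=1,warning=True)
# ier values of 1, 2, 3 or 4 indicate that leastsq found a solution;
# anything else means the fit failed, and mesg explains why.
if New_ier not in (1, 2, 3, 4):
    print 'leastsq did not converge: ', New_mesg
# Sum of squared residuals at the returned parameters; a large value here
# means even the "best" fit is a poor fit.
print 'residual sum of squares: ', (New_infodict['fvec']**2).sum()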

Here is the current draft of my full code:

import numpy as np 
import matplotlib.pyplot as plt 
import scipy.optimize 

def GetMinRR(age):
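    # 208 - 0.7*age is an age-predicted maximum heart rate (the Tanaka formula);
    # MinRR converts that rate (beats/min) into an R-R interval in milliseconds.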
    MaxHR = 208-(0.7*age)
    MinRR = (60/MaxHR)*1000
    return MinRR

def sigmoid(p,x):
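    # p = (x0, y0, c, k): x0 is the midpoint of the transition, y0 a vertical
    # offset, c the amplitude between the two plateaus, and k the steepness.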
    x0,y0,c,k=p 
    y = c / (1 + np.exp(-k*(x-x0))) + y0 
    return y 

def residuals(p,x,y): 
    return y - sigmoid(p,x) 

def resize(x,y,xmin=0.0,xmax=1.0,ymin=0.0,ymax=1.0):
    # Create local copies (as numpy arrays, so the in-place ops below work)
    NewX = np.array(x, dtype='float')
    NewY = np.array(y, dtype='float')
    # If the mins are greater than the maxs, then flip them.
    if xmin>xmax: xmin,xmax=xmax,xmin 
    if ymin>ymax: ymin,ymax=ymax,ymin
    #----------------------------------------------------------------------------------------------    
    # The rest of the code below re-calculates all the values in x and then in y with these steps:
    #       1.) Subtract the actual minimum of the input x-vector from each value of x
    #       2.) Multiply each resulting value of x by (xmax-xmin) divided by the maximum
    #           of the shifted x-vector (which equals x.max()-x.min())
    #       3.) Add the new xmin to each value of x
    # Note: I wrote this in x-notation, but the identical process is repeated for y
    #----------------------------------------------------------------------------------------------    
    # Subtracts right operand from the left operand and assigns the result to the left operand.
    # Note: c -= a is equivalent to c = c - a
    NewX -= x.min()

    # Multiplies right operand with the left operand and assigns the result to the left operand.
    # Note: c *= a is equivalent to c = c * a
    NewX *= (xmax-xmin)/NewX.max()

    # Adds right operand to the left operand and assigns the result to the left operand.
    # Note: c += a is equivalent to c = c + a
    NewX += xmin

    # Subtracts right operand from the left operand and assigns the result to the left operand.
    # Note: c -= a is equivalent to c = c - a
    NewY -= y.min()

    # Multiplies right operand with the left operand and assigns the result to the left operand.
    # Note: c *= a is equivalent to c = c * a
    NewY *= (ymax-ymin)/NewY.max()

    # Adds right operand to the left operand and assigns the result to the left operand.
    # Note: c += a is equivalent to c = c + a
    NewY += ymin

    return (NewX,NewY)

# Declare raw data for use in creating logistic regression equation
x = np.array([821,576,473,377,326],dtype='float') 
y = np.array([255,235,208,166,157],dtype='float') 

# Call resize() function to re-calculate coordinates that will be used for equation
MinRR=GetMinRR(50)
MaxRR=1200
minLVET=(y[4]/x[4])*MinRR
maxLVET=(y[0]/x[0])*MaxRR

#x,y=resize(x,y,xmin=0.3, ymin=0.3) 
NewX,NewY=resize(x,y,xmin=MinRR,xmax=MaxRR,ymin=minLVET,ymax=maxLVET) 
print 'x is:  ',x 
print 'y is:  ',y
print 'NewX is:  ',NewX
print 'NewY is:  ',NewY

# p_guess is the starting estimate for the minimization
p_guess=(np.median(x),np.median(y),1.0,1.0) 
New_p_guess=(np.median(NewX),np.median(NewY),1.0,1.0) 

# Calls the leastsq() function, which calls the residuals function with an initial
# guess for the parameters and with the x and y vectors.  The full_output means that
# the function returns all optional outputs.  Note that the residuals function also
# calls the sigmoid function.  This will return the parameters p that minimize the
# least squares error of the sigmoid function with respect to the original x and y
# coordinate vectors that are sent to it.
p, cov, infodict, mesg, ier = scipy.optimize.leastsq( 
    residuals,p_guess,args=(x,y),full_output=1,warning=True)   

New_p, New_cov, New_infodict, New_mesg, New_ier = scipy.optimize.leastsq( 
    residuals,New_p_guess,args=(NewX,NewY),full_output=1,warning=True)   

# Define the optimal values for each element of p that were returned by the leastsq() function.
x0,y0,c,k=p 
print('''Reference data:\ 
x0 = {x0} 
y0 = {y0} 
c = {c} 
k = {k} 
'''.format(x0=x0,y0=y0,c=c,k=k)) 

New_x0,New_y0,New_c,New_k=New_p 
print('''New data:\ 
New_x0 = {New_x0} 
New_y0 = {New_y0} 
New_c = {New_c} 
New_k = {New_k} 
'''.format(New_x0=New_x0,New_y0=New_y0,New_c=New_c,New_k=New_k))

# Create a numpy array of x-values
xp = np.linspace(x.min(), x.max(), int(x.max()-x.min()))
New_xp = np.linspace(NewX.min(), NewX.max(), int(NewX.max()-NewX.min()))
# Return a vector pxp containing all the y values corresponding with the x-values in xp
pxp=sigmoid(p,xp)
New_pxp=sigmoid(New_p,New_xp)

# Plot the results 
plt.plot(x, y, '>', xp, pxp, 'g-')
plt.plot(NewX, NewY, '^',New_xp, New_pxp, 'r-')
plt.xlabel('x')
plt.ylabel('y',rotation='horizontal')
plt.grid(True)
plt.show()
Try this:

import numpy as np 
import matplotlib.pyplot as plt 
import scipy.optimize 

def GetMinRR(age):
    MaxHR = 208-(0.7*age)
    MinRR = (60/MaxHR)*1000
    return MinRR

def sigmoid(p,x):
    x0,y0,c,k=p 
    y = c / (1 + np.exp(-k*(x-x0))) + y0 
    return y 

def residuals(p,x,y): 
    return y - sigmoid(p,x) 

def resize(arr,lower=0.0,upper=1.0):
    # Create local copy
    result=arr.copy()
    # If the mins are greater than the maxs, then flip them.
    if lower>upper: lower,upper=upper,lower
    #----------------------------------------------------------------------------------------------    
    # The rest of the code below re-scales all the values in the input array with these steps:
    #       1.) Subtract the actual minimum of the input array from each value
    #       2.) Multiply each resulting value by (upper-lower) divided by the maximum
    #           of the shifted array (which equals arr.max()-arr.min())
    #       3.) Add the new lower bound to each value
    #----------------------------------------------------------------------------------------------    
    # Subtracts right operand from the left operand and assigns the result to the left operand.
    # Note: c -= a is equivalent to c = c - a
    result -= result.min()

    # Multiplies right operand with the left operand and assigns the result to the left operand.
    # Note: c *= a is equivalent to c = c * a
    result *= (upper-lower)/result.max()

    # Adds right operand to the left operand and assigns the result to the left operand.
    # Note: c += a is equivalent to c = c + a
    result += lower
    return result


# Declare raw data for use in creating logistic regression equation
x = np.array([821,576,473,377,326],dtype='float') 
y = np.array([255,235,208,166,157],dtype='float') 

# Call resize() function to re-calculate coordinates that will be used for equation
MinRR=GetMinRR(50)
MaxRR=1200
# x[-1] returns the last value in x
minLVET=(y[-1]/x[-1])*MinRR
maxLVET=(y[0]/x[0])*MaxRR

print(MinRR, MaxRR)
#x,y=resize(x,y,xmin=0.3, ymin=0.3) 
NewX=resize(x,lower=MinRR,upper=MaxRR)
NewY=resize(y,lower=minLVET,upper=maxLVET) 
print 'x is:  ',x 
print 'y is:  ',y
print 'NewX is:  ',NewX
print 'NewY is:  ',NewY

# p_guess is the starting estimate for the minimization
p_guess=(np.median(x),np.min(y),np.max(y),0.01) 
New_p_guess=(np.median(NewX),np.min(NewY),np.max(NewY),0.01) 

# Calls the leastsq() function, which calls the residuals function with an initial
# guess for the parameters and with the x and y vectors.  The full_output means that
# the function returns all optional outputs.  Note that the residuals function also
# calls the sigmoid function.  This will return the parameters p that minimize the
# least squares error of the sigmoid function with respect to the original x and y
# coordinate vectors that are sent to it.
p, cov, infodict, mesg, ier = scipy.optimize.leastsq( 
    residuals,p_guess,args=(x,y),full_output=1,warning=True)   

New_p, New_cov, New_infodict, New_mesg, New_ier = scipy.optimize.leastsq( 
    residuals,New_p_guess,args=(NewX,NewY),full_output=1,warning=True)   

# Define the optimal values for each element of p that were returned by the leastsq() function.
x0,y0,c,k=p 
print('''Reference data:\ 
x0 = {x0} 
y0 = {y0} 
c = {c} 
k = {k} 
'''.format(x0=x0,y0=y0,c=c,k=k)) 

New_x0,New_y0,New_c,New_k=New_p 
print('''New data:\ 
New_x0 = {New_x0} 
New_y0 = {New_y0} 
New_c = {New_c} 
New_k = {New_k} 
'''.format(New_x0=New_x0,New_y0=New_y0,New_c=New_c,New_k=New_k))

# Create a numpy array of x-values
xp = np.linspace(x.min(), x.max(), int(x.max()-x.min()))
New_xp = np.linspace(NewX.min(), NewX.max(), int(NewX.max()-NewX.min()))
# Return a vector pxp containing all the y values corresponding with the x-values in xp
pxp=sigmoid(p,xp)
New_pxp=sigmoid(New_p,New_xp)

# Plot the results 
plt.plot(x, y, '>', xp, pxp, 'g-')
plt.plot(NewX, NewY, '^',New_xp, New_pxp, 'r-')
plt.xlabel('x')
plt.ylabel('y',rotation='horizontal')
plt.grid(True)
plt.show()

Your other, related question is not closed. It looks like you registered twice, and stackoverflow will not let you edit the other question because it does not recognize this account as the same user.

The main thing I changed in the code above is New_p_guess.

Finding the right values for the initial guess is something of an art. If it could be done algorithmically, scipy would not be asking you to do it. A little analysis helps, as does a "feel" for your data. Knowing in advance roughly what the solution should look like, and therefore what values are reasonable within the context of the problem, also helps. (Which is all just a long way of saying I guessed my way to choosing k=0.01.)
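
If you want the guessing to be a bit more systematic, here is a minimal sketch of a data-driven starting guess. The helper name make_p_guess is made up for illustration; it just encodes the rules of thumb above (x0 near the middle of the x-range, y0 near the floor of the y-values, c near the spread of the y-values, and a k that is small relative to the x-range):

import numpy as np

def make_p_guess(x, y):
    # Hypothetical helper: rough, data-driven starting values for sigmoid(p,x).
    x0 = np.median(x)                  # put the transition near the middle of the data
    y0 = np.min(y)                     # lower plateau
    c = np.max(y) - np.min(y)          # height between the two plateaus
    k = 1.0/(np.max(x) - np.min(x))    # gentle initial steepness
    return (x0, y0, c, k)

With the resized data above, make_p_guess(NewX, NewY) starts leastsq in roughly the same region as the New_p_guess used in my code, which is usually enough for it to converge to a sensible fit.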

You've asked 20 questions under 10 different user names, and you have never accepted an answer or even thanked anyone. I suggest it is time to think about how to play nice. It is not just that I want to be thanked; it is also useful to know when a question has been answered and when it has not.

@tom10: The ten different user accounts are apparently unintentional. They all have the same name and avatar, and all are marked "unregistered user". I don't know how that happens, but it does not seem deliberate.

@Sven - Apparently you are right. Still, with all these questions and all these user accounts, it seems time for this user to learn how the system works.