
Python Keras NN: guessing domain name rank (loss = nan)


I'm new to neural networks, and I'm trying to create a model that guesses the rank/value of a domain name. I have a list of domain names and their ranks (from 10 down to 4.9).

First, I added some features, such as the number of vowels, etc.

However, when the model trains, after the first epoch it shows loss: nan and accuracy: 0.000. I'm not sure where the problem is and would appreciate any suggestions. I suspect part of the problem is that my output is not binary.

from keras.layers import Dense
from keras.models import Sequential
import pandas as pd
import sklearn.preprocessing  # "import sklearn" alone does not expose the preprocessing submodule
from sklearn.model_selection import train_test_split
import tld

TOP_DOMAINS_PATH = 'domainrank.csv'
domains_df = pd.read_csv(TOP_DOMAINS_PATH, nrows=100000)

# add features
domains_df['tld'] = domains_df['Domain'].apply(lambda x: tld.get_tld(x, fix_protocol=True,fail_silently=True))
domains_df['sld'] = domains_df['Domain'].apply(lambda x: getattr(tld.get_tld(x, fix_protocol=True, as_object=True,fail_silently=True),'domain',None))
domains_df['dots'] = domains_df['sld'].str.count(r'\.')
domains_df['vowels_count'] = domains_df['sld'].str.count('[aeiouy]')
domains_df['cons_count'] = domains_df['sld'].str.lower().str.count(r'[a-z]') - domains_df['vowels_count']
domains_df['length'] = domains_df['sld'].str.len()
domains_df['rank_normalized'] = domains_df['Open Page Rank'].apply(lambda x: x/10)

# remove not used columns
domains_df.pop('Open Page Rank')
domains_df.pop('Domain')
domains_df.pop('tld')
domains_df.pop('sld')

dataset = domains_df.values
X = dataset[:, 0:len(domains_df.columns) - 1]
Y = dataset[:, len(domains_df.columns) - 1]

min_max_scaler = sklearn.preprocessing.MinMaxScaler()
X_scale = min_max_scaler.fit_transform(X)
X_train, X_val_and_test, Y_train, Y_val_and_test = train_test_split(X_scale, Y, test_size=0.3)
X_val, X_test, Y_val, Y_test = train_test_split(X_val_and_test, Y_val_and_test, test_size=0.5)

model = Sequential([
    Dense(32, activation='relu', input_shape=(4,)),  # 4 features: dots, vowels_count, cons_count, length
    Dense(32, activation='relu'),
    Dense(1, activation='sigmoid'),
])

model.compile(optimizer='sgd',
              loss='binary_crossentropy',
              metrics=['accuracy'])

hist = model.fit(X_train, Y_train,
          batch_size=32, epochs=100,
          validation_data=(X_val, Y_val))
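One thing worth checking before training: tld.get_tld(..., fail_silently=True) returns None for domains it cannot parse, so the derived string features can end up as NaN, and a single NaN row is enough to drive the loss to nan. A minimal sketch of the check; the mini-frame below is hypothetical and only stands in for domains_df (the real data comes from domainrank.csv):

```python
import numpy as np
import pandas as pd

# Hypothetical mini-frame standing in for domains_df; a None result from
# tld.get_tld(fail_silently=True) propagates as NaN through the features.
df = pd.DataFrame({
    "dots": [0.0, 0.0, np.nan],
    "vowels_count": [1.0, np.nan, 2.0],
    "cons_count": [4.0, 4.0, 5.0],
    "length": [5.0, 8.0, 7.0],
    "rank_normalized": [1.00, 0.98, 0.49],
})

print(df.isna().sum())   # how many NaNs per column
clean = df.dropna()      # drop rows containing any NaN before training
print(len(clean))        # rows remaining
```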
After these modifications, domains_df looks like this:

       dots  vowels_count  cons_count  length  rank_normalized
0       0.0           1.0         4.0     5.0             1.00
1       0.0           4.0         4.0     8.0             1.00
2       0.0           2.0         5.0     7.0             1.00
3       0.0           5.0         2.0     7.0             1.00
4       0.0           3.0         3.0     6.0             1.00
     ...           ...         ...     ...              ...
99995   0.0           2.0         2.0     4.0             0.49
99996   0.0           6.0        10.0    18.0             0.49
99997   0.0           4.0         4.0     8.0             0.49
99998   0.0           6.0        10.0    16.0             0.49
99999   0.0           3.0         7.0    10.0             0.49
Training output:

Epoch 1/100
   32/70000 [..............................] - ETA: 2:05 - loss: 0.6929 - accuracy: 0.0000e+00
 3264/70000 [>.............................] - ETA: 2s - loss: 0.6911 - accuracy: 3.0637e-04  
 6624/70000 [=>............................] - ETA: 1s - loss: 0.6903 - accuracy: 3.0193e-04
10080/70000 [===>..........................] - ETA: 1s - loss: 0.6899 - accuracy: 1.9841e-04
13472/70000 [====>.........................] - ETA: 1s - loss: 0.6896 - accuracy: 1.4846e-04
16832/70000 [======>.......................] - ETA: 0s - loss: 0.6895 - accuracy: 1.1882e-04
20384/70000 [=======>......................] - ETA: 0s - loss: 0.6894 - accuracy: 9.8116e-05
23808/70000 [=========>....................] - ETA: 0s - loss: 0.6892 - accuracy: 8.4005e-05
27328/70000 [==========>...................] - ETA: 0s - loss: 0.6892 - accuracy: 1.0978e-04
30784/70000 [============>.................] - ETA: 0s - loss: 0.6891 - accuracy: 9.7453e-05
34144/70000 [=============>................] - ETA: 0s - loss: 0.6891 - accuracy: 1.1715e-04
37536/70000 [===============>..............] - ETA: 0s - loss: 0.6890 - accuracy: 1.0656e-04
40992/70000 [================>.............] - ETA: 0s - loss: nan - accuracy: 9.7580e-05   
44480/70000 [==================>...........] - ETA: 0s - loss: nan - accuracy: 8.9928e-05
47968/70000 [===================>..........] - ETA: 0s - loss: nan - accuracy: 8.3389e-05
51296/70000 [====================>.........] - ETA: 0s - loss: nan - accuracy: 7.7979e-05
54688/70000 [======================>.......] - ETA: 0s - loss: nan - accuracy: 7.3142e-05
58112/70000 [=======================>......] - ETA: 0s - loss: nan - accuracy: 6.8833e-05
61440/70000 [=========================>....] - ETA: 0s - loss: nan - accuracy: 6.5104e-05
64832/70000 [==========================>...] - ETA: 0s - loss: nan - accuracy: 6.1698e-05
68288/70000 [============================>.] - ETA: 0s - loss: nan - accuracy: 5.8575e-05
70000/70000 [==============================] - 1s 18us/step - loss: nan - accuracy: 5.7143e-05 - val_loss: nan - val_accuracy: 0.0000e+00
Epoch 2/100
   32/70000 [..............................] - ETA: 3s - loss: nan - accuracy: 0.0000e+00
  ... (every remaining batch reports loss: nan - accuracy: 0.0000e+00) ...
70000/70000 [==============================] - 1s 17us/step - loss: nan - accuracy: 0.0000e+00 - val_loss: nan - val_accuracy: 0.0000e+00
Epoch 3/100

Comments:

Daniel Möller: What are the shape and range of Y? You used 'sigmoid', which means the model expects Y in the range 0 to 1. For 'binary_crossentropy' to return nan, something strange must be happening, for example: 1) a non-numeric value was received; 2) an overflow or division by zero (values may be going out of the expected range); 3) something went wrong with the model's weights due to bad initialization, a bad optimizer, a bad custom loss, etc.

OP: @DanielMöller It's the last column in domains_df. It's a list of floats from 4.9 to 10, so I divided it by 10, giving 0.49 to 1.0.

OP: @DanielMöller I used df.dropna() and it no longer returns nan. But the loss stays around 0.68, which I don't think is good.

Daniel Möller: If your values are not 0 and 1, you won't get good results with 'binary_crossentropy'. And if your problem isn't classification, accuracy is meaningless as well. You could try 'mse' or 'mae' as the loss; with those you can even remove the sigmoid. You may also want to add BatchNormalization layers before each 'relu' and before the 'sigmoid'; your model may be saturating because you are feeding it non-normalized values.
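Putting the comments together, a hedged sketch of the regression variant: 'mse' as the loss, a linear output instead of sigmoid, BatchNormalization before each activation, and 'mae' in place of accuracy. Layer sizes follow the original post; the random demo data is only there to show the shapes, so this is an illustration of the suggestions, not a verified fix:

```python
import numpy as np
from keras.layers import Activation, BatchNormalization, Dense
from keras.models import Sequential

model = Sequential([
    Dense(32, input_shape=(4,)),  # 4 features: dots, vowels_count, cons_count, length
    BatchNormalization(),         # normalize pre-activations, per the comments
    Activation('relu'),
    Dense(32),
    BatchNormalization(),
    Activation('relu'),
    Dense(1),                     # linear output: rank_normalized is a real value in [0.49, 1.0]
])

# mse/mae suit a regression target; accuracy would be meaningless here
model.compile(optimizer='sgd', loss='mse', metrics=['mae'])

# Random stand-in for the scaled features, shapes only
X_demo = np.random.rand(8, 4).astype('float32')
Y_demo = np.random.uniform(0.49, 1.0, size=(8,)).astype('float32')
model.fit(X_demo, Y_demo, epochs=1, verbose=0)
preds = model.predict(X_demo, verbose=0)
```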