
Python Tensorflow LinearClassifier() always guesses the negative class


I'm currently implementing logistic regression with tensorflow, following their "wide" tutorial:
My code matches the tutorial very closely, but when I run predict() on the model it guesses the negative class every time, which makes up roughly 77% of the data. How can I get my model to make some positive guesses? I haven't normalized, so the variance should be at its maximum. The docs report 84% accuracy, and I'm using exactly the same dataset. What is going wrong? Here is the training code:

import re
import tempfile

import pandas as pd
import tensorflow as tf


def train_logistic_model(training_path, response, predictors, num_labels):

    # Get csv
    df_train = pd.read_csv(training_path, header=0)

    # Sanitize column names
    unsanitized_column_names = df_train.columns.values
    column_names = []
    for col in unsanitized_column_names:
        column_names.append(re.sub('[^A-Za-z0-9]+', '', col))

    # Update dataframe with sanitized column names
    df_train = pd.read_csv(training_path, names=column_names, skiprows=1)

    # Slice off the last 10% of the training data to test with
    split_index = int(len(df_train.index) * .9)
    df_test = df_train.loc[split_index:]
    df_train = df_train.loc[:split_index]

    response_name = column_names[response]

    LABEL_COLUMN = "label"
    df_train[LABEL_COLUMN] = (df_train[response_name].apply(lambda x: ">50K" in x)).astype(int)
    df_test[LABEL_COLUMN] = (df_test[response_name].apply(lambda x: ">50K" in x)).astype(int)

    del df_train[response_name]
    del df_test[response_name]

    # remove NaN elements
    df_train = df_train.dropna(how='any', axis=0)
    df_test = df_test.dropna(how='any', axis=0)

    CATEGORICAL_COLUMNS = []
    CONTINUOUS_COLUMNS = []
    for key, value in predictors.items():
        if value == 'Categorical':
            CATEGORICAL_COLUMNS.append(column_names[key])
        elif value == 'Continuous':
            CONTINUOUS_COLUMNS.append(column_names[key])

    # Input builder function
    def input_fn(df):
        continuous_cols = {k: tf.constant(df[k].values) for k in CONTINUOUS_COLUMNS}

        categorical_cols = {
            k: tf.SparseTensor(
                indices=[[i, 0] for i in range(df[k].size)],
                values=df[k].values,
                dense_shape=[df[k].size, 1]) for k in CATEGORICAL_COLUMNS
            }

        # Merges the two dictionaries into one.
        feature_cols = {**continuous_cols, **categorical_cols}

        label = tf.constant(df[LABEL_COLUMN].values)

        return feature_cols, label

    def train_input_fn():
        return input_fn(df_train)

    def eval_input_fn_test():
        return input_fn(df_test)

    # Needed by the evaluation call on the training data below
    def eval_input_fn_train():
        return input_fn(df_train)

    cat_tensors = []
    for col in CATEGORICAL_COLUMNS:
        cat_tensors.append(tf.contrib.layers.sparse_column_with_hash_bucket(
            column_name=col, hash_bucket_size=100))

    cont_tensors = []
    for cont in CONTINUOUS_COLUMNS:
        cont_tensors.append(tf.contrib.layers.real_valued_column(cont))

    feature_columns = cat_tensors + cont_tensors

    model_dir = tempfile.mkdtemp()

    logistic_model = tf.contrib.learn.LinearClassifier(
        feature_columns=feature_columns, n_classes=num_labels, model_dir=model_dir)

    logistic_model.fit(input_fn=train_input_fn, steps=200)

    # Test the model on the reserved test data
    eval_result_test = logistic_model.evaluate(input_fn=eval_input_fn_test, steps=1)

    # Test the model on training data
    eval_result_train = logistic_model.evaluate(input_fn=eval_input_fn_train, steps=1)

    for key in sorted(eval_result_train):
        print("%s: %s" % (key, eval_result_train[key]))

    return eval_result_test, model_dir
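
For context, the function above would be invoked along these lines. The path, column indices, and column types here are illustrative assumptions based on the Census Income data that the "wide" tutorial uses, not values from the original post:

# Hypothetical invocation; assumes a CSV export of the Census Income data
# that includes a header row.
eval_result, model_dir = train_logistic_model(
    training_path="adult.data.csv",   # assumed path
    response=14,                      # index of the income (">50K"/"<=50K") column
    predictors={
        0: 'Continuous',    # age
        1: 'Categorical',   # workclass
        3: 'Categorical',   # education
        12: 'Continuous',   # hours-per-week
    },
    num_labels=2)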

I think you need to add crossed columns to make the linear model work better.
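
If you want to try that with the same contrib.layers API used above, tf.contrib.layers.crossed_column builds an interaction feature out of two (or more) sparse columns. A minimal sketch follows; the column names "education" and "occupation" are placeholders rather than columns from your code, so substitute whichever categorical predictors you expect to interact:

# Placeholder column names; replace them with categorical columns from your data.
education = tf.contrib.layers.sparse_column_with_hash_bucket(
    column_name="education", hash_bucket_size=100)
occupation = tf.contrib.layers.sparse_column_with_hash_bucket(
    column_name="occupation", hash_bucket_size=100)

# A crossed column gives the linear model one weight per (education, occupation)
# combination, so it can learn interactions the individual columns cannot express.
education_x_occupation = tf.contrib.layers.crossed_column(
    [education, occupation], hash_bucket_size=int(1e4))

# Append the crossed column to the existing feature columns before building
# the LinearClassifier.
feature_columns = cat_tensors + cont_tensors + [education_x_occupation]

The "wide" tutorial itself uses crossed columns like this (for example education × occupation), so adding them brings your setup closer to the one behind the documented accuracy.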