python – Predicting text labels with a multilayer perceptron model in TensorFlow

I am following a tutorial and am able to walk through the code, which trains a neural network and evaluates its accuracy.

But I don't know how to use the trained model on a new single input (a string) to predict its label.

Could you advise how this can be done?

Tutorial:

https://medium.freecodecamp.org/big-picture-machine-learning-classifying-text-with-neural-networks-and-tensorflow-d94036ac2274

Session code:

# Launch the graph
with tf.Session() as sess:
    sess.run(init)

    # Training cycle
    for epoch in range(training_epochs):
        avg_cost = 0.
        total_batch = int(len(newsgroups_train.data)/batch_size)
        # Loop over all batches
        for i in range(total_batch):
            batch_x,batch_y = get_batch(newsgroups_train,i,batch_size)
            # Run optimization op (backprop) and cost op (to get loss value)
            c, _ = sess.run([loss, optimizer], feed_dict={input_tensor: batch_x, output_tensor: batch_y})
            # Compute average loss
            avg_cost += c / total_batch
        # Display logs per epoch step
        if epoch % display_step == 0:
            print("Epoch:",'%04d' % (epoch+1),"loss=",\
                "{:.9f}".format(avg_cost))
    print("Optimization Finished!")

    # Test model
    correct_prediction = tf.equal(tf.argmax(prediction,1),tf.argmax(output_tensor,1))
    # Calculate accuracy
    accuracy = tf.reduce_mean(tf.cast(correct_prediction,"float"))
    total_test_data = len(newsgroups_test.target)
    batch_x_test, batch_y_test = get_batch(newsgroups_test, 0, total_test_data)
    print("Accuracy:",accuracy.eval({input_tensor: batch_x_test,output_tensor: batch_y_test}))

I have some experience with Python, but essentially none with TensorFlow.

Solution

First, we need to convert the text to an array:

def text_to_vector(text):
    layer = np.zeros(total_words,dtype=float)
    for word in text.split(' '):
        layer[word2index[word.lower()]] += 1

    return layer

# Convert text to vector so we can send it to our model
vector_txt = text_to_vector(text)
# Wrap vector like we do in get_batches()
input_array = np.array([vector_txt])
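
Here word2index and total_words come from the vocabulary built over the dataset earlier in the tutorial. As a rough sketch (an assumption; the tutorial's exact code may differ), they could be built like this:

from collections import Counter

# Count every word appearing in the training and test texts
vocab = Counter()
for text in newsgroups_train.data + newsgroups_test.data:
    for word in text.split(' '):
        vocab[word.lower()] += 1

total_words = len(vocab)
# Assign each distinct word a fixed position in the input vector
word2index = {word: i for i, word in enumerate(vocab)}

Note that text_to_vector will raise a KeyError for any word that was not seen when the vocabulary was built, so unseen words would need to be skipped or handled separately.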

We can save and load the model for reuse. We first create a Saver object and then save the session (after training the model):

saver = tf.train.Saver()
... train the model ...
save_path = saver.save(sess,"/tmp/model.ckpt")
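
As a rough placement sketch (assuming the graph and the init op from the session code above are already defined), the save call goes at the end of the training session:

saver = tf.train.Saver()

with tf.Session() as sess:
    sess.run(init)
    # ... the training cycle from the session code above ...
    print("Optimization Finished!")
    # Write all variables to a checkpoint; saver.save returns the path it wrote
    save_path = saver.save(sess, "/tmp/model.ckpt")
    print("Model saved in file: %s" % save_path)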

In the example model, the last "step" in the model architecture (i.e. the last thing done inside the multilayer_perceptron method) is:

'out': tf.Variable(tf.random_normal([n_classes]))
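
For context, the prediction tensor used below is the output of that method: one unnormalized score (logit) per class. A rough sketch of the method, based on the tutorial (the names and the number of hidden layers are assumptions), looks like this:

def multilayer_perceptron(input_tensor, weights, biases):
    # Two hidden ReLU layers followed by a linear output layer
    layer_1 = tf.nn.relu(tf.add(tf.matmul(input_tensor, weights['h1']), biases['b1']))
    layer_2 = tf.nn.relu(tf.add(tf.matmul(layer_1, weights['h2']), biases['b2']))
    # One score per class; the output shape is [batch_size, n_classes]
    out_layer = tf.matmul(layer_2, weights['out']) + biases['out']
    return out_layer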

So, to get a prediction, we take the index of the maximum value of that array (the predicted class):

saver = tf.train.Saver()

with tf.Session() as sess:
    saver.restore(sess,"/tmp/model.ckpt")
    print("Model restored.")

    classification = sess.run(tf.argmax(prediction, 1), feed_dict={input_tensor: input_array})
    print("Predicted category:",classification)

You can check the whole code here: https://github.com/dmesquita/understanding_tensorflow_nn

