How to request a TensorFlow Serving wide & deep model with a Java client / how to load a wide & deep model and predict in Java?
I have successfully written a Python client that requests a wide & deep model from TensorFlow Serving, but I am not sure how to do the same in Java, because examples and documentation are scarce. In Python it works because I can pass a features dict to tell TensorFlow how to handle the features, like this:
example = tf.train.Example(features=tf.train.Features(feature=feature_dict))
serialized = example.SerializeToString()
request.inputs['inputs'].CopyFrom(
    tf.contrib.util.make_tensor_proto(serialized, shape=[1]))
result_future = stub.Predict.future(request, 1.0)
With Java, however, I don't know how to pass the feature dict to tell TensorFlow how to handle the features. I wrote a Java client, but because I do not pass the feature map, I hit the following error:
Nov 09, 2017 7:18:09 AM com.bj58.gul.model.entity.TestWideAndDeepModelClient predict
WARNING: RPC failed: Status{code=INVALID_ARGUMENT, description=Name: <unknown>, Feature: getGBDTDiffTimeBetweenItemShowTimeAndCreatedTime (data type: float) is required but could not be found.
[[Node: ParseExample/ParseExample = ParseExample[Ndense=15, Nsparse=66, Tdense=[DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT], _output_shapes=[[?,2], [?,2], [?,2], [?,2], [?,2], [?,2], [?,2], [?,2], [?,2], [?,2], [?,2], [?,2], [?,2], [?,2], [?,2], [?,2], [?,2], [?,2], [?,2], [?,2], [?,2], [?,2], [?,2], [?,2], [?,2], [?,2], [?,2], [?,2], [?,2], [?,2], [?,2], [?,2], [?,2], [?,2], [?,2], [?,2], [?,2], [?,2], [?,2], [?,2], [?,2], [?,2], [?,2], [?,2], [?,2], [?,2], [?,2], [?,2], [?,2], [?,2], [?,2], [?,2], [?,2], [?,2], [?,2], [?,2], [?,2], [?,2], [?,2], [?,2], [?,2], [?,2], [?,2], [?,2], [?,2], [?,2], [?], [?], [?], [?], [?], [?], [?], [?], [?], [?], [?], [?], [?], [?], [?], [?], [?], [?], [?], [?], [?], [?], [?], [?], [?], [?], [?], [?], [?], [?], [?], [?], [?], [?], [?], [?], [?]...TRUNCATED, cause=null}
Nov 09, 2017 7:18:09 AM io.grpc.internal.ManagedChannelImpl maybeTerminateChannel
INFO: [ManagedChannelImpl@3cb5cdba] Terminated
End of predict client
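The INVALID_ARGUMENT status above means the graph's ParseExample op requires a float feature named getGBDTDiffTimeBetweenItemShowTimeAndCreatedTime, but that key was absent from the serialized Example the client sent. A cheap way to catch this before the RPC is to diff the feature map against the names the signature requires. A minimal, self-contained sketch of that check in plain Java (the class name FeatureCheck and the sample feature names are illustrative; plain Objects stand in for org.tensorflow.example.Feature values):

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class FeatureCheck {

    // Return the required feature names that are missing from the provided map.
    // Plain Objects stand in for org.tensorflow.example.Feature values here.
    public static Set<String> missingFeatures(Set<String> required, Map<String, Object> provided) {
        Set<String> missing = new HashSet<>(required);
        missing.removeAll(provided.keySet());
        return missing;
    }

    public static void main(String[] args) {
        Set<String> required = new HashSet<>();
        required.add("age");
        required.add("getGBDTDiffTimeBetweenItemShowTimeAndCreatedTime");

        Map<String, Object> provided = new HashMap<>();
        provided.put("age", 42L); // the float feature was never put into the map

        // prints [getGBDTDiffTimeBetweenItemShowTimeAndCreatedTime]
        System.out.println(missingFeatures(required, provided));
    }
}
```

Running this check before serializing the Example turns the opaque server-side ParseExample error into a client-side message naming exactly which features are missing.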
Here is a pseudo-code snippet of how I managed to make the wide & deep model client (Java) work.
Precondition: you have already put the exported model on Serving.
Thanks a lot, I solved my problem in the same way.
import com.google.common.collect.ImmutableMap;
import com.google.protobuf.ByteString;
import com.google.protobuf.Int64Value;
import org.tensorflow.example.*;
import org.tensorflow.framework.DataType;
import org.tensorflow.framework.TensorProto;
import org.tensorflow.framework.TensorShapeProto;
import org.tensorflow.framework.TensorShapeProto.Dim;
import tensorflow.serving.Model.ModelSpec;
import tensorflow.serving.Predict.PredictRequest;
import tensorflow.serving.Predict.PredictResponse;
import tensorflow.serving.PredictionServiceGrpc.PredictionServiceBlockingStub;
......(class and function declarations are omitted here; only the core part is shown below)
// ForceDeadlineChannel is a custom channel wrapper that applies a 5000 ms deadline to each call
private static final PredictionServiceBlockingStub stub = PredictionServiceGrpc.newBlockingStub(new ForceDeadlineChannel(TF_SERVICE_HOST, TF_SERVICE_PORT, 5000));
private HashMap<String, Feature> inputFeatureMap = new HashMap<>();
private ByteString inputStr;
Integer modelVer = 123;
......
for (each feature in feature list) {
    Feature feature = null;
    if (type is string) {
        feature = Feature.newBuilder().setBytesList(BytesList.newBuilder().addValue(ByteString.copyFromUtf8("dummy string"))).build();
    } else if (type is float) {
        feature = Feature.newBuilder().setFloatList(FloatList.newBuilder().addValue(3.1415f)).build();
    } else if (type is int) {
        feature = Feature.newBuilder().setInt64List(Int64List.newBuilder().addValue(1L)).build();
    }
    if (feature != null) {
        inputFeatureMap.put(name, feature);
    }
}
// build the Features map and serialize the Example once, after all features are collected
Features features = Features.newBuilder().putAllFeature(inputFeatureMap).build();
inputStr = Example.newBuilder().setFeatures(features).build().toByteString();
TensorProto proto = TensorProto.newBuilder()
        .addStringVal(inputStr)
        .setTensorShape(TensorShapeProto.newBuilder().addDim(TensorShapeProto.Dim.newBuilder().setSize(1).build()).build())
        .setDtype(DataType.DT_STRING)
        .build();
PredictRequest req = PredictRequest.newBuilder()
        .setModelSpec(ModelSpec.newBuilder()
                .setName("your serving model name")
                .setSignatureName("serving_default")
                .setVersion(Int64Value.newBuilder().setValue(modelVer)))
        .putAllInputs(ImmutableMap.of("inputs", proto))
        .build();
PredictResponse response = stub.predict(req);
System.out.println(response.getOutputsMap());
......
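To read an actual prediction out of the printed outputs map, look up the output tensor by its signature key; the key depends on how the model was exported (a classification signature often uses "scores", but check your own signature with the saved_model_cli tool). A hedged pseudo-code sketch in the same style as above:

```
// pseudo-code: the output key ("scores" here) depends on your exported signature
TensorProto scores = response.getOutputsMap().get("scores");
for (float v : scores.getFloatValList()) {
    // each v is one prediction score for the batched Example
}
```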