C++ undefined symbol 'fixed_address_empty_string': new TensorFlow op using protobuf

Tags: c++, tensorflow, shared-libraries, protocol-buffers

I want to create a new op that can communicate with an external Python process. So far I have created a new op that sends "hello world" to a Python process via protobuf.

In this small example I only send a string. In the future I want to send more complex data, such as feature matrices, which is why I chose protobuf (and, possibly, for "easy integration into TensorFlow").

msg.proto:

package prototest;

message Foo {
  required string bar = 1;
}
#include "tensorflow/core/framework/op_kernel.h"
#include "tensorflow/core/framework/tensor_shape.h"
#include "tensorflow/core/platform/default/logging.h"
#include "tensorflow/core/framework/shape_inference.h"

// to send serialized data through UPD socket
#include <sys/socket.h>
#include <arpa/inet.h>

// generated header file from protoc
#include "msg.pb.h"

namespace tensorflow{
    namespace shape_inference{

        Status HelloWorldShape(InferenceContext* c){
            std::cout << "shape_infernce is done" << std::endl;
            return Status::OK();
        }
        REGISTER_OP("HelloWorld")
            .SetShapeFn(HelloWorldShape)
            .Doc(R"doc(HelloWorld operation)doc");
    } // end namespace shape_inference

    class HelloWorldOp : public OpKernel {
    public :
        // constructor
        explicit HelloWorldOp(OpKernelConstruction* context) : OpKernel(context) {
            std::cout << "HelloWorldOp constructor" << std::endl;
        }

        void Compute(OpKernelContext* context) override {
            std::cout << "Start Compute method" << std::endl;
            //-----------------------------------------------------------------
            // send something to a Python process with protobuf
            struct sockaddr_in addr;
            addr.sin_family = AF_INET;
            inet_aton("127.0.0.1", &addr.sin_addr);
            addr.sin_port = htons(5555);

            // initialise a foo and set some properties
            GOOGLE_PROTOBUF_VERIFY_VERSION;

            prototest::Foo foo;
            foo.set_bar("Hello World");

            // serialise to string, this one is obvious ; )
            std::string buf;
            foo.SerializeToString(&buf);

            int sock = socket(PF_INET, SOCK_DGRAM, 0);
            sendto(sock, buf.data(), buf.size(), 0, (struct sockaddr *)&addr, sizeof(addr));
            //------------------------------------------------------------------
            std::cout << "Compute method is done" << std::endl;
        }
    };
    REGISTER_KERNEL_BUILDER(Name("HelloWorld").Device(DEVICE_CPU), HelloWorldOp);
} // end namespace tensorflow
#!/usr/bin/env python3.5

# Demo from https://github.com/tensorflow/tensorflow/issues/10950

from __future__ import print_function
import os
import sys
import tensorflow as tf


my_dir = os.path.dirname(os.path.abspath(__file__))
so_filename = "lib_hello_world.so"
cc_filename = "hello_world.cc"


def compile():
    # Fix for undefined symbol: _ZN6google8protobuf8internal26fixed_address_empty_stringE.
    # https://github.com/tensorflow/tensorflow/issues/1419
    from google.protobuf.pyext import _message as msg
    lib = msg.__file__
    ld_flags = [
        "-Xlinker", "-rpath", "-Xlinker", os.path.dirname(lib),
        "-L", os.path.dirname(lib), "-l", ":" + os.path.basename(lib)]
    common_opts = ["-shared", "-O2", "-std=c++11"]
    if sys.platform == "darwin":
        common_opts += ["-undefined", "dynamic_lookup"]
    common_opts += ["-I", tf.sysconfig.get_include()]
    common_opts += ["-fPIC"]
    common_opts += ["-D_GLIBCXX_USE_CXX11_ABI=0"]  # might be obsolete in the future
    opts = common_opts + [cc_filename, "-o", so_filename]
    opts += ld_flags
    cmd_bin = "g++"
    cmd_args = [cmd_bin] + opts
    from subprocess import Popen, PIPE, STDOUT, CalledProcessError
    print("compile call: %s" % " ".join(cmd_args))
    proc = Popen(cmd_args, stdout=PIPE, stderr=STDOUT)
    stdout, stderr = proc.communicate()
    assert stderr is None  # should only have stdout
    if proc.returncode != 0:
      print("compile failed: %s" % cmd_bin)
      print("Original stdout/stderr:")
      print(stdout)
      raise CalledProcessError(returncode=proc.returncode, cmd=cmd_args)
    assert os.path.exists(so_filename)


def main():
    print("TensorFlow version:", tf.GIT_VERSION, tf.VERSION)
    os.chdir(my_dir)
    compile()
    mod = tf.load_op_library("%s/%s" % (my_dir, so_filename))


if __name__ == "__main__":
    main()

  • protoc msg.proto --cpp_out=. --python_out=.
  • generates:
    msg.pb.cc msg.pb.h msg_pb2.py

hello_world.cc:

#include "tensorflow/core/framework/op_kernel.h"
#include "tensorflow/core/framework/tensor_shape.h"
#include "tensorflow/core/platform/default/logging.h"
#include "tensorflow/core/framework/shape_inference.h"

// to send serialized data through a UDP socket
#include <sys/socket.h>
#include <arpa/inet.h>
#include <unistd.h>   // close()

#include <cstring>    // memset()
#include <iostream>   // std::cout

// generated header file from protoc
#include "msg.pb.h"

namespace tensorflow{
    namespace shape_inference{

        Status HelloWorldShape(InferenceContext* c){
            std::cout << "shape_infernce is done" << std::endl;
            return Status::OK();
        }
        REGISTER_OP("HelloWorld")
            .SetShapeFn(HelloWorldShape)
            .Doc(R"doc(HelloWorld operation)doc");
    } // end namespace shape_inference

    class HelloWorldOp : public OpKernel {
    public :
        // constructor
        explicit HelloWorldOp(OpKernelConstruction* context) : OpKernel(context) {
            std::cout << "HelloWorldOp constructor" << std::endl;
        }

        void Compute(OpKernelContext* context) override {
            std::cout << "Start Compute method" << std::endl;
            //-----------------------------------------------------------------
            // send something to a Python process with protobuf
            struct sockaddr_in addr;
            memset(&addr, 0, sizeof(addr));
            addr.sin_family = AF_INET;
            inet_aton("127.0.0.1", &addr.sin_addr);
            addr.sin_port = htons(5555);

            // initialise a foo and set some properties
            GOOGLE_PROTOBUF_VERIFY_VERSION;

            prototest::Foo foo;
            foo.set_bar("Hello World");

            // serialise to string, this one is obvious ; )
            std::string buf;
            foo.SerializeToString(&buf);

            int sock = socket(PF_INET, SOCK_DGRAM, 0);
            sendto(sock, buf.data(), buf.size(), 0, (struct sockaddr *)&addr, sizeof(addr));
            close(sock);
            //------------------------------------------------------------------
            std::cout << "Compute method is done" << std::endl;
        }
    };
    REGISTER_KERNEL_BUILDER(Name("HelloWorld").Device(DEVICE_CPU), HelloWorldOp);
} // end namespace tensorflow

compile_and_test.py:

#!/usr/bin/env python3.5

# Demo from https://github.com/tensorflow/tensorflow/issues/10950

from __future__ import print_function
import os
import sys
import tensorflow as tf


my_dir = os.path.dirname(os.path.abspath(__file__))
so_filename = "lib_hello_world.so"
cc_filename = "hello_world.cc"


def compile():
    # Fix for undefined symbol: _ZN6google8protobuf8internal26fixed_address_empty_stringE.
    # https://github.com/tensorflow/tensorflow/issues/1419
    from google.protobuf.pyext import _message as msg
    lib = msg.__file__
    ld_flags = [
        "-Xlinker", "-rpath", "-Xlinker", os.path.dirname(lib),
        "-L", os.path.dirname(lib), "-l", ":" + os.path.basename(lib)]
    common_opts = ["-shared", "-O2", "-std=c++11"]
    if sys.platform == "darwin":
        common_opts += ["-undefined", "dynamic_lookup"]
    common_opts += ["-I", tf.sysconfig.get_include()]
    common_opts += ["-fPIC"]
    common_opts += ["-D_GLIBCXX_USE_CXX11_ABI=0"]  # might be obsolete in the future
    opts = common_opts + [cc_filename, "-o", so_filename]
    opts += ld_flags
    cmd_bin = "g++"
    cmd_args = [cmd_bin] + opts
    from subprocess import Popen, PIPE, STDOUT, CalledProcessError
    print("compile call: %s" % " ".join(cmd_args))
    proc = Popen(cmd_args, stdout=PIPE, stderr=STDOUT)
    stdout, stderr = proc.communicate()
    assert stderr is None  # should only have stdout
    if proc.returncode != 0:
        print("compile failed: %s" % cmd_bin)
        print("Original stdout/stderr:")
        print(stdout.decode("utf8"))
        raise CalledProcessError(returncode=proc.returncode, cmd=cmd_args)
    assert os.path.exists(so_filename)


def main():
    print("TensorFlow version:", tf.GIT_VERSION, tf.VERSION)
    os.chdir(my_dir)
    compile()
    mod = tf.load_op_library("%s/%s" % (my_dir, so_filename))


if __name__ == "__main__":
    main()
This returns:

TensorFlow version: v1.2.0-rc2-21-g12f033d 1.2.0
compile call: g++ -shared -O2 -std=c++11 -I /usr/local/lib/python3.5/dist-packages/tensorflow/include -fPIC -D_GLIBCXX_USE_CXX11_ABI=0 hello_world.cc -o lib_hello_world.so -Xlinker -rpath -Xlinker /usr/local/lib/python3.5/dist-packages/protobuf-3.2.0-py3.5-linux-x86_64.egg/google/protobuf/pyext -L /usr/local/lib/python3.5/dist-packages/protobuf-3.2.0-py3.5-linux-x86_64.egg/google/protobuf/pyext -l :_message.cpython-35m-x86_64-linux-gnu.so
Traceback (most recent call last):
  File "./compile_and_test.py", line 55, in <module>
    main()
  File "./compile_and_test.py", line 51, in main
    mod = tf.load_op_library("%s/%s" % (my_dir, so_filename))
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/load_library.py", line 64, in load_op_library
    None, None, error_msg, error_code)
tensorflow.python.framework.errors_impl.NotFoundError: /src/ext_hello_world/lib_hello_world.so: undefined symbol: _ZN6google8protobuf8internal26fixed_address_empty_stringE
In the referenced demo, msg.__file__ returns:
/u/zeyer/.local/lib/python2.7/site-packages/google/protobuf/pyext/_message.so

In my case it is:
/usr/local/lib/python3.5/dist-packages/protobuf-3.2.0-py3.5-linux-x86_64.egg/google/protobuf/pyext/_message.cpython-35m-x86_64-linux-gnu.so

This protobuf extension module seems to be completely different.
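
One way to check whether a given _message extension module actually exports the missing symbol is to probe it with ctypes. The sketch below is only an illustration: it uses the egg path from above and the mangled symbol name from the error, and loading an extension module this way may not work in every environment.

# Sketch: probe a shared object for the missing protobuf symbol via ctypes.
# The path is the _message module from the protobuf egg mentioned above; adjust it to
# whatever msg.__file__ reports on your system.
import ctypes

so_path = ("/usr/local/lib/python3.5/dist-packages/protobuf-3.2.0-py3.5-linux-x86_64.egg/"
           "google/protobuf/pyext/_message.cpython-35m-x86_64-linux-gnu.so")
symbol = "_ZN6google8protobuf8internal26fixed_address_empty_stringE"

lib = ctypes.CDLL(so_path)
try:
    ctypes.c_void_p.in_dll(lib, symbol)  # raises ValueError if the symbol is absent
    print("symbol found in", so_path)
except ValueError:
    print("symbol NOT found in", so_path)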

Note on the protoc installation

protoc (the protobuf compiler) was not installed. I determined the protobuf version used by TensorFlow: v3.2.0.

After that I followed the protobuf installation instructions (C++ and Python implementation).
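
A rough way to double-check that these versions line up is sketched below. It assumes protoc is on the PATH and that the TensorFlow pip package ships protobuf headers under tf.sysconfig.get_include(); it falls back gracefully if that header is not there.

# Sketch: compare the protobuf version bundled with the TensorFlow headers against the
# locally installed protoc and the Python protobuf runtime.
import os
import re
import subprocess

import tensorflow as tf
from google.protobuf import __version__ as py_runtime_version

common_h = os.path.join(tf.sysconfig.get_include(),
                        "google", "protobuf", "stubs", "common.h")
bundled = "not found"
if os.path.exists(common_h):
    with open(common_h) as f:
        m = re.search(r"#define GOOGLE_PROTOBUF_VERSION (\d+)", f.read())
    if m:
        bundled = m.group(1)  # e.g. 3002000 corresponds to 3.2.0

print("protobuf version in TF headers:", bundled)
print("local protoc:", subprocess.check_output(["protoc", "--version"]).decode().strip())
print("Python protobuf runtime:", py_runtime_version)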

System information
  • Docker: Docker version 1.12.6, build 78d1802
  • image: tensorflow/tensorflow:1.2.0-devel-gpu-py3
  • based on: Ubuntu 16.04 (4.4.0-78-generic)
  • TensorFlow built from source
  • TensorFlow version: 1.2.0
  • Python version: Python 3.5.2
  • Bazel version:

    Build label: 0.4.5
    Build target: bazel-out/local-fastbuild/bin/src/main/java/com/google/devtools/build/lib/bazel/BazelServer_deploy.jar
    Build time: Thu Mar 16 12:19:38 2017 (1489666778)
    Build timestamp: 1489666778
    Build timestamp as int: 1489666778

  • gcc -v:

    Using built-in specs.
    COLLECT_GCC=gcc
    COLLECT_LTO_WRAPPER=/usr/lib/gcc/x86_64-linux-gnu/5/lto-wrapper
    Target: x86_64-linux-gnu
    Thread model: posix
    gcc version 5.4.0 20160609 (Ubuntu 5.4.0-6ubuntu1~16.04.4)


I don't think TensorFlow exports enough symbols for you to rely on it for the protocol buffer implementation. Instead, you can dynamically link against an installed system protocol buffer implementation. That may need to match the TensorFlow protocol buffer version to avoid symbol conflicts.

I tested with TensorFlow 1.3.0 (Python 3.4) and libprotoc 3.3.0. I added msg.pb.cc (generated by protoc msg.proto --cpp_out=. --python_out=.) to the compile command, to link in the C++ implementation of the message, and appended the output of pkg-config --cflags --libs protobuf to link against the system protocol buffer library. This procedure worked for me:

protoc msg.proto --cpp_out=. --python_out=.
g++ -shared -O2 -std=c++11 -I /usr/local/lib/python3.4/dist-packages/tensorflow/include -fPIC -D_GLIBCXX_USE_CXX11_ABI=0 hello_world.cc msg.pb.cc -o lib_hello_world.so $(pkg-config --cflags --libs protobuf)
python3 -c "import tensorflow as tf; tf.Session().run(tf.load_op_library('./lib_hello_world.so').hello_world())"

Since you are already serializing the protocol buffer, deserializing it in Python should be no problem; the minor protobuf versions do not need to match, because it is the wire format that is exchanged.
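
For reference, the receiving side could look roughly like the following minimal sketch (assumptions: the generated msg_pb2.py is importable, and the op sends to UDP port 5555 on 127.0.0.1 as in hello_world.cc):

# Sketch of the external Python process that receives the datagram sent by the op.
import socket

import msg_pb2  # generated by protoc from msg.proto

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("127.0.0.1", 5555))

data, sender = sock.recvfrom(4096)  # one datagram per Compute() call
foo = msg_pb2.Foo()
foo.ParseFromString(data)
print("received %r from %s" % (foo.bar, sender))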


It comes from linking against the DLL version of the library, libprotobuf[d].lib, without specifying -DPROTOBUF_USE_DLLS.

See "DLLs vs. static linking" in the protobuf README for more information.
