
Python: TensorFlow compiled from source with skylake-avx512 is missing the __cpu_model symbol


I am compiling TensorFlow from source with skylake-avx512 as shown below. My Python was built like this:


git clone https://github.com/python/cpython.git && cd cpython && git checkout 2.7
CXX="/usr/bin/g++" CXXFLAGS="-O3 -mtune=skylake-avx512 -march=skylake-avx512" CFLAGS="-O3 -mtune=skylake-avx512 -march=skylake-avx512" ./configure  \
            --enable-optimizations  \
            --with-lto \
            --enable-unicode=ucs4  \
            --with-threads \
            --with-libs="-lbz2 -lreadline -lncurses -lhistory -lsqlite3 -lssl" \
            --enable-shared \
            --with-system-expat \
            --with-system-ffi   \
            --with-ensurepip=yes \
            --enable-unicode=ucs4 \
            --disable-ipv6
RUN cd /opt/cpython && make -j16
RUN cd /opt/cpython && make install
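
As a quick sanity check (not part of the original post), the interpreter produced by this build should report the flags back through sysconfig; a minimal sketch:

/usr/local/bin/python -c "import sysconfig; print(sysconfig.get_config_var('CFLAGS'))"
# expected to include: -O3 -mtune=skylake-avx512 -march=skylake-avx512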

TensorFlow build command:

bazel build   --copt=-O3  --copt=-mtune=skylake-avx512 --copt=-march=skylake-avx512        //tensorflow/tools/pip_package:build_pip_package
./bazel-bin/tensorflow/tools/pip_package/build_pip_package /mnt
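
Since the same optimization flags were also entered at the --config=opt prompt during configuration (see the transcript below), the build could presumably be driven through that config instead of repeating the --copt flags by hand; a sketch, assuming configure recorded them in .tf_configure.bazelrc:

bazel build --config=opt //tensorflow/tools/pip_package:build_pip_package
./bazel-bin/tensorflow/tools/pip_package/build_pip_package /mnt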

The only option I enabled is XLA JIT; everything else is set to "no". I am using the tensorflow v1.12.0-devel Docker image and replicating tag v1.12.3.

For completeness, the full configure session:

WARNING: --batch mode is deprecated. Please instead explicitly shut down your Bazel server using the command "bazel shutdown".
You have bazel 0.15.0 installed.
Please specify the location of python. [Default is /usr/local/bin/python]: 


Found possible Python library paths:
  /usr/local/lib/python2.7/site-packages
Please input the desired Python library path to use.  Default is [/usr/local/lib/python2.7/site-packages]

Do you wish to build TensorFlow with Apache Ignite support? [Y/n]: n
No Apache Ignite support will be enabled for TensorFlow.

Do you wish to build TensorFlow with XLA JIT support? [Y/n]: Y
XLA JIT support will be enabled for TensorFlow.

Do you wish to build TensorFlow with OpenCL SYCL support? [y/N]: n
No OpenCL SYCL support will be enabled for TensorFlow.

Do you wish to build TensorFlow with ROCm support? [y/N]: n
No ROCm support will be enabled for TensorFlow.

Do you wish to build TensorFlow with CUDA support? [y/N]: n
No CUDA support will be enabled for TensorFlow.

Do you wish to download a fresh release of clang? (Experimental) [y/N]: n
Clang will not be downloaded.

Do you wish to build TensorFlow with MPI support? [y/N]: n
No MPI support will be enabled for TensorFlow.

Please specify optimization flags to use during compilation when bazel option "--config=opt" is specified [Default is -march=native]: -O3 -mtune=skylake-avx512 -march=skylake-avx512


Would you like to interactively configure ./WORKSPACE for Android builds? [y/N]: n
Not configuring the WORKSPACE for Android builds.

Preconfigured Bazel build configs. You can use any of the below by adding "--config=<>" to your build command. See tools/bazel.rc for more details.
    --config=mkl            # Build with MKL support.
    --config=monolithic     # Config for mostly static monolithic build.
    --config=gdr            # Build with GDR support.
    --config=verbs          # Build with libverbs support.
    --config=ngraph         # Build with Intel nGraph support.
Configuration finished
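
These prompts can also be answered non-interactively, which helps keep a Docker build reproducible. This is only a sketch: the variable names below (PYTHON_BIN_PATH, TF_ENABLE_XLA, CC_OPT_FLAGS, and so on) are assumptions based on how TensorFlow's configure.py typically reads its answers from the environment, so verify them against the configure.py of the tag being built:

# assumed environment-variable names; check configure.py before relying on them
export PYTHON_BIN_PATH=/usr/local/bin/python
export PYTHON_LIB_PATH=/usr/local/lib/python2.7/site-packages
export TF_NEED_IGNITE=0 TF_NEED_OPENCL_SYCL=0 TF_NEED_ROCM=0 TF_NEED_CUDA=0
export TF_DOWNLOAD_CLANG=0 TF_NEED_MPI=0 TF_SET_ANDROID_WORKSPACE=0
export TF_ENABLE_XLA=1
export CC_OPT_FLAGS="-O3 -mtune=skylake-avx512 -march=skylake-avx512"
./configure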
I am replicating this with gcc-9, g++-9, and Ubuntu 16.04. I have worked through several issues to get this far, but I cannot figure out what I am missing here. Can anyone help me with this missing symbol?

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python2.7/site-packages/tensorflow/__init__.py", line 24, in <module>
    from tensorflow.python import pywrap_tensorflow  # pylint: disable=unused-import
  File "/usr/local/lib/python2.7/site-packages/tensorflow/python/__init__.py", line 49, in <module>
    from tensorflow.python import pywrap_tensorflow
  File "/usr/local/lib/python2.7/site-packages/tensorflow/python/pywrap_tensorflow.py", line 74, in <module>
    raise ImportError(msg)
ImportError: Traceback (most recent call last):
  File "/usr/local/lib/python2.7/site-packages/tensorflow/python/pywrap_tensorflow.py", line 58, in <module>
    from tensorflow.python.pywrap_tensorflow_internal import *
  File "/usr/local/lib/python2.7/site-packages/tensorflow/python/pywrap_tensorflow_internal.py", line 28, in <module>
    _pywrap_tensorflow_internal = swig_import_helper()
  File "/usr/local/lib/python2.7/site-packages/tensorflow/python/pywrap_tensorflow_internal.py", line 24, in swig_import_helper
    _mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description)
ImportError: /usr/local/lib/python2.7/site-packages/tensorflow/python/../libtensorflow_framework.so: undefined symbol: __cpu_model


Failed to load the native TensorFlow runtime.

See https://www.tensorflow.org/install/errors

for some common reasons and solutions.  Include the entire stack trace
above this error message when asking for help.
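
A minimal diagnostic sketch, not from the original post: __cpu_model is a libgcc symbol (it backs GCC's __builtin_cpu_supports and function-multiversioning machinery), so an unresolved reference to it usually points at a mismatch between the toolchain that built the wheel and the runtime environment that loads it. The path below is taken from the traceback above:

SO=/usr/local/lib/python2.7/site-packages/tensorflow/libtensorflow_framework.so
nm -D "$SO" | grep cpu_model   # 'U __cpu_model' means the library expects the symbol but nothing provides it
ldd "$SO"                      # see which runtime libraries actually get resolved
gcc --version                  # compare this between the build container and the install container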

I solved the problem.

This happened because I was building TensorFlow in one container, taking the wheel file, and installing TensorFlow in a different container.

Unless all of the libraries TensorFlow links against are built the same way (with the right symbols and matching symbol/library versions) in both the container that builds TensorFlow and the container that uses it, you will hit problems like this one. I had built Python, NumPy, pandas, and other libraries in a different container. Once I built those libraries from source inside the TensorFlow container as well, with the same tagged versions, the same compiler flags, and the same system packages installed, all of my problems went away and TensorFlow worked fine. A rough sketch of that setup follows below.
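
A rough sketch of that single-container setup, assuming the toolchain and flags from the question; the paths and package list are illustrative, not the author's exact recipe:

export CC=gcc-9 CXX=g++-9
export CFLAGS="-O3 -mtune=skylake-avx512 -march=skylake-avx512" CXXFLAGS="$CFLAGS"

# build and install the interpreter with the same flags (configure options as in the question)
cd /opt/cpython && make -j16 && make install

# rebuild the Python dependencies from source with the same toolchain
pip install --no-binary :all: numpy pandas

# build the TensorFlow wheel in the same container, then install it right there
bazel build --config=opt //tensorflow/tools/pip_package:build_pip_package
./bazel-bin/tensorflow/tools/pip_package/build_pip_package /mnt
pip install /mnt/tensorflow-*.whl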

Oddly enough, TensorFlow used to take more than 80 minutes to build; after compiling Python and the other pieces this way, the build now takes about 35 minutes. Sweet.