Installation¶
Building torch::deploy via Docker¶
The easiest way to build torch::deploy, along with fetching all of the interpreter dependencies, is to do so via Docker.
git clone https://github.com/pytorch/multipy.git
cd multipy
export DOCKER_BUILDKIT=1
docker build -t multipy .
The built artifacts are located in multipy/runtime/build.
To run the tests:
docker run --rm multipy multipy/runtime/build/test_deploy
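If the artifacts are needed on the host, one way to extract them is to mount a host directory and copy the build tree out of the container. This is a sketch, not an official step; the in-container relative path mirrors the one used by the test command above:
mkdir -p out
# copy the build tree from the image's working directory to the mounted host directory
docker run --rm -v "$PWD/out:/out" multipy cp -r multipy/runtime/build /out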
Installing via pip install¶
We support installing both the python modules and the C++ bits (through CMake) using a single pip install -e . command, with the caveat that the dependencies must be installed manually first.
First clone multipy and update the submodules:
git clone https://github.com/pytorch/multipy.git
cd multipy
git submodule sync && git submodule update --init --recursive
Installing system dependencies¶
The runtime system dependencies are specified in build-requirements-{debian,centos8}.txt.
To install them on Debian-based systems, one could run:
sudo apt update
xargs sudo apt install -y -qq --no-install-recommends < build-requirements-debian.txt
To install them on a CentOS 8 system:
xargs sudo dnf install -y < build-requirements-centos8.txt
Installing environment encapsulators¶
We recommend using the isolated python environments of either conda or pyenv + virtualenv, because torch::deploy requires a position-independent version of python to launch interpreters with. For conda environments we use the prebuilt libpython-static=3.x libraries from conda-forge to link with at build time. For virtualenv/pyenv, we compile python with the -fPIC flag to create the linkable library.
Warning
While torch::deploy supports Python versions 3.7 through 3.10, the libpython-static libraries used with conda environments are only available for 3.8 onwards. With virtualenv/pyenv any version from 3.7 through 3.10 can be used, as python can be built with the -fPIC flag explicitly.
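As a concrete sketch (the environment name and python version here are illustrative), a conda environment with the static libraries could be created like so:
conda create -n multipy -c conda-forge python=3.10 libpython-static=3.10
conda activate multipy
For the pyenv/virtualenv flow, python can be compiled with -fPIC before creating the virtual environment (paths assume a default pyenv installation):
# build a position-independent python, then create a virtualenv from it
export CFLAGS="-fPIC -g"
pyenv install --force 3.10.6
virtualenv -p ~/.pyenv/versions/3.10.6/bin/python3 ~/venvs/multipy
source ~/venvs/multipy/bin/activate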
Running pip install¶
Once all the dependencies are successfully installed, including a -fPIC-enabled build of python and the latest nightly of pytorch, we can run the following, in either conda or virtualenv, to install both the python modules and the runtime/interpreter libraries:
# from base torch::deploy directory
pip install -e .
# alternatively one could run
python setup.py develop
The C++ binaries should be available in /opt/dist.
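As an illustrative sanity check (not part of the official steps), you can verify that the python modules import from the active environment:
python -c "import multipy; print(multipy.__file__)"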
Alternatively, one can install only the python modules without invoking cmake as follows:
# from base multipy directory
pip install -e . --install-option="--cmakeoff"
Warning
As of 10/11/2022, linking the prebuilt static -fPIC versions of python downloaded from conda-forge can be problematic on certain systems (for example CentOS 8), with linker errors like libpython_multipy.a: error adding symbols: File format not recognized. This seems to be an issue with binutils, and these steps can help. Alternatively, the user can fall back to the virtualenv/pyenv flow above.
Running torch::deploy build steps from source¶
Both the docker and pip install options above are wrappers around the cmake build of torch::deploy. If the user wishes to run the build steps manually instead, as before, the dependencies must first be installed in the user's (isolated) environment of choice. After that, the following steps can be executed:
Building¶
# clone repo and submodules
git clone https://github.com/pytorch/multipy.git
cd multipy
git submodule sync && git submodule update --init --recursive
# install python parts of `torch::deploy` in multipy/multipy/utils
pip install -e . --install-option="--cmakeoff"
cd multipy/runtime
# build runtime
mkdir build
cd build
# use cmake -DABI_EQUALS_1=ON .. instead if you want ABI=1
cmake ..
cmake --build . --config Release
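After the build, the runtime can be embedded in a C++ application. The following is a minimal sketch based on the project's example programs; the header and API names follow torch::deploy's examples, while compiler and linker setup (include paths, linking against the built runtime and libtorch) is omitted and depends on your build:
#include <multipy/runtime/deploy.h>

int main() {
  // Start an interpreter manager governing 4 embedded python interpreters.
  torch::deploy::InterpreterManager manager(4);

  // Acquire a session on one interpreter and run some python on it:
  //   from builtins import print
  //   print("hello from torch::deploy")
  auto I = manager.acquireOne();
  I.global("builtins", "print")({"hello from torch::deploy"});
  return 0;
}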
Running unit tests for torch::deploy¶
We first need to generate the necessary examples, so make sure your python environment has torch installed. Then, once torch::deploy is built, run the following (this is executed automatically for the docker and pip flows above):
# from the parent directory of the multipy checkout
cd multipy/multipy/runtime
python example/generate_examples.py
cd build
./test_deploy
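test_deploy is a googletest binary, so the standard googletest flags can be used to inspect and narrow the run (the filter pattern below is a placeholder):
# list available tests, then run only those matching a pattern
./test_deploy --gtest_list_tests
./test_deploy --gtest_filter='<pattern>'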