Passing Python Objects from one Interpreter to Another

Here we use torch::deploy to create a ReplicatedObj, which lets us pass a Python object (an Obj) from one interpreter to another.

Moving Python Objects Between Interpreters

// Basic example of using `ReplicatedObj` in `torch::deploy`.
#include <multipy/runtime/deploy.h>
#include <multipy/runtime/path_environment.h>
#include <torch/script.h>
#include <torch/torch.h>

#include <iostream>
#include <memory>

int main(int argc, const char* argv[]) {
  torch::deploy::InterpreterManager m(4);

  try {
    // Construct a torch.nn.Conv2d model in the first interpreter.
    auto I = m.acquireOne();
    auto model_obj = I.global("torch.nn", "Conv2d")({6, 2, 2, 1});

    // Convert the interpreter-specific Obj into a ReplicatedObj ...
    auto rObj = m.createMovable(model_obj, &I);

    // ... and reconstruct it in a second interpreter.
    auto I2 = m.acquireOne();
    auto model_obj2 = I2.fromMovable(rObj);
    rObj.unload(); // free the replicated object

    // Create a vector of inputs.
    std::vector<torch::jit::IValue> inputs;
    inputs.push_back(torch::ones({1, 6, 6, 6}));

    // Execute the model and turn its output into a tensor.
    at::Tensor output = model_obj2(inputs).toIValue().toTensor();
    std::cout << output.slice(/*dim=*/1, /*start=*/0, /*end=*/5) << '\n';

  } catch (const c10::Error& e) {
    std::cerr << "error creating movable\n";
    std::cerr << e.msg();
    return -1;
  }

  std::cout << "ok\n";
}

Here we highlight torch::deploy::InterpreterManager::createMovable(Obj, InterpreterSession*) and InterpreterSession::fromMovable(const ReplicatedObj&). These functions convert between ``Obj``s, which are specific to a single interpreter, and ``ReplicatedObj``s, which can be shared across multiple interpreters.
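The round trip works for arbitrary Python objects, not just models. The following sketch moves a plain ``builtins.range`` object between two interpreters; the choice of object is an illustrative assumption, and the program requires the multipy runtime to be linked in, so it is not compiled as part of this tutorial:

```cpp
// Sketch: the createMovable/fromMovable round trip with a non-model
// Python object. Assumes the multipy runtime is available and linked.
#include <multipy/runtime/deploy.h>

int main() {
  torch::deploy::InterpreterManager m(2);

  auto I = m.acquireOne();
  // Build a Python object (range(10)) inside the first interpreter.
  auto obj = I.global("builtins", "range")({10});

  // Obj -> ReplicatedObj: now shareable across interpreters.
  auto rObj = m.createMovable(obj, &I);

  // ReplicatedObj -> Obj: materialize it in a second interpreter.
  auto I2 = m.acquireOne();
  auto obj2 = I2.fromMovable(rObj);

  rObj.unload(); // release the replicated copy once it is no longer needed
}
```

Note that once ``unload()`` is called, the ReplicatedObj can no longer be materialized in further interpreters.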

Build and execute

Assuming the above C++ program is saved as movable_example/movable_example.cpp, a minimal CMakeLists.txt file would look like:

cmake_minimum_required(VERSION 3.12 FATAL_ERROR)
project(multipy_tutorial)

set(MULTIPY_PATH ".." CACHE PATH "The repo where multipy is built or the PYTHONPATH")

# include the multipy utils to help link against
include(${MULTIPY_PATH}/multipy/runtime/utils.cmake)

# add headers from multipy
include_directories(${MULTIPY_PATH})

# link the multipy prebuilt binary
add_library(multipy_internal STATIC IMPORTED)
set_target_properties(multipy_internal
    PROPERTIES
    IMPORTED_LOCATION
    ${MULTIPY_PATH}/multipy/runtime/build/libtorch_deploy.a)
caffe2_interface_library(multipy_internal multipy)

# build our examples
add_executable(movable_example movable_example/movable_example.cpp)
target_link_libraries(movable_example PUBLIC "-Wl,--no-as-needed -rdynamic" dl pthread util multipy c10 torch_cpu)

From here we can build and run the example:

cmake -S . -B build -DMULTIPY_PATH="<Path to Multipy Library>" -DPython3_EXECUTABLE="$(which python3)" && \
cmake --build build --config Release -j
./build/movable_example
