Quickstart with a HelloWorld Example

    HelloWorld is a simple image classification application that demonstrates how to use the PyTorch Android API. This application runs a TorchScript-serialized TorchVision pretrained ResNet18 model on a static image, which is packaged inside the app as an Android asset.

    1. Model Preparation

    Let’s start with model preparation. If you are familiar with PyTorch, you probably already know how to train and save your model. If you don’t, we are going to use a pre-trained image classification model (ResNet18), which is packaged in TorchVision. To install it, run the command below:

    pip install torchvision

    To serialize the model you can use the Python script in the root folder of the HelloWorld app:

    import torch
    import torchvision

    model = torchvision.models.resnet18(pretrained=True)
    model.eval()
    example = torch.rand(1, 3, 224, 224)
    traced_script_module = torch.jit.trace(model, example)
    traced_script_module.save("app/src/main/assets/model.pt")

    If everything works well, we should have our model, model.pt, generated in the assets folder of the Android application. It will be packaged inside the Android application as an asset and can be used on the device.
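    Before wiring the model into the app, it can help to sanity-check that the traced module round-trips through save/load and matches eager execution. A minimal sketch using a tiny stand-in network (the layer sizes and the tiny_model.pt filename are our illustration, not part of the app):

```python
import torch
import torch.nn as nn

# A tiny stand-in model; any eval-mode nn.Module traces the same way as resnet18.
model = nn.Sequential(
    nn.Conv2d(3, 8, 3), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(8, 10),
)
model.eval()

example = torch.rand(1, 3, 224, 224)
traced = torch.jit.trace(model, example)
traced.save("tiny_model.pt")

# Reload the serialized module and check it matches eager execution.
reloaded = torch.jit.load("tiny_model.pt")
with torch.no_grad():
    assert torch.allclose(model(example), reloaded(example), atol=1e-6)
```

    The same load-and-compare check works for the real ResNet18 trace before you copy it into the assets folder.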

    You can find more details about TorchScript in the TorchScript tutorials on pytorch.org.

    2. Cloning from github

    git clone https://github.com/pytorch/android-demo-app.git
    cd HelloWorldApp

    If the Android SDK and Android NDK are already installed, you can install this application on a connected Android device or emulator with:

    ./gradlew installDebug

    We recommend opening this project in Android Studio 3.5.1+. At the moment PyTorch Android and the demo applications use the Android Gradle plugin version 3.5.0, which is supported only by Android Studio version 3.5.1 and higher. Using Android Studio you can install the Android NDK and Android SDK from the Android Studio UI.

    3. Gradle dependencies

    PyTorch Android is added to HelloWorld as Gradle dependencies in build.gradle:

    repositories {
        jcenter()
    }

    dependencies {
        implementation 'org.pytorch:pytorch_android:1.4.0'
        implementation 'org.pytorch:pytorch_android_torchvision:1.4.0'
    }

    Here org.pytorch:pytorch_android is the main dependency with the PyTorch Android API, including the libtorch native library for all 4 Android ABIs (armeabi-v7a, arm64-v8a, x86, x86_64). Later in this doc you can find how to rebuild it for only a specific list of Android ABIs.

    org.pytorch:pytorch_android_torchvision - an additional library with utility functions for converting android.media.Image and android.graphics.Bitmap objects to tensors.

    4. Reading image from Android Asset

    All the logic happens in org.pytorch.helloworld.MainActivity. As a first step we read image.jpg into an android.graphics.Bitmap using the standard Android API.

    Bitmap bitmap = BitmapFactory.decodeStream(getAssets().open("image.jpg"));

    5. Loading TorchScript Module

    Module module = Module.load(assetFilePath(this, "model.pt"));

    org.pytorch.Module represents a torch::jit::script::Module that can be loaded with the load method by specifying the file path to the serialized model.

    6. Preparing Input

    Tensor inputTensor = TensorImageUtils.bitmapToFloat32Tensor(bitmap,
        TensorImageUtils.TORCHVISION_NORM_MEAN_RGB, TensorImageUtils.TORCHVISION_NORM_STD_RGB);

    org.pytorch.torchvision.TensorImageUtils is part of the org.pytorch:pytorch_android_torchvision library. The TensorImageUtils#bitmapToFloat32Tensor method creates tensors in the torchvision format using an android.graphics.Bitmap as the source.

    All pre-trained models expect input images normalized in the same way, i.e. mini-batches of 3-channel RGB images of shape (3 x H x W), where H and W are expected to be at least 224. The images have to be loaded into a range of [0, 1] and then normalized using mean = [0.485, 0.456, 0.406] and std = [0.229, 0.224, 0.225].
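    The normalization above can be sketched in plain Python; the constants are the torchvision ImageNet mean/std quoted in the paragraph, and the helper name normalize_pixel is ours, not part of any API:

```python
# ImageNet normalization constants used by torchvision pretrained models.
MEAN = [0.485, 0.456, 0.406]
STD = [0.229, 0.224, 0.225]

def normalize_pixel(rgb):
    """Scale an 8-bit RGB pixel into [0, 1], then apply the per-channel mean/std."""
    return [((c / 255.0) - m) / s for c, m, s in zip(rgb, MEAN, STD)]

# A pure white pixel: each channel becomes (1.0 - mean) / std.
white = normalize_pixel((255, 255, 255))
assert abs(white[0] - (1.0 - 0.485) / 0.229) < 1e-9
```

    This is exactly the arithmetic that bitmapToFloat32Tensor applies for you on the Java side.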

    inputTensor’s shape is 1x3xHxW, where H and W are the bitmap height and width respectively.
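    For a contiguous 1x3xHxW tensor stored row-major (the layout getDataAsFloatArray exposes), the value for channel c at pixel (h, w) sits at a predictable flat offset; a small sketch, with the helper name and dimensions as our illustration:

```python
def nchw_offset(c, h, w, H, W):
    """Flat index of element (0, c, h, w) in a contiguous 1x3xHxW float array."""
    return c * H * W + h * W + w

H, W = 224, 224
# The first channel of the top-left pixel is at offset 0;
# the second channel of the same pixel starts one full HxW plane later.
assert nchw_offset(0, 0, 0, H, W) == 0
assert nchw_offset(1, 0, 0, H, W) == H * W
```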

    7. Run Inference

    Tensor outputTensor = module.forward(IValue.from(inputTensor)).toTensor();
    float[] scores = outputTensor.getDataAsFloatArray();

    The org.pytorch.Module.forward method runs the loaded module’s forward method and returns the result as an org.pytorch.Tensor outputTensor with shape 1x1000.

    8. Processing results

    Its content is retrieved using the org.pytorch.Tensor.getDataAsFloatArray() method, which returns a Java array of floats with scores for every ImageNet class.

    After that we just find the index with the maximum score and retrieve the predicted class name from the ImageNetClasses.IMAGENET_CLASSES array, which contains all ImageNet class names.

    float maxScore = -Float.MAX_VALUE;
    int maxScoreIdx = -1;
    for (int i = 0; i < scores.length; i++) {
      if (scores[i] > maxScore) {
        maxScore = scores[i];
        maxScoreIdx = i;
      }
    }
    String className = ImageNetClasses.IMAGENET_CLASSES[maxScoreIdx];

    In the following sections you can find detailed explanations of the PyTorch Android API, a code walkthrough for a bigger demo application, implementation details of the API, and how to customize and build it from source.

    PyTorch Demo Application

    We have also created another, more complex PyTorch Android demo application that does image classification from camera output and text classification, in the same GitHub repo.

    To get device camera output it uses the Android CameraX API. All the logic that works with CameraX is separated into the org.pytorch.demo.vision.AbstractCameraXActivity class.

    void setupCameraX() {
        final PreviewConfig previewConfig = new PreviewConfig.Builder().build();
        final Preview preview = new Preview(previewConfig);
        preview.setOnPreviewOutputUpdateListener(output -> mTextureView.setSurfaceTexture(output.getSurfaceTexture()));

        final ImageAnalysisConfig imageAnalysisConfig =
            new ImageAnalysisConfig.Builder()
                .setTargetResolution(new Size(224, 224))
                .setCallbackHandler(mBackgroundHandler)
                .setImageReaderMode(ImageAnalysis.ImageReaderMode.ACQUIRE_LATEST_IMAGE)
                .build();
        final ImageAnalysis imageAnalysis = new ImageAnalysis(imageAnalysisConfig);
        imageAnalysis.setAnalyzer(
            (image, rotationDegrees) -> {
              analyzeImage(image, rotationDegrees);
            });

        CameraX.bindToLifecycle(this, preview, imageAnalysis);
    }

    void analyzeImage(android.media.Image image, int rotationDegrees)

    Here the analyzeImage method processes the camera output, an android.media.Image.

    It uses the aforementioned TensorImageUtils.imageYUV420CenterCropToFloat32Tensor method to convert an android.media.Image in YUV420 format into the input tensor.

    After getting the predicted scores from the model, it finds the top K classes with the highest scores and shows them on the UI.
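    The top-K selection can be sketched like this in plain Python (a sketch of what the demo does on the Java side; the topk helper name is ours):

```python
def topk(scores, k):
    """Return (index, score) pairs for the k highest scores, best first."""
    indexed = sorted(enumerate(scores), key=lambda p: p[1], reverse=True)
    return indexed[:k]

scores = [0.1, 0.7, 0.05, 0.9, 0.3]
assert topk(scores, 2) == [(3, 0.9), (1, 0.7)]
```

    The indices map into ImageNetClasses.IMAGENET_CLASSES the same way as the single argmax in the HelloWorld example.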

    Language Processing Example

    Another example is natural language processing, based on an LSTM model trained on a Reddit comments dataset. The logic happens in TextClassificationActivity.

    Result class names are packaged inside the TorchScript model and initialized just after module initialization. The module has a get_classes method that returns List[str], which can be called using the method Module.runMethod(methodName):

        mModule = Module.load(moduleFileAbsoluteFilePath);
        IValue getClassesOutput = mModule.runMethod("get_classes");

    The returned IValue can be converted to a Java array of IValue using IValue.toList() and processed into an array of strings using IValue.toStr():

    IValue[] classesListIValue = getClassesOutput.toList();
    String[] moduleClasses = new String[classesListIValue.length];
    int i = 0;
    for (IValue iv : classesListIValue) {
      moduleClasses[i++] = iv.toStr();
    }

    The entered text is converted to a Java array of bytes with UTF-8 encoding. Tensor.fromBlobUnsigned creates a tensor of dtype=uint8 from that array of bytes.

        byte[] bytes = text.getBytes(Charset.forName("UTF-8"));
        final long[] shape = new long[]{1, bytes.length};
        final Tensor inputTensor = Tensor.fromBlobUnsigned(bytes, shape);
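    The same preparation, sketched on the Python side using only the standard library (each line mirrors the Java snippet above):

```python
text = "hello"
data = text.encode("utf-8")   # Java: text.getBytes(Charset.forName("UTF-8"))
shape = (1, len(data))        # Java: new long[]{1, bytes.length}
assert shape == (1, 5)

# Multi-byte characters expand: one non-ASCII character can be several bytes,
# which is why the tensor length comes from the byte array, not the string length.
assert len("é".encode("utf-8")) == 2
```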

    Running inference of the model is similar to previous examples:

    Tensor outputTensor = mModule.forward(IValue.from(inputTensor)).toTensor();

    After that, the code processes the output, finding classes with the highest scores.

    Building PyTorch Android from Source

    In some cases you might want to use a local build of PyTorch Android, for example to build a custom libtorch binary with a different set of operators, or to make local changes.

    For this you can use the ./scripts/build_pytorch_android.sh script.

    git clone https://github.com/pytorch/pytorch.git
    cd pytorch
    sh ./scripts/build_pytorch_android.sh

    The workflow contains several steps:

    1. Build libtorch for Android for all 4 Android ABIs (armeabi-v7a, arm64-v8a, x86, x86_64).

    2. Create symbolic links to the results of those builds: android/pytorch_android/src/main/jniLibs/${abi} points to the directory with the output libraries, and android/pytorch_android/src/main/cpp/libtorch_include/${abi} points to the directory with the headers. These directories are used to build the native library that will be loaded on the Android device.

    3. Finally, run Gradle in the android/pytorch_android directory with the assembleRelease task.

    The script requires that the Android SDK, Android NDK and Gradle are installed. Their locations are specified via environment variables:

    ANDROID_HOME - path to Android SDK

    ANDROID_NDK - path to Android NDK

    GRADLE_HOME - path to gradle

    After a successful build you should see the resulting aar files:

    $ find pytorch_android/build/ -type f -name "*.aar"

    They can be used directly in Android projects as Gradle dependencies:

    allprojects {
        repositories {
            flatDir {
                dirs 'libs'
            }
        }
    }

    android {
        packagingOptions {
            pickFirst "**/libfbjni.so"
        }
    }

    dependencies {
        implementation(name:'pytorch_android', ext:'aar')
        implementation(name:'pytorch_android_torchvision', ext:'aar')
        implementation(name:'pytorch_android_fbjni', ext:'aar')
        implementation 'com.android.support:appcompat-v7:28.0.0'
        implementation 'com.facebook.soloader:nativeloader:0.8.0'
    }

    At the moment, using the aar files directly requires additional configuration due to a packaging specific: libfbjni.so is packaged in both pytorch_android_fbjni.aar and pytorch_android.aar.

    packagingOptions {
        pickFirst "**/libfbjni.so"
    }

    We also have to add all transitive dependencies of our aars. As pytorch_android depends on com.android.support:appcompat-v7:28.0.0 and com.facebook.soloader:nativeloader:0.8.0, we need to add them explicitly. (When using Maven dependencies they are added automatically from pom.xml.)

    Custom Build

    To reduce the size of the binaries, you can do a custom build of PyTorch Android with only the set of operators required by your model. This includes two steps: preparing the list of operators from your model, then rebuilding PyTorch Android with that list.

    1. Verify your PyTorch version is 1.4.0 or above. You can do that by checking the value of torch.__version__.

    2. Preparation of the list of operators

    The list of operators in your serialized TorchScript model can be prepared in YAML format using the Python API function torch.jit.export_opnames(). To dump the operators in your model, say MobileNetV2, run the following lines of Python code:

    # Dump list of operators used by MobileNetV2:
    import torch, yaml
    model = torch.jit.load('MobileNetV2.pt')
    ops = torch.jit.export_opnames(model)
    with open('MobileNetV2.yaml', 'w') as output:
        yaml.dump(ops, output)

    3. Building PyTorch Android with prepared operators list.

    To build PyTorch Android with the prepared YAML list of operators, specify it in the environment variable SELECTED_OP_LIST. Also, in the arguments, specify which Android ABIs it should build for; by default it builds all 4 Android ABIs.

    # Build PyTorch Android library customized for MobileNetV2:
    SELECTED_OP_LIST=MobileNetV2.yaml scripts/build_pytorch_android.sh arm64-v8a

    After a successful build you can integrate the resulting aar files into your Android Gradle project, following the steps from the previous section of this tutorial (Building PyTorch Android from Source).

    API Docs

    You can find more details about the PyTorch Android API in the Javadoc.