
Troubleshooting Guide

Refer to this section for common issues encountered while deploying your PyTorch models with TorchServe and the corresponding troubleshooting steps.

Deployment and config issues

“Failed to bind to address: http://127.0.0.1:8080”, port 8080/8081 already in use.

Usually, port 8080 or 8081 is already in use by another application or service. You can verify this with ss -ntl | grep 8080. There are two ways to troubleshoot this issue: either kill the process that is using port 8080/8081, or run TorchServe on ports other than 8080 and 8081.
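
For example, a minimal sketch of a config.properties that moves the inference and management APIs to other ports (8085 and 8086 below are arbitrary placeholders):

$ cat config.properties
inference_address=http://127.0.0.1:8085
management_address=http://127.0.0.1:8086
$ torchserve --start --model-store model_store --ts-config /path/to/config.properties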

Refer to configuration.md for more details.

Relevant issues: [#542]

“java.lang.NoSuchMethodError” when starting TorchServe.

This error usually occurs when Java 17 is not installed or is not the version in use. TorchServe requires Java 17; older Java versions are not supported.
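
You can check which Java version is on your PATH before starting TorchServe, for example:

$ java -version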

Relevant issues: [#473]

Unable to send large files in an inference request?

The default maximum request and response size is roughly 6.5 MB, so any file larger than 6.5 MB cannot be uploaded. To resolve this, update max_request_size and max_response_size in a config.properties file and start TorchServe with this config file.

$ cat config.properties
max_request_size=<request size in bytes>
max_response_size=<response size in bytes>
$ torchserve --start --model-store model_store --ts-config /path/to/config.properties

You can also set these values with environment variables. Refer to configuration.md for more details.

Relevant issues: [#335]

Model-archiver

How can I add a model-specific custom dependency?

You can add your dependency files using the --extra-files flag while creating the mar file. These dependency files can be of any type, such as zip, egg, or json. You may have to write a custom handler to use these files as required.
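
A minimal sketch of packaging extra files into the archive (the model, handler, and extra file names below are placeholders):

$ torch-model-archiver --model-name my_model --version 1.0 \
    --serialized-file model.pt --handler my_handler.py \
    --extra-files index_to_name.json,utils.zip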

Relevant issues: [#566]

How can I resolve model-specific Python dependencies?

You can provide a requirements.txt while creating the mar file using the “--requirements-file / -r” flag. You can refer to the waveglow text-to-speech-synthesizer example.
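
For example, a sketch of passing the requirements file at archive time (the model and handler file names below are placeholders):

$ torch-model-archiver --model-name waveglow --version 1.0 \
    --serialized-file model.pt --handler waveglow_handler.py \
    -r requirements.txt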

Refer to the Torch model archiver CLI for more details.

Relevant issues: [#566]

I have added a requirements.txt to my mar file, but the listed packages are not getting installed.

By default, the model-specific custom Python packages feature is disabled. Enable it by setting install_py_dep_per_model to true. Refer to Allow model specific custom python packages for more details.
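
A minimal sketch of enabling the feature and starting TorchServe with that config:

$ cat config.properties
install_py_dep_per_model=true
$ torchserve --start --model-store model_store --ts-config /path/to/config.properties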

“Backend worker monitoring thread interrupted” or “backend worker process died” error.

This issue mostly occurs when the model fails to initialize, which may be due to erroneous code in the handler's initialize function. This error is also observed when a package or module required by the model is missing.
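
The backend worker's traceback usually points to the root cause. Assuming the default log location (TorchServe writes logs under ./logs/), you can inspect the model log, for example:

$ tail -n 100 logs/model_log.log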

Relevant issues: [#667, #537]
