2. Troubleshooting Guide¶
Refer to this section for common issues faced while deploying your Pytorch models using Torchserve and their corresponding troubleshooting steps.
2.1. Deployment and config issues¶
2.1.1. “Failed to bind to address: http://127.0.0.1:8080”, port 8080/8081 already in use.¶
Usually this means port 8080 or 8081 is already in use by another application or service. You can verify this with ss -ntl | grep 8080. There are two ways to resolve the issue: either kill the process that is using port 8080/8081, or run Torchserve on ports other than 8080 and 8081.
Refer to configuration.md for more details.
Relevant issues: 
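The check and the second workaround can be sketched as follows. The replacement port numbers (9080/9081) and the config file name are illustrative; inference_address and management_address are the keys documented in configuration.md:

```shell
# See whether something is already listening on the default ports
ss -ntlp | grep -E ':8080|:8081' || true

# Workaround: move TorchServe to free ports via a config file
cat > config.properties <<'EOF'
inference_address=http://127.0.0.1:9080
management_address=http://127.0.0.1:9081
EOF

# Then start TorchServe with that file:
# torchserve --start --model-store model_store --ts-config config.properties
```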
2.1.2. Torchserve fails to start due to an unsupported Java version.¶
This error usually occurs when Java 11 is not installed or is not the active version. Torchserve requires Java 11; older Java versions are not supported.
Relevant issues: [#473]
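You can confirm which Java version is active before starting Torchserve (a generic sanity check, not a Torchserve command):

```shell
# Print the active Java version; Torchserve requires Java 11
command -v java >/dev/null && java -version 2>&1 | head -n 1 || echo "java not found"
```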
2.1.3. Unable to send big files for inference request?¶
The default maximum request and response size is roughly 6.5 MB, so any file larger than that cannot be uploaded. To resolve this, set
max_request_size and max_response_size in a config.properties file and start torchserve with that config file.
```shell
$ cat config.properties
max_request_size=<request size in bytes>
max_response_size=<response size in bytes>
$ torchserve --start --model-store model_store --ts-config /path/to/config.properties
```
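Note that both values are plain byte counts, so convert from megabytes before editing the file. For instance, a 100 MB limit (an arbitrary example size) works out to:

```python
# config.properties sizes are byte counts; 100 MB is an illustrative target
MB = 1024 * 1024
max_request_size = 100 * MB
max_response_size = 100 * MB
print(max_request_size)  # 104857600
```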
2.4.1. How can I add a model specific custom dependency?¶
You can add your dependency files using the
--extra-files flag while creating a mar file. These dependency files can be of any type, such as zip, egg, or json. You may have to write a custom handler to use these files as required.
Relevant issues: [#566]
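For example, a mar file bundling two extra dependency files might be created like this (all model and file names here are hypothetical):

```shell
torch-model-archiver --model-name my_model \
    --version 1.0 \
    --serialized-file model.pt \
    --handler my_handler.py \
    --extra-files index_to_name.json,utils.zip
```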
2.4.2. How can I resolve model specific python dependency?¶
You can provide a requirements.txt while creating a mar file using the "--requirements-file" / "-r" flag. You can refer to the waveglow text-to-speech-synthesizer example.
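For instance (model and file names are hypothetical; the flag is the same one used in the waveglow example):

```shell
# requirements.txt lists the model's pip dependencies, one per line
torch-model-archiver --model-name my_model \
    --version 1.0 \
    --serialized-file model.pt \
    --handler my_handler.py \
    --requirements-file requirements.txt
```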
2.4.3. I have added requirements.txt in my mar file but the packages listed are not getting installed.¶
By default, the model specific custom python packages feature is disabled. Enable it by setting install_py_dep_per_model to true. Refer to Allow model specific custom python packages for more details.
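A minimal config.properties enabling the feature could look like:

```properties
install_py_dep_per_model=true
```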
2.4.4. Backend worker monitoring thread interrupted or backend worker process died error.¶
This issue mostly occurs when the model fails to initialize, which may be due to erroneous code in the handler’s initialize function. This error is also observed when a package or module the model needs is missing.
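To make such failures easier to diagnose, you can log and re-raise errors inside initialize, since exceptions raised there surface as the worker-died message in the Torchserve logs. A minimal sketch, assuming the standard handler contract where context.system_properties carries the model directory (in a real handler you would subclass ts.torch_handler.base_handler.BaseHandler; the class name, model file name, and checks below are illustrative):

```python
import logging
import os

logger = logging.getLogger(__name__)

class MyHandler:  # illustrative; a real handler would subclass BaseHandler
    def initialize(self, context):
        # context.system_properties carries the model directory and device info
        properties = context.system_properties
        model_dir = properties.get("model_dir")
        model_path = os.path.join(model_dir, "model.pt")

        # Fail with a clear log line instead of an opaque worker death
        if not os.path.isfile(model_path):
            logger.error("Model file not found: %s", model_path)
            raise FileNotFoundError(model_path)

        # Missing packages also kill the worker at this point, e.g.:
        # import torch
        # self.model = torch.jit.load(model_path)
        self.initialized = True
```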