
Logging in TorchServe

This document explains logging in TorchServe and how to modify the default logging behavior of the model server. Logging in TorchServe also covers metrics, because metrics are logged into a file. To learn how to customize metrics or define custom logging layouts, see Metrics on TorchServe.

Prerequisites

  • Be familiar with log4j2 configuration. For information on how to configure log4j parameters, see Logging Services.

  • Be familiar with the default log4j2.xml used by TorchServe.

Types of logs

TorchServe currently provides the following types of logs:

  1. Access logs

  2. TorchServe logs

Access Logs

These logs collect the access pattern to TorchServe. The configuration for access logs is as follows:

		<RollingFile
				name="access_log"
				fileName="${env:LOG_LOCATION:-logs}/access_log.log"
				filePattern="${env:LOG_LOCATION:-logs}/access_log.%d{dd-MMM}.log.gz">
			<PatternLayout pattern="%d{ISO8601} - %m%n"/>
			<Policies>
				<SizeBasedTriggeringPolicy size="100 MB"/>
				<TimeBasedTriggeringPolicy/>
			</Policies>
			<DefaultRolloverStrategy max="5"/>
		</RollingFile>

As defined in this configuration, the access logs are collected in the {LOG_LOCATION}/access_log.log file. When you load TorchServe with a model and run inference against the server, the following logs are collected into access_log.log:

2018-10-15 13:56:18,976 [INFO ] BackendWorker-9000 ACCESS_LOG - /127.0.0.1:64003 "POST /predictions/resnet-18 HTTP/1.1" 200 118

The above log tells us that a successful POST call to /predictions/resnet-18 was made by remote host 127.0.0.1:64003, and that the request took 118 ms to complete.

These logs are useful for determining the current performance of the model server as well as understanding the requests it receives.
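
For example, a single inference call such as the following produces one line in access_log.log similar to the one shown above. This is a minimal sketch: it assumes TorchServe is running on the default inference port 8080 with a model registered as resnet-18, and kitten.jpg is a placeholder input file.

# Minimal sketch: assumes TorchServe is serving inference on the default port 8080
# with a model registered as "resnet-18"; "kitten.jpg" is a placeholder input file.
import requests

with open("kitten.jpg", "rb") as f:
    response = requests.post(
        "http://127.0.0.1:8080/predictions/resnet-18",  # POST /predictions/<model_name>
        data=f.read(),
    )

# Each such call appears as one line in {LOG_LOCATION}/access_log.log,
# including the HTTP status code and the time taken to serve the request.
print(response.status_code, response.text)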

TorchServe Logs

These logs collect all the logs from TorchServe and from the backend workers (the custom model code). The default configuration for TorchServe logs is as follows:

		<RollingFile
				name="ts_log"
				fileName="${env:LOG_LOCATION:-logs}/ts_log.log"
				filePattern="${env:LOG_LOCATION:-logs}/ts_log.%d{dd-MMM}.log.gz">
			<PatternLayout pattern="%d{ISO8601} [%-5p] %t %c - %m%n"/>
			<Policies>
				<SizeBasedTriggeringPolicy size="100 MB"/>
				<TimeBasedTriggeringPolicy/>
			</Policies>
			<DefaultRolloverStrategy max="5"/>
		</RollingFile>

By default, this configuration captures all logs above the DEBUG level.

Generate custom logs

You might want to generate custom logs, for example for debugging or to record errors. To do this, print the required logs to stdout or stderr. TorchServe captures everything the backend workers write to stdout and stderr and records it in the log file. Some examples are shown below, followed by a short handler sketch:

  1. Messages printed to stderr:

2018-10-14 16:46:51,656 [WARN ] W-9000-stderr org.pytorch.serve.wlm.WorkerLifeCycle - [16:46:51] src/nnvm/legacy_json_util.cc:209: Loading symbol saved by previous version v0.8.0. Attempting to upgrade...
2018-10-14 16:46:51,657 [WARN ] W-9000-stderr org.pytorch.serve.wlm.WorkerLifeCycle - [16:46:51] src/nnvm/legacy_json_util.cc:217: Symbol successfully upgraded!
  2. Messages printed to stdout:

2018-10-14 16:59:59,926 [INFO ] W-9000-stdout org.pytorch.serve.wlm.WorkerLifeCycle - preprocess time: 3.60
2018-10-14 16:59:59,926 [INFO ] W-9000-stdout org.pytorch.serve.wlm.WorkerLifeCycle - inference time: 117.31
2018-10-14 16:59:59,926 [INFO ] W-9000-stdout org.pytorch.serve.wlm.WorkerLifeCycle - postprocess time: 8.52
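
The sketch below shows how a module-level custom handler can emit such messages. The handle(data, context) entry point is TorchServe's convention for module-level handlers; the echo "inference" and the timing message are purely illustrative.

# Minimal handler sketch: only the stdout/stderr capture behavior is the point;
# the "inference" logic and messages below are placeholders.
import sys
import time

def handle(data, context):
    # TorchServe may invoke the entry point with data=None during initialization.
    if data is None:
        return None

    start = time.time()
    # Placeholder "inference": echo the incoming batch back (illustrative only).
    result = [{"echo": str(row)} for row in data]

    # Anything printed to stdout is captured by TorchServe and written to
    # ts_log.log under the W-<port>-stdout logger, as in the examples above.
    print(f"inference time: {(time.time() - start) * 1000:.2f}")

    # Messages written to stderr appear under the W-<port>-stderr logger.
    print("custom handler: no preprocessing applied", file=sys.stderr)

    return result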

Modify the behavior of the logs

To modify the default logging behavior, define a log4j2.xml file. There are two ways of starting TorchServe with custom logs:

Provide with config.properties

After you define a custom log4j2.xml file, add the following to the config.properties file:

vmargs=-Dlog4j.configurationFile=file:///path/to/custom/log4j2.xml

Then start TorchServe as follows:

$ torchserve --start --ts-config /path/to/config.properties

Alternatively, pass the custom log4j2.xml directly to the TorchServe CLI:

$ torchserve --start --log-config /path/to/custom/log4j2.xml

Enable asynchronous logging

If your model is lightweight and you want high throughput, consider enabling asynchronous logging. Keep in mind that log output might be delayed, and the most recent logs might be lost if TorchServe terminates unexpectedly. Asynchronous logging is disabled by default. To enable it, add the following property to config.properties:

async_logging=true
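
Putting the pieces together, a config.properties that both points TorchServe at a custom log4j2.xml and enables asynchronous logging could look like the following sketch (the file path is a placeholder):

vmargs=-Dlog4j.configurationFile=file:///path/to/custom/log4j2.xml
async_logging=true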
