ts package

Submodules

ts.arg_parser module

This module parses the arguments given on the torchserve command line. It is used by the model server at runtime.

class ts.arg_parser.ArgParser[source]

Bases: object

Argument parser for the torchserve and torchserve-export commands.

static extract_args(args=None)[source]
static model_service_worker_args()[source]

Argument parser for the backend worker. Takes the socket name and socket type.

static ts_parser()[source]

Argument parser for starting the torchserve service.
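
A minimal usage sketch, assuming ts_parser() returns a standard argparse.ArgumentParser; the flags shown (--model-store, --models) are the usual torchserve CLI options, and the paths and model names are placeholders.

    from ts.arg_parser import ArgParser

    # Build the parser behind the `torchserve` command and parse a sample
    # argument list. Flags and values here are illustrative.
    parser = ArgParser.ts_parser()
    args = parser.parse_args(
        ["--model-store", "/tmp/model_store", "--models", "densenet=densenet161.mar"]
    )
    print(args.model_store)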

ts.context module

Context object for an incoming request

class ts.context.Context(model_name, model_dir, manifest, batch_size, gpu, mms_version, limit_max_image_pixels=True, metrics=None)[source]

Bases: object

Context stores model-relevant worker information. Some of it is fixed at load time and some is set by the service.

get_all_request_header(idx)[source]
get_request_header(idx, key)[source]
get_request_id(idx=0)[source]
get_response_content_type(idx)[source]
get_response_headers(idx)[source]
get_response_status(idx)[source]
property metrics
property request_processor
set_all_response_status(code=200, phrase='')[source]

Set the status code for all requests in the batch.

Parameters
  • code (int) – HTTP status code to set (default 200)
  • phrase (str) – HTTP reason phrase

set_response_content_type(idx, value)[source]
set_response_header(idx, key, value)[source]
set_response_status(code=200, phrase='', idx=0)[source]

Set the status code of an individual request.

Parameters
  • code (int) – HTTP status code to set (default 200)
  • phrase (str) – HTTP reason phrase
  • idx (int) – index of the request in the batch (the list passed to the handle() method); default 0

property system_properties
class ts.context.RequestProcessor(request_header)[source]

Bases: object

Request processor

add_response_property(key, value)[source]
get_request_properties()[source]
get_request_property(key)[source]
get_response_header(key)[source]
get_response_headers()[source]
get_response_status_code()[source]
get_response_status_phrase()[source]
report_status(code, reason_phrase=None)[source]
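
The sketch below shows how a custom handler might use this Context API to read request metadata and set per-request response fields. The handle(data, context) shape follows TorchServe's custom-handler convention; the header key and payload field are illustrative.

    def handle(data, context):
        # `data` is the batch of requests; `context` is the ts.context.Context
        # documented above.
        responses = []
        for idx, row in enumerate(data):
            request_id = context.get_request_id(idx)
            # "accept" is just an example header key.
            accept = context.get_request_header(idx, "accept")
            context.set_response_content_type(idx, accept or "application/json")
            context.set_response_status(code=200, phrase="OK", idx=idx)
            # "body" is the conventional key for the request payload.
            responses.append({"requestId": request_id, "echo": row.get("body")})
        return responses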

ts.model_loader module

ts.model_server module

File to define the entry point to Model Server

ts.model_server.load_properties(file_path)[source]

Read a properties file into a dictionary.

ts.model_server.start()[source]

This is the entry point for the model server.
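
A minimal sketch of load_properties, assuming a TorchServe-style config.properties file; the file path and the keys shown (inference_address, model_store) are illustrative.

    from ts.model_server import load_properties

    # config.properties might contain lines such as:
    #   inference_address=http://127.0.0.1:8080
    #   model_store=/tmp/model_store
    props = load_properties("config.properties")
    print(props.get("inference_address"))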

ts.model_service_worker module

ts.service module

CustomService class definitions

class ts.service.Service(model_name, model_dir, manifest, entry_point, gpu, batch_size, limit_max_image_pixels=True, metrics_cache=None)[source]

Bases: object

Wrapper for custom entry_point

property context
predict(batch)[source]
PREDICT COMMAND = {
    "command": "predict",
    "batch": [ REQUEST_INPUT ]
}

Parameters
  • batch – list of requests

static retrieve_data_for_inference(batch)[source]
REQUEST_INPUT = {
    "requestId": "111-222-3333",
    "parameters": [ PARAMETER ]
}

PARAMETER = {
    "name": <parameter name>,
    "contentType": "http-content-types",
    "value": "val1"
}

Parameters
  • batch – list of requests in the REQUEST_INPUT format above

Returns
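
As a concrete illustration of the REQUEST_INPUT and PARAMETER shapes above, the Python literals below sketch one batched predict command; the request ID, parameter name, and value are placeholders.

    request_input = {
        "requestId": "111-222-3333",
        "parameters": [
            {
                "name": "data",  # hypothetical parameter name
                "contentType": "application/json",
                "value": "val1",
            }
        ],
    }
    predict_command = {"command": "predict", "batch": [request_input]}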

ts.service.emit_metrics(metrics)[source]

Emit the metrics in the provided dictionary.

Parameters
  • metrics (dict) – a dictionary of all metrics, where the key is the metric name and the value is a metric object
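
A hedged sketch of calling emit_metrics; the stand-in metric class below is purely illustrative, since in a real handler the metric objects come from the metrics cache on the Context rather than being built by hand, and their exact interface may differ.

    from ts.service import emit_metrics

    class StandInMetric:
        # Illustrative stand-in; real metric objects are created by
        # TorchServe's metrics API.
        def __str__(self):
            return "InferenceTime.Milliseconds:42|#ModelName:demo"

    emit_metrics({"InferenceTime": StandInMetric()})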

ts.version module

This is the current version of TorchServe

Module contents

This module does the following:
a. Starts the model server.
b. Creates endpoints based on the configured models.
c. Exposes the standard "ping" and "api-description" endpoints.
d. Waits for and serves inference requests.
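
Once the server is running, the standard ping endpoint can be queried directly; the sketch below assumes TorchServe's default inference address of http://127.0.0.1:8080.

    import json
    import urllib.request

    # Query the health-check endpoint exposed by the model server.
    with urllib.request.urlopen("http://127.0.0.1:8080/ping") as resp:
        print(json.load(resp))  # e.g. {"status": "Healthy"}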
