Workflow Inference API

The Workflow Inference API listens on port 8080 and is accessible only from localhost by default. To change these defaults, see TorchServe Configuration.
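
For example, to bind the API to a different address or port, set the inference_address property in config.properties. A minimal sketch (0.0.0.0 exposes the API on all interfaces, so restrict the bind address appropriately in production):

inference_address=http://0.0.0.0:8080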

The TorchServe server supports the following API for workflows:

Predictions API

To get predictions from a workflow, make a REST call to /wfpredict/{workflow_name}:

POST /wfpredict/{workflow_name}

curl Example

curl -O https://raw.githubusercontent.com/pytorch/serve/master/docs/images/kitten_small.jpg

curl http://localhost:8080/wfpredict/myworkflow -T kitten_small.jpg

The result is a JSON object containing the response bytes from the leaf node of the workflow DAG.
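
The same call can be made programmatically. The sketch below uses the third-party requests library (an assumption; any HTTP client works) with the myworkflow name and kitten_small.jpg image from the curl example above:

import requests

# POST the raw image bytes to the workflow prediction endpoint.
with open("kitten_small.jpg", "rb") as f:
    resp = requests.post("http://localhost:8080/wfpredict/myworkflow", data=f)

# The response body carries the output of the workflow DAG's leaf node.
print(resp.status_code)
print(resp.json())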
