Local

This contains the TorchX local scheduler which can be used to run TorchX components locally via subprocesses.

class torchx.schedulers.local_scheduler.LocalScheduler(session_name: str, image_provider_class: Callable[[Mapping[str, Optional[Union[str, int, float, bool, List[str]]]]], torchx.schedulers.local_scheduler.ImageProvider], cache_size: int = 100, extra_paths: Optional[List[str]] = None)[source]

Schedules on localhost. Containers are modeled as processes and certain properties of the container that are either not relevant or that cannot be enforced for localhost runs are ignored. Properties that are ignored:

  1. Resource requirements

  2. Resource limit enforcements

  3. Retry policies

  4. Retry counts (no retries supported)

  5. Deployment preferences

The scheduler supports orphan process cleanup: on receiving SIGTERM or SIGINT, it terminates the spawned processes.

This is exposed via the scheduler local_cwd.

  • local_cwd runs the provided app relative to the current working directory and ignores the images field for faster iteration and testing purposes.

Note

The orphan cleanup only works if LocalScheduler is instantiated from the main thread.
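The main-thread requirement comes from Python's signal handling: `signal.signal` may only be called from the main thread. A minimal sketch of such a cleanup mechanism (this is an illustration, not TorchX's actual implementation; all names here are hypothetical):

```python
import signal
import subprocess
import sys

class OrphanCleanupSketch:
    """Track spawned processes and terminate them on SIGTERM/SIGINT.
    Hypothetical helper illustrating the cleanup idea, not TorchX code."""

    def __init__(self):
        self._procs = []

    def register_handlers(self):
        # signal.signal only works from the main thread -- this is why
        # LocalScheduler's orphan cleanup requires main-thread instantiation.
        signal.signal(signal.SIGTERM, self._terminate_all)
        signal.signal(signal.SIGINT, self._terminate_all)

    def spawn(self, *cmd):
        proc = subprocess.Popen(cmd)
        self._procs.append(proc)
        return proc

    def _terminate_all(self, signum=None, frame=None):
        for proc in self._procs:
            if proc.poll() is None:  # still running
                proc.terminate()
                proc.wait()

cleanup = OrphanCleanupSketch()
cleanup.register_handlers()
p = cleanup.spawn(sys.executable, "-c", "import time; time.sleep(60)")
cleanup._terminate_all()  # simulate receiving SIGTERM
print(p.poll() is not None)  # True: the child was terminated, not orphaned
```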

Note

Use this scheduler sparingly since an application that runs successfully on a session backed by this scheduler may not work on an actual production cluster using a different scheduler.

Feature           Scheduler Support

Fetch Logs        ✔️
Distributed Jobs  LocalScheduler supports multiple replicas, but all replicas execute on the local host.
Cancel Job        ✔️
Describe Job      ✔️

close() → None[source]

Only for schedulers that have local state! Closes the scheduler freeing any allocated resources. Once closed, the scheduler object is deemed to no longer be valid and any method called on the object results in undefined behavior.

This method should not raise exceptions and is allowed to be called multiple times on the same object.

Note

Override only for scheduler implementations that have local state (torchx/schedulers/local_scheduler.py). Schedulers simply wrapping a remote scheduler’s client need not implement this method.

describe(app_id: str) → Optional[torchx.schedulers.api.DescribeAppResponse][source]

Describes the specified application.

Returns

AppDef description or None if the app does not exist.

log_iter(app_id: str, role_name: str, k: int = 0, regex: Optional[str] = None, since: Optional[datetime.datetime] = None, until: Optional[datetime.datetime] = None, should_tail: bool = False, streams: Optional[torchx.schedulers.api.Stream] = None) → Iterable[str][source]

Returns an iterator over the log lines of the kth replica of the role. The iterator ends when all qualifying log lines have been read.

If the scheduler supports time-based cursors fetching log lines for custom time ranges, then the since, until fields are honored, otherwise they are ignored. Not specifying since and until is equivalent to getting all available log lines. If the until is empty, then the iterator behaves like tail -f, following the log output until the job reaches a terminal state.

The exact definition of what constitutes a log is scheduler specific. Some schedulers may consider stderr or stdout as the log, others may read the logs from a log file.

Behaviors and assumptions:

  1. Produces undefined behavior if called on an app that does not exist. The caller should check that the app exists using exists(app_id) prior to calling this method.

  2. Is not stateful; calling this method twice with the same parameters returns a new iterator. Prior iteration progress is lost.

  3. Does not always support log-tailing. Not all schedulers support live log iteration (e.g. tailing logs while the app is running). Refer to the specific scheduler's documentation for the iterator's behavior. If the scheduler supports log-tailing, it is controlled by the should_tail parameter.

  4. Does not guarantee log retention. It is possible that by the time this method is called, the underlying scheduler has already purged the log records for this application. If so, this method raises an arbitrary exception.

  5. If should_tail is True, the method only raises a StopIteration exception when the accessible log lines have been fully exhausted and the app has reached a final state. For instance, if the app gets stuck and does not produce any log lines, the iterator blocks until the app is eventually killed (either via timeout or manually), at which point it raises StopIteration.

     If should_tail is False, the method raises StopIteration when there are no more logs.

  6. Need not be supported by all schedulers.

  7. Some schedulers may support line cursors by supporting __getitem__ (e.g. iter[50] seeks to the 50th log line).
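The should_tail behavior can be sketched with a simple polling iterator (a hypothetical helper illustrating the semantics, not TorchX's implementation; read_lines and is_done are assumed callbacks):

```python
import time

def tail_lines(read_lines, is_done, should_tail=False, poll_interval=0.01):
    """Sketch of log_iter's should_tail semantics.

    read_lines() returns all log lines currently available;
    is_done() reports whether the app reached a terminal state.
    """
    cursor = 0
    while True:
        lines = read_lines()
        # yield any lines that appeared since the last poll
        while cursor < len(lines):
            yield lines[cursor]
            cursor += 1
        # without tailing, stop as soon as the buffer is drained;
        # with tailing, stop only once the app is done
        if not should_tail or is_done():
            return  # generator exhaustion -> StopIteration for the caller
        time.sleep(poll_interval)

buf = ["line1", "line2"]
# should_tail=False: the iterator stops after draining available lines
drained = list(tail_lines(lambda: buf, lambda: False, should_tail=False))
print(drained)  # ['line1', 'line2']
```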

Parameters

streams – The IO output streams to select. One of: combined, stdout, stderr. If the selected stream isn’t supported by the scheduler, it will raise a ValueError.

Returns

An Iterator over log lines of the specified role replica

Raises

NotImplementedError – if the scheduler does not support log iteration

run_opts() → torchx.specs.api.runopts[source]

Returns the run configuration options expected by the scheduler. Basically a --help for the run API.

schedule(dryrun_info: torchx.specs.api.AppDryRunInfo[torchx.schedulers.local_scheduler.PopenRequest]) → str[source]

Same as submit except that it takes an AppDryRunInfo. Implementors are encouraged to implement this method rather than directly implementing submit, since submit can be trivially implemented by:

dryrun_info = self.submit_dryrun(app, cfg)
return self.schedule(dryrun_info)

class torchx.schedulers.docker_scheduler.DockerScheduler(session_name: str)[source]

DockerScheduler is a TorchX scheduling interface to Docker.

This is exposed via the scheduler local_docker.

This scheduler runs the provided app via the local docker runtime using the specified images in the AppDef. Docker must be installed and running. This provides the closest environment to schedulers that natively use Docker such as Kubernetes.

Note

Docker doesn’t provide gang scheduling mechanisms. If one replica in a job fails, only that replica will be restarted.

Feature           Scheduler Support

Fetch Logs        ✔️
Distributed Jobs  ✔️
Cancel Job        ✔️
Describe Job      Partial support. DockerScheduler returns job and replica status but does not provide the complete original AppSpec.

build_workspace_image(img: str, workspace: str) → str[source]

build_workspace_image creates a new image with the files in workspace overlaid on top of the base image img.

Parameters

  • img – a Docker image to use as a base

  • workspace – a fsspec path to a directory with contents to be overlaid

Returns

The new Docker image ID.
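The overlay semantics (workspace files win over base-image files) can be sketched with plain directories; this is an illustration of the merge behavior, not how DockerScheduler actually builds the image:

```python
import pathlib
import shutil
import tempfile

def overlay(base: str, workspace: str, out: str) -> str:
    """Copy the base tree, then copy the workspace tree on top.
    Files present in both trees are taken from the workspace."""
    shutil.copytree(base, out, dirs_exist_ok=True)
    shutil.copytree(workspace, out, dirs_exist_ok=True)
    return out

root = pathlib.Path(tempfile.mkdtemp())
(root / "base").mkdir()
(root / "ws").mkdir()
(root / "base" / "a.txt").write_text("from base")
(root / "ws" / "a.txt").write_text("from workspace")
(root / "ws" / "b.txt").write_text("workspace only")

merged = overlay(str(root / "base"), str(root / "ws"), str(root / "merged"))
print(pathlib.Path(merged, "a.txt").read_text())  # from workspace
```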

describe(app_id: str) → Optional[torchx.schedulers.api.DescribeAppResponse][source]

Describes the specified application.

Returns

AppDef description or None if the app does not exist.

log_iter(app_id: str, role_name: str, k: int = 0, regex: Optional[str] = None, since: Optional[datetime.datetime] = None, until: Optional[datetime.datetime] = None, should_tail: bool = False, streams: Optional[torchx.schedulers.api.Stream] = None) → Iterable[str][source]

Returns an iterator over the log lines of the kth replica of the role. The iterator ends when all qualifying log lines have been read.

If the scheduler supports time-based cursors fetching log lines for custom time ranges, then the since, until fields are honored, otherwise they are ignored. Not specifying since and until is equivalent to getting all available log lines. If the until is empty, then the iterator behaves like tail -f, following the log output until the job reaches a terminal state.

The exact definition of what constitutes a log is scheduler specific. Some schedulers may consider stderr or stdout as the log, others may read the logs from a log file.

Behaviors and assumptions:

  1. Produces undefined behavior if called on an app that does not exist. The caller should check that the app exists using exists(app_id) prior to calling this method.

  2. Is not stateful; calling this method twice with the same parameters returns a new iterator. Prior iteration progress is lost.

  3. Does not always support log-tailing. Not all schedulers support live log iteration (e.g. tailing logs while the app is running). Refer to the specific scheduler's documentation for the iterator's behavior. If the scheduler supports log-tailing, it is controlled by the should_tail parameter.

  4. Does not guarantee log retention. It is possible that by the time this method is called, the underlying scheduler has already purged the log records for this application. If so, this method raises an arbitrary exception.

  5. If should_tail is True, the method only raises a StopIteration exception when the accessible log lines have been fully exhausted and the app has reached a final state. For instance, if the app gets stuck and does not produce any log lines, the iterator blocks until the app is eventually killed (either via timeout or manually), at which point it raises StopIteration.

     If should_tail is False, the method raises StopIteration when there are no more logs.

  6. Need not be supported by all schedulers.

  7. Some schedulers may support line cursors by supporting __getitem__ (e.g. iter[50] seeks to the 50th log line).

Parameters

streams – The IO output streams to select. One of: combined, stdout, stderr. If the selected stream isn’t supported by the scheduler, it will raise a ValueError.

Returns

An Iterator over log lines of the specified role replica

Raises

NotImplementedError – if the scheduler does not support log iteration

run_opts() → torchx.specs.api.runopts[source]

Returns the run configuration options expected by the scheduler. Basically a --help for the run API.

schedule(dryrun_info: torchx.specs.api.AppDryRunInfo[torchx.schedulers.docker_scheduler.DockerJob]) → str[source]

Same as submit except that it takes an AppDryRunInfo. Implementors are encouraged to implement this method rather than directly implementing submit, since submit can be trivially implemented by:

dryrun_info = self.submit_dryrun(app, cfg)
return self.schedule(dryrun_info)

Image Providers

class torchx.schedulers.local_scheduler.ImageProvider[source]

Manages downloading and setting up an image on localhost. This is only needed for the LocalScheduler since real schedulers will typically do this on behalf of the user.

abstract fetch(image: str) → str[source]

Pulls the given image and returns a path to the pulled image on the local host, or an empty string if it is a no-op.

fetch_role(role: torchx.specs.api.Role) → str[source]

Identical to fetch(image) in that it fetches the role’s image and returns the path to the image root, except that it allows the role to be updated by this provider. Useful when additional environment variables need to be set on the role to comply with the image provider’s way of fetching and managing images on localhost. By default this method simply delegates to fetch(role.image). Override if necessary.

get_cwd(image: str) → Optional[str][source]

Returns the absolute path of the mounted img directory. Used as a working directory for starting child processes.

get_entrypoint(img_root: str, role: torchx.specs.api.Role) → str[source]

Returns the location of the entrypoint.

get_replica_param(img_root: str, role: torchx.specs.api.Role, stdout: Optional[str] = None, stderr: Optional[str] = None, combined: Optional[str] = None) → torchx.schedulers.local_scheduler.ReplicaParam[source]

Given the role replica’s specs, returns a ReplicaParam holder which holds the arguments to eventually pass to subprocess.Popen to actually invoke and run each role’s replica. The img_root is expected to be the return value of self.fetch(role.image). Since the role’s image need only be fetched once (not once per replica), the caller is expected to call fetch once per role and call this method once for each of role.num_replicas.
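The intended calling pattern (one fetch per role, one ReplicaParam per replica) can be sketched with stand-in classes; FakeRole and FakeProvider below are hypothetical stubs, not TorchX types:

```python
from dataclasses import dataclass

@dataclass
class FakeRole:
    """Stand-in for torchx.specs.api.Role (illustration only)."""
    image: str
    num_replicas: int

class FakeProvider:
    """Stand-in for an ImageProvider, counting fetch calls."""
    def __init__(self):
        self.fetch_calls = 0

    def fetch(self, image):
        self.fetch_calls += 1
        return "/root/" + image

    def get_replica_param(self, img_root, role):
        return {"img_root": img_root}

role = FakeRole(image="myimg", num_replicas=3)
provider = FakeProvider()

img_root = provider.fetch(role.image)          # fetch once per role
params = [provider.get_replica_param(img_root, role)
          for _ in range(role.num_replicas)]   # once per replica

print(provider.fetch_calls, len(params))  # 1 3
```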

class torchx.schedulers.local_scheduler.CWDImageProvider(cfg: Mapping[str, Optional[Union[str, int, float, bool, List[str]]]])[source]

Similar to LocalDirectoryImageProvider however it ignores the image name and uses the current working directory as the image path.

Example:

  1. fetch(Image(name="/tmp/foobar")) returns os.getcwd()

  2. fetch(Image(name="foobar:latest")) returns os.getcwd()
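Behaviorally, the provider's fetch reduces to the following sketch, consistent with the two examples above (the class here is an illustration, not the actual implementation):

```python
import os

class CWDImageProviderSketch:
    """Sketch of CWDImageProvider's fetch semantics: the image name is
    ignored and the current working directory is returned."""

    def fetch(self, image: str) -> str:
        return os.getcwd()

provider = CWDImageProviderSketch()
# the image name makes no difference
same = provider.fetch("/tmp/foobar") == provider.fetch("foobar:latest") == os.getcwd()
print(same)  # True
```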

fetch(image: str) → str[source]

Pulls the given image and returns a path to the pulled image on the local host, or an empty string if it is a no-op.

get_cwd(image: str) → Optional[str][source]

Returns the absolute path of the mounted img directory. Used as a working directory for starting child processes.

get_entrypoint(img_root: str, role: torchx.specs.api.Role) → str[source]

Returns the location of the entrypoint.
