Kubernetes-MCAD

This contains the TorchX Kubernetes_MCAD scheduler, which can be used to run TorchX components on a Kubernetes cluster via the Multi-Cluster-Application-Dispatcher (MCAD).

Prerequisites

The TorchX Kubernetes_MCAD scheduler depends on AppWrapper + MCAD.

Install MCAD: see the Multi-Cluster-Application-Dispatcher deployment guide: https://github.com/project-codeflare/multi-cluster-app-dispatcher/blob/main/doc/deploy/deployment.md

This implementation requires MCAD v1.34.1 or higher.

TorchX uses torch.distributed.run to run distributed training.

Learn more about running distributed trainers: torchx.components.dist
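
For example, a minimal sketch of launching a distributed job with the dist.ddp component on this scheduler (the image repository, job sizing, and script name below are placeholders):

$ torchx run --scheduler kubernetes_mcad \
    --scheduler_args namespace=default,image_repo=<your_image_repo> \
    dist.ddp -j 2x4 --script main.py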

class torchx.schedulers.kubernetes_mcad_scheduler.KubernetesMCADScheduler(session_name: str, client: Optional[ApiClient] = None, docker_client: Optional[DockerClient] = None)[source]

Bases: DockerWorkspaceMixin, Scheduler[KubernetesMCADOpts]

KubernetesMCADScheduler is a TorchX scheduling interface to Kubernetes.

Important: AppWrapper/MCAD is required to be installed on the Kubernetes cluster. TorchX requires gang scheduling for multi-replica/multi-role execution. Note that AppWrapper/MCAD supports gang scheduling among any app-wrapped jobs on Kubernetes. However, for true gang scheduling AppWrapper/MCAD needs to be used with an additional Kubernetes co-scheduler. For installation instructions see: https://github.com/project-codeflare/multi-cluster-app-dispatcher/blob/main/doc/deploy/deployment.md

This has been confirmed to work with MCAD main branch v1.34.1 or higher and OpenShift Kubernetes (Client Version: 4.10.13, Server Version: 4.9.18, Kubernetes Version: v1.22.3+e790d7f).

$ torchx run --scheduler kubernetes_mcad --scheduler_args namespace=default,image_repo=<your_image_repo> utils.echo --image alpine:latest --msg hello
...

The TorchX-MCAD scheduler can be used with a secondary scheduler on Kubernetes. To enable this, the user must provide the name of the coscheduler. With this feature, a PodGroup is defined for each TorchX role and the coscheduler handles secondary scheduling on the Kubernetes cluster. For additional resources, see:

  1. PodGroups and Coscheduling: https://github.com/kubernetes-sigs/scheduler-plugins/tree/release-1.24/pkg/coscheduling

  2. Installing Secondary schedulers: https://github.com/kubernetes-sigs/scheduler-plugins/blob/release-1.24/doc/install.md

  3. PodGroup CRD: https://github.com/kubernetes-sigs/scheduler-plugins/blob/release-1.24/config/crd/bases/scheduling.sigs.k8s.io_podgroups.yaml
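
For example, a run can point at a secondary scheduler by name (a sketch; the coscheduler name is a placeholder for the name configured on your cluster):

$ torchx run --scheduler kubernetes_mcad \
    --scheduler_args namespace=default,coscheduler_name=<your_coscheduler_name> \
    utils.echo --image alpine:latest --msg hello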

The MCAD scheduler supports priorities at the AppWrapper level and optionally at the pod level on clusters with PriorityClass definitions. At the AppWrapper level, higher integer values mean higher priority. Kubernetes clusters may have additional priorityClass definitions that can be applied at the pod level. While these different levels of priorities can be set independently, it is recommended to check with your Kubernetes cluster admin to see if additional guidance is in place. For more on Kubernetes PriorityClass, see: https://kubernetes.io/docs/concepts/scheduling-eviction/pod-priority-preemption/

In order to use the network option, the Kubernetes cluster must have Multus installed. For Multus installation instructions and how to set up a custom network attachment definition, see: https://github.com/k8snetworkplumbingwg/multus-cni/blob/master/docs/how-to-use.md

Config Options

    usage:
        [namespace=NAMESPACE],[image_repo=IMAGE_REPO],[service_account=SERVICE_ACCOUNT],[priority=PRIORITY],[priority_class_name=PRIORITY_CLASS_NAME],[image_secret=IMAGE_SECRET],[coscheduler_name=COSCHEDULER_NAME],[network=NETWORK]

    optional arguments:
        namespace=NAMESPACE (str, default)
            Kubernetes namespace to schedule job in
        image_repo=IMAGE_REPO (str, None)
            The image repository to use when pushing patched images, must have push access. Ex: example.com/your/container
        service_account=SERVICE_ACCOUNT (str, None)
            The service account name to set on the pod specs
        priority=PRIORITY (int, None)
            The priority level to set on the job specs. Higher integer value means higher priority
        priority_class_name=PRIORITY_CLASS_NAME (str, None)
            Pod specific priority level. Check with your Kubernetes cluster admin if Priority classes are defined on your system
        image_secret=IMAGE_SECRET (str, None)
            The name of the Kubernetes/OpenShift secret set up for private images
        coscheduler_name=COSCHEDULER_NAME (str, None)
            Option to run TorchX-MCAD with a co-scheduler. User must provide the co-scheduler name.
        network=NETWORK (str, None)
            Name of additional pod-to-pod network beyond default Kubernetes network
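
Scheduler options are passed as a comma-separated list via --scheduler_args. A sketch combining several of the options above (the repository, priority class, and network names are placeholders for values defined on your cluster):

$ torchx run --scheduler kubernetes_mcad \
    --scheduler_args namespace=default,image_repo=<your_image_repo>,priority=10,priority_class_name=<your_priority_class>,network=<your_network_attachment> \
    utils.echo --image alpine:latest --msg hello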

Mounts

Mounting external filesystems/volumes is supported via HostPath and PersistentVolumeClaim.

  • hostPath volumes: type=bind,src=<host path>,dst=<container path>[,readonly]

  • PersistentVolumeClaim: type=volume,src=<claim>,dst=<container path>[,readonly]

  • host devices: type=device,src=/dev/foo[,dst=<container path>][,perm=rwm] If you specify a host device the job will run in privileged mode since Kubernetes doesn’t expose a way to pass --device to the underlying container runtime. Users should prefer to use device plugins.

See torchx.specs.parse_mounts() for more info.
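
A minimal sketch of building these mounts programmatically with parse_mounts, which accepts the same docker-style key=value tokens (the paths below are placeholders):

>>> from torchx import specs
>>> specs.parse_mounts(
...     ["type=bind", "src=/mnt/data", "dst=/data", "readonly"]
... )
[BindMount(...)]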

External docs: https://kubernetes.io/docs/concepts/storage/persistent-volumes/

Resources / Allocation

To select a specific machine type you can add a capability to your resources with node.kubernetes.io/instance-type which will constrain the launched jobs to nodes of that instance type.

>>> from torchx import specs
>>> specs.Resource(
...     cpu=4,
...     memMB=16000,
...     gpu=2,
...     capabilities={
...         "node.kubernetes.io/instance-type": "<cloud instance type>",
...     },
... )
Resource(...)

Kubernetes may reserve some memory for the host. TorchX assumes you’re scheduling on whole hosts and thus will automatically reduce the resource request by a small amount to account for the node reserved CPU and memory. If you run into scheduling issues you may need to reduce the requested CPU and memory from the host values.

Compatibility

    Feature                  Scheduler Support
    ---------------------    -----------------
    Fetch Logs               ✔️
    Distributed Jobs         ✔️
    Cancel Job               ✔️
    Describe Job             ✔️
    Workspaces / Patching    ✔️
    Mounts                   ✔️
    Elasticity               ❌

describe(app_id: str) Optional[DescribeAppResponse][source]

Describes the specified application.

Returns:

AppDef description or None if the app does not exist.
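
A minimal usage sketch, assuming this scheduler’s app IDs take the <namespace>:<app name> form (the session and app names are placeholders):

>>> from torchx.schedulers.kubernetes_mcad_scheduler import create_scheduler
>>> scheduler = create_scheduler("my-session")
>>> resp = scheduler.describe("default:echo-abc123")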

list() List[ListAppResponse][source]

For apps launched on the scheduler, this API returns a list of ListAppResponse objects, each of which has an app id and its status. Note: This API is in prototype phase and is subject to change.

log_iter(app_id: str, role_name: str, k: int = 0, regex: Optional[str] = None, since: Optional[datetime] = None, until: Optional[datetime] = None, should_tail: bool = False, streams: Optional[Stream] = None) Iterable[str][source]

Returns an iterator to the log lines of the k-th replica of the role. The iterator ends when all qualifying log lines have been read.

If the scheduler supports time-based cursors for fetching log lines within custom time ranges, then the since and until fields are honored; otherwise they are ignored. Not specifying since and until is equivalent to getting all available log lines. If until is empty, then the iterator behaves like tail -f, following the log output until the job reaches a terminal state.

The exact definition of what constitutes a log is scheduler specific. Some schedulers may consider stderr or stdout as the log, others may read the logs from a log file.

Behaviors and assumptions:

  1. Produces undefined behavior if called on an app that does not exist. The caller should check that the app exists using exists(app_id) prior to calling this method.

  2. Is not stateful; calling this method twice with the same parameters returns a new iterator. Prior iteration progress is lost.

  3. Does not always support log-tailing. Not all schedulers support live log iteration (e.g. tailing logs while the app is running). Refer to the specific scheduler’s documentation for the iterator’s behavior. If the scheduler supports log-tailing, it is controlled by the should_tail parameter.

  4. Does not guarantee log retention. It is possible that by the time this method is called, the underlying scheduler may have purged the log records for this application. If so, this method raises an arbitrary exception.

  5. If should_tail is True, the method only raises a StopIteration exception when the accessible log lines have been fully exhausted and the app has reached a final state. For instance, if the app gets stuck and does not produce any log lines, the iterator blocks until the app eventually gets killed (either via timeout or manually), at which point it raises StopIteration. If should_tail is False, the method raises StopIteration when there are no more logs.

  6. Need not be supported by all schedulers.

  7. Some schedulers may support line cursors by supporting __getitem__ (e.g. iter[50] seeks to the 50th log line).

  8. Whitespace is preserved; each new line should include \n. To support interactive progress bars the returned lines don’t need to include \n, but should then be printed without a newline to correctly handle \r carriage returns.

Parameters:

streams – The IO output streams to select. One of: combined, stdout, stderr. If the selected stream isn’t supported by the scheduler it will throw a ValueError.

Returns:

An Iterator over log lines of the specified role replica

Raises:

NotImplementedError – if the scheduler does not support log iteration
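
A tailing sketch, reusing the scheduler session from the describe example above (the app ID and role name are placeholders):

>>> for line in scheduler.log_iter(
...     app_id="default:echo-abc123",
...     role_name="echo",
...     k=0,
...     should_tail=True,
... ):
...     print(line, end="")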

run_opts() runopts[source]

Returns the run configuration options expected by the scheduler. Basically a --help for the run API.
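
For example, a sketch that inspects a single option (assuming the runopts.get accessor that TorchX run-option collections expose):

>>> opts = scheduler.run_opts()
>>> namespace_opt = opts.get("namespace")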

schedule(dryrun_info: AppDryRunInfo[KubernetesMCADJob]) str[source]

Same as submit except that it takes an AppDryRunInfo. Implementers are encouraged to implement this method rather than directly implementing submit, since submit can be trivially implemented by:

    dryrun_info = self.submit_dryrun(app, cfg)
    return self.schedule(dryrun_info)
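
A dryrun-then-schedule sketch (the component, session, and config values are placeholders):

>>> from torchx.components import utils
>>> app = utils.echo(msg="hello", image="alpine:latest")
>>> dryrun_info = scheduler.submit_dryrun(app, cfg={"namespace": "default"})
>>> app_id = scheduler.schedule(dryrun_info)
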
class torchx.schedulers.kubernetes_mcad_scheduler.KubernetesMCADJob(images_to_push: Dict[str, Tuple[str, str]], resource: Dict[str, object])[source]

Reference

torchx.schedulers.kubernetes_mcad_scheduler.create_scheduler(session_name: str, client: Optional[ApiClient] = None, docker_client: Optional[DockerClient] = None, **kwargs: Any) KubernetesMCADScheduler[source]
torchx.schedulers.kubernetes_mcad_scheduler.app_to_resource(app: AppDef, namespace: str, service_account: Optional[str], image_secret: Optional[str], coscheduler_name: Optional[str], priority_class_name: Optional[str], network: Optional[str], priority: Optional[int] = None) Dict[str, Any][source]

app_to_resource creates an AppWrapper/MCAD Kubernetes resource definition from the provided AppDef. The resource definition can be used to launch the app on Kubernetes.

MCAD supports retries at the APPLICATION level. When an AppDef contains multiple TorchX roles, the AppWrapper maximum retry count is set to the minimum of the roles’ max_retries values.
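
A sketch of that retry computation, assuming roles carry max_retries as defined on torchx.specs.Role:

>>> app_wrapper_retries = min(role.max_retries for role in app.roles)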

torchx.schedulers.kubernetes_mcad_scheduler.mcad_svc(app: AppDef, svc_name: str, namespace: str, service_port: str) V1Service[source]
torchx.schedulers.kubernetes_mcad_scheduler.get_appwrapper_status(app: Dict[str, str]) AppState[source]
torchx.schedulers.kubernetes_mcad_scheduler.get_port_for_service(app: AppDef) str[source]
torchx.schedulers.kubernetes_mcad_scheduler.get_role_information(generic_items: Iterable[Dict[str, Any]]) Dict[str, Any][source]
torchx.schedulers.kubernetes_mcad_scheduler.get_tasks_status_description(status: Dict[str, str]) Dict[str, int][source]
torchx.schedulers.kubernetes_mcad_scheduler.pod_labels(app: AppDef, role_idx: int, role: Role, replica_id: int, coscheduler_name: Optional[str], app_id: str) Dict[str, str][source]
torchx.schedulers.kubernetes_mcad_scheduler.role_to_pod(name: str, unique_app_id: str, namespace: str, role: Role, service_account: Optional[str], image_secret: Optional[str], coscheduler_name: Optional[str], priority_class_name: Optional[str], network: Optional[str]) V1Pod[source]
torchx.schedulers.kubernetes_mcad_scheduler.sanitize_for_serialization(obj: object) object[source]
