Airflow
For pipelines that support Python-based execution you can use the TorchX API directly. TorchX is designed to be easily integrated into other applications via its programmatic API, so no special Airflow integration is needed.
With TorchX, you can use Airflow for pipeline orchestration while running your PyTorch application (e.g. distributed training) on a remote GPU cluster.
[1]:
import datetime
import pendulum
from airflow.utils.state import DagRunState, TaskInstanceState
from airflow.utils.types import DagRunType
from airflow.models.dag import DAG
from airflow.decorators import task
DATA_INTERVAL_START = pendulum.datetime(2021, 9, 13, tz="UTC")
DATA_INTERVAL_END = DATA_INTERVAL_START + datetime.timedelta(days=1)
To launch a TorchX job from Airflow you can create an Airflow Python task that imports the runner, launches the job, and waits for it to complete. If you're running on a remote cluster you may need to use the virtualenv task to install the torchx package.
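As a rough sketch, such a virtualenv task might look like the following (the task id, the requirements pin, and the function name are illustrative assumptions, not part of the example below):

# A hedged sketch: run the TorchX launch inside a virtualenv that installs
# the torchx package, for Airflow workers that don't have it preinstalled.
@task.virtualenv(
    task_id="hello_torchx_venv",
    requirements=["torchx"],
    system_site_packages=False,
)
def run_torchx_in_venv(message: str):
    # Import inside the task so it resolves against the virtualenv.
    from torchx.runner import get_runner

    with get_runner() as runner:
        app_id = runner.run_component(
            "utils.sh", ["echo", message], scheduler="local_cwd"
        )
        status = runner.wait(app_id, wait_interval=1)
        status.raise_for_status()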
[2]:
@task(task_id='hello_torchx')
def run_torchx(message):
    """This is a function that will run within the DAG execution"""
    from torchx.runner import get_runner

    with get_runner() as runner:
        # Run the utils.sh component on the local_cwd scheduler.
        app_id = runner.run_component(
            "utils.sh",
            ["echo", message],
            scheduler="local_cwd",
        )

        # Wait for the job to complete.
        status = runner.wait(app_id, wait_interval=1)

        # raise_for_status will raise an exception if the job didn't succeed.
        status.raise_for_status()

        # Finally we can print all of the log lines from the TorchX job so they
        # show up in the workflow logs.
        for line in runner.log_lines(app_id, "sh", k=0):
            print(line, end="")
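The same pattern extends beyond utils.sh. As a hedged sketch, inside the with get_runner() block above you could launch a distributed trainer instead; the dist.ddp arguments, scheduler name, and cfg values here are illustrative and depend on your cluster setup:

# Illustrative only: launch the builtin dist.ddp component on a remote
# scheduler instead of echoing locally. The scheduler name and cfg keys
# are assumptions about your cluster setup.
app_id = runner.run_component(
    "dist.ddp",
    ["--script", "train.py", "-j", "2x2"],  # 2 nodes x 2 procs per node
    scheduler="kubernetes",
    cfg={"namespace": "default", "queue": "default"},
)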
Once we have the task defined we can put it into an Airflow DAG and run it as normal.
[3]:
from torchx.schedulers.ids import make_unique

with DAG(
    dag_id=make_unique('example_python_operator'),
    schedule=None,
    start_date=DATA_INTERVAL_START,
    catchup=False,
    tags=['example'],
) as dag:
    run_job = run_torchx("Hello, TorchX!")

# For this example we create the DAG run and execute the task instance by
# hand; in a deployed Airflow the scheduler would do this for us.
dagrun = dag.create_dagrun(
    state=DagRunState.RUNNING,
    execution_date=DATA_INTERVAL_START,
    data_interval=(DATA_INTERVAL_START, DATA_INTERVAL_END),
    start_date=DATA_INTERVAL_END,
    run_type=DagRunType.MANUAL,
)
ti = dagrun.get_task_instance(task_id="hello_torchx")
ti.task = dag.get_task(task_id="hello_torchx")
ti.run(ignore_ti_state=True)
assert ti.state == TaskInstanceState.SUCCESS
[2024-10-21T22:59:27.113+0000] {taskinstance.py:2612} INFO - Dependencies all met for dep_context=non-requeueable deps ti=<TaskInstance: example_python_operator-mrpd1cqf5nqklc.hello_torchx manual__2021-09-13T00:00:00+00:00 [None]>
[2024-10-21T22:59:27.118+0000] {taskinstance.py:2612} INFO - Dependencies all met for dep_context=requeueable deps ti=<TaskInstance: example_python_operator-mrpd1cqf5nqklc.hello_torchx manual__2021-09-13T00:00:00+00:00 [None]>
[2024-10-21T22:59:27.119+0000] {taskinstance.py:2865} INFO - Starting attempt 0 of 1
[2024-10-21T22:59:27.120+0000] {taskinstance.py:2946} WARNING - cannot record queued_duration for task hello_torchx because previous state change time has not been saved
[2024-10-21T22:59:27.131+0000] {taskinstance.py:2888} INFO - Executing <Task(_PythonDecoratedOperator): hello_torchx> on 2021-09-13 00:00:00+00:00
[2024-10-21T22:59:27.658+0000] {taskinstance.py:3131} INFO - Exporting env vars: AIRFLOW_CTX_DAG_OWNER='airflow' AIRFLOW_CTX_DAG_ID='example_python_operator-mrpd1cqf5nqklc' AIRFLOW_CTX_TASK_ID='hello_torchx' AIRFLOW_CTX_EXECUTION_DATE='2021-09-13T00:00:00+00:00' AIRFLOW_CTX_DAG_RUN_ID='manual__2021-09-13T00:00:00+00:00'
Task instance is in running state
Previous state of the Task instance: queued
Current task name:hello_torchx state:running start_date:2024-10-21 22:59:27.114457+00:00
Dag name:example_python_operator-mrpd1cqf5nqklc and current dag run status:running
[2024-10-21T22:59:27.662+0000] {taskinstance.py:731} INFO - ::endgroup::
[2024-10-21T22:59:28.364+0000] {api.py:74} INFO - Tracker configurations: {}
[2024-10-21T22:59:28.368+0000] {local_scheduler.py:771} INFO - Log directory not set in scheduler cfg. Creating a temporary log dir that will be deleted on exit. To preserve log directory set the `log_dir` cfg option
[2024-10-21T22:59:28.369+0000] {local_scheduler.py:777} INFO - Log directory is: /tmp/torchx__ru2jvf8
Hello, TorchX!
[2024-10-21T22:59:28.474+0000] {python.py:240} INFO - Done. Returned value was: None
[2024-10-21T22:59:28.479+0000] {taskinstance.py:340} INFO - ::group::Post task execution logs
[2024-10-21T22:59:28.480+0000] {taskinstance.py:352} INFO - Marking task as SUCCESS. dag_id=example_python_operator-mrpd1cqf5nqklc, task_id=hello_torchx, run_id=manual__2021-09-13T00:00:00+00:00, execution_date=20210913T000000, start_date=20241021T225927, end_date=20241021T225928
Task instance in success state
Previous state of the Task instance: running
Dag name:example_python_operator-mrpd1cqf5nqklc queued_at:None
Task hostname:runner.tqa01qzaddpexb5ejpwg10ckge.bx.internal.cloudapp.net operator:_PythonDecoratedOperator
If all goes well you should see "Hello, TorchX!" printed above.
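Outside of a test harness like the one above, you would typically give the DAG a real schedule and let the Airflow scheduler create the runs. A minimal sketch, where the DAG id and cron expression are hypothetical:

# Illustrative: a normally scheduled DAG. The Airflow scheduler creates the
# runs, so no manual create_dagrun()/ti.run() is needed.
with DAG(
    dag_id="nightly_torchx_job",   # hypothetical DAG id
    schedule="0 2 * * *",          # daily at 02:00 UTC (illustrative)
    start_date=DATA_INTERVAL_START,
    catchup=False,
) as nightly_dag:
    run_torchx("Hello, TorchX!")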
Next Steps
Check out the runner API documentation to learn more about programmatic usage of TorchX.
Browse through the collection of builtin components, which can be used in your Airflow pipeline.