{ "cells": [ { "cell_type": "markdown", "id": "a474c143-05c4-43b6-b12c-17b592d07a6a", "metadata": { "id": "a474c143-05c4-43b6-b12c-17b592d07a6a" }, "source": [ "# Per-sample-gradients\n", "\n", "\n", " \"Open\n", "\n", "\n", "## What is it?\n", "\n", "Per-sample-gradient computation is computing the gradient for each and every\n", "sample in a batch of data. It is a useful quantity in differential privacy, meta-learning,\n", "and optimization research.\n" ] }, { "cell_type": "code", "source": [ "import torch\n", "import torch.nn as nn\n", "import torch.nn.functional as F\n", "from functools import partial\n", "\n", "torch.manual_seed(0);" ], "metadata": { "id": "Gb-yt4VKUUuc" }, "execution_count": null, "outputs": [], "id": "Gb-yt4VKUUuc" }, { "cell_type": "code", "source": [ "# Here's a simple CNN and loss function:\n", "\n", "class SimpleCNN(nn.Module):\n", " def __init__(self):\n", " super(SimpleCNN, self).__init__()\n", " self.conv1 = nn.Conv2d(1, 32, 3, 1)\n", " self.conv2 = nn.Conv2d(32, 64, 3, 1)\n", " self.fc1 = nn.Linear(9216, 128)\n", " self.fc2 = nn.Linear(128, 10)\n", "\n", " def forward(self, x):\n", " x = self.conv1(x)\n", " x = F.relu(x)\n", " x = self.conv2(x)\n", " x = F.relu(x)\n", " x = F.max_pool2d(x, 2)\n", " x = torch.flatten(x, 1)\n", " x = self.fc1(x)\n", " x = F.relu(x)\n", " x = self.fc2(x)\n", " output = F.log_softmax(x, dim=1)\n", " output = x\n", " return output\n", "\n", "def loss_fn(predictions, targets):\n", " return F.nll_loss(predictions, targets)" ], "metadata": { "id": "tf-HKHjUUbyY" }, "execution_count": null, "outputs": [], "id": "tf-HKHjUUbyY" }, { "cell_type": "markdown", "source": [ "Let’s generate a batch of dummy data and pretend that we’re working with an MNIST dataset. \n", "\n", "The dummy images are 28 by 28 and we use a minibatch of size 64.\n", "\n" ], "metadata": { "id": "VEDPe-EoU5Fa" }, "id": "VEDPe-EoU5Fa" }, { "cell_type": "code", "source": [ "device = 'cuda'\n", "\n", "num_models = 10\n", "batch_size = 64\n", "data = torch.randn(batch_size, 1, 28, 28, device=device)\n", "\n", "targets = torch.randint(10, (64,), device=device)" ], "metadata": { "id": "WB2Qe3AHUvPN" }, "execution_count": null, "outputs": [], "id": "WB2Qe3AHUvPN" }, { "cell_type": "markdown", "source": [ "In regular model training, one would forward the minibatch through the model, and then call .backward() to compute gradients. 
{ "cell_type": "markdown", "source": [ "In regular model training, one would forward the minibatch through the model and then call `.backward()` to compute gradients. This produces an 'average' gradient over the entire mini-batch:\n", "\n" ], "metadata": { "id": "GOGJ-OUxVcT5" }, "id": "GOGJ-OUxVcT5" }, { "cell_type": "code", "source": [ "model = SimpleCNN().to(device=device)\n", "predictions = model(data)  # forward the entire mini-batch through the model\n", "\n", "loss = loss_fn(predictions, targets)\n", "loss.backward()  # backpropagate the 'average' gradient of this mini-batch" ], "metadata": { "id": "WYjMx8QTUvRu" }, "execution_count": null, "outputs": [], "id": "WYjMx8QTUvRu" }, { "cell_type": "markdown", "source": [ "In contrast to the above approach, per-sample-gradient computation is equivalent to:\n", "- for each individual sample of the data, performing a forward and a backward pass to get an individual (per-sample) gradient.\n", "\n" ], "metadata": { "id": "HNw4_IVzU5Pz" }, "id": "HNw4_IVzU5Pz" }, { "cell_type": "code", "source": [ "def compute_grad(sample, target):\n", "    sample = sample.unsqueeze(0)  # prepend batch dimension for processing\n", "    target = target.unsqueeze(0)\n", "\n", "    prediction = model(sample)\n", "    loss = loss_fn(prediction, target)\n", "\n", "    return torch.autograd.grad(loss, list(model.parameters()))\n", "\n", "\n", "def compute_sample_grads(data, targets):\n", "    \"\"\"Manually process each sample to get per-sample gradients.\"\"\"\n", "    sample_grads = [compute_grad(data[i], targets[i]) for i in range(batch_size)]\n", "    sample_grads = zip(*sample_grads)\n", "    sample_grads = [torch.stack(shards) for shards in sample_grads]\n", "    return sample_grads\n", "\n", "per_sample_grads = compute_sample_grads(data, targets)" ], "metadata": { "id": "vUsb3VfexJrY" }, "execution_count": null, "outputs": [], "id": "vUsb3VfexJrY" }, { "cell_type": "markdown", "source": [ "`per_sample_grads[0]` is the per-sample-grad for `model.conv1.weight`. `model.conv1.weight.shape` is `[32, 1, 3, 3]`; notice that there is one gradient per sample in the batch, for a total of 64.\n", "\n" ], "metadata": { "id": "aNkX6lFIxzcm" }, "id": "aNkX6lFIxzcm" }, { "cell_type": "code", "source": [ "print(per_sample_grads[0].shape)" ], "metadata": { "id": "C3a9_clvyPho", "colab": { "base_uri": "https://localhost:8080/" }, "outputId": "407abc1a-846f-4e50-83bc-c90719a26073" }, "execution_count": null, "outputs": [ { "output_type": "stream", "name": "stdout", "text": [ "torch.Size([64, 32, 1, 3, 3])\n" ] } ], "id": "C3a9_clvyPho" },
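{ "cell_type": "markdown", "source": [ "As an extra, optional sanity check (not part of the original walkthrough): because `loss_fn` uses `F.nll_loss` with its default `'mean'` reduction, the batch gradient that `loss.backward()` stored in `model.conv1.weight.grad` should match the mean of the per-sample gradients, up to floating point error. The names `batch_grad` and `mean_of_per_sample_grads` below are just illustrative locals.\n" ], "metadata": { "id": "per-sample-grad-mean-check-md" }, "id": "per-sample-grad-mean-check-md" }, { "cell_type": "code", "source": [ "# Optional sanity check (illustrative): with the 'mean' reduction, the batch gradient\n", "# equals the mean of the per-sample gradients (up to floating point error).\n", "batch_grad = model.conv1.weight.grad  # populated by loss.backward() above\n", "mean_of_per_sample_grads = per_sample_grads[0].mean(dim=0)\n", "print(torch.allclose(batch_grad, mean_of_per_sample_grads, atol=3e-3, rtol=1e-5))" ], "metadata": { "id": "per-sample-grad-mean-check" }, "execution_count": null, "outputs": [], "id": "per-sample-grad-mean-check" },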
\n", "\n", "This will separate state (the parameters) from the model and turn the model into a pure function:\n", "\n" ], "metadata": { "id": "tlkmyQyfY6XU" }, "id": "tlkmyQyfY6XU" }, { "cell_type": "code", "source": [ "from functorch import make_functional_with_buffers, vmap, grad\n", "\n", "fmodel, params, buffers = make_functional_with_buffers(model)" ], "metadata": { "id": "WiSMupvCyecd" }, "execution_count": null, "outputs": [], "id": "WiSMupvCyecd" }, { "cell_type": "markdown", "source": [ "Let's review the changes - first, the model has become the stateless FunctionalModuleWithBuffers:" ], "metadata": { "id": "wMsbppPNZklo" }, "id": "wMsbppPNZklo" }, { "cell_type": "code", "source": [ "fmodel" ], "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "Xj0cZOJMZbbB", "outputId": "2e87dfde-3af2-4e1f-cd91-5c232446fb53" }, "execution_count": null, "outputs": [ { "output_type": "execute_result", "data": { "text/plain": [ "FunctionalModuleWithBuffers(\n", " (stateless_model): SimpleCNN(\n", " (conv1): Conv2d(1, 32, kernel_size=(3, 3), stride=(1, 1))\n", " (conv2): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1))\n", " (fc1): Linear(in_features=9216, out_features=128, bias=True)\n", " (fc2): Linear(in_features=128, out_features=10, bias=True)\n", " )\n", ")" ] }, "metadata": {}, "execution_count": 15 } ], "id": "Xj0cZOJMZbbB" }, { "cell_type": "markdown", "source": [ "And the model parameters now exist independently of the model, stored as a tuple:" ], "metadata": { "id": "zv4_YYPxZvvg" }, "id": "zv4_YYPxZvvg" }, { "cell_type": "code", "source": [ "for x in params:\n", " print(f\"{x.shape}\")\n", "\n", "print(f\"\\n{type(params)}\")" ], "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "tH0TAZhBZ3bS", "outputId": "97c4401f-cccb-43f6-b071-c85a18fc439b" }, "execution_count": null, "outputs": [ { "output_type": "stream", "name": "stdout", "text": [ "torch.Size([32, 1, 3, 3])\n", "torch.Size([32])\n", "torch.Size([64, 32, 3, 3])\n", "torch.Size([64])\n", "torch.Size([128, 9216])\n", "torch.Size([128])\n", "torch.Size([10, 128])\n", "torch.Size([10])\n", "\n", "\n" ] } ], "id": "tH0TAZhBZ3bS" }, { "cell_type": "markdown", "source": [ "Next, let’s define a function to compute the loss of the model given a single input rather than a batch of inputs. It is important that this function accepts the parameters, the input, and the target, because we will be transforming over them. \n", "\n", "Note - because the model was originally written to handle batches, we’ll use `torch.unsqueeze` to add a batch dimension.\n", "\n" ], "metadata": { "id": "cTgIIZ9Wyih8" }, "id": "cTgIIZ9Wyih8" }, { "cell_type": "code", "source": [ "def compute_loss_stateless_model (params, buffers, sample, target):\n", " batch = sample.unsqueeze(0)\n", " targets = target.unsqueeze(0)\n", "\n", " predictions = fmodel(params, buffers, batch) \n", " loss = loss_fn(predictions, targets)\n", " return loss" ], "metadata": { "id": "ItURFU3M-p98" }, "execution_count": null, "outputs": [], "id": "ItURFU3M-p98" }, { "cell_type": "markdown", "source": [ "Now, let’s use functorch's `grad` to create a new function that computes the gradient with respect to the first argument of `compute_loss` (i.e. the params)." 
], "metadata": { "id": "Qo3sbDK2i_bH" }, "id": "Qo3sbDK2i_bH" }, { "cell_type": "code", "source": [ "ft_compute_grad = grad(compute_loss_stateless_model)" ], "metadata": { "id": "sqRp_Sxni-Xm" }, "execution_count": null, "outputs": [], "id": "sqRp_Sxni-Xm" }, { "cell_type": "markdown", "source": [ "The `ft_compute_grad` function computes the gradient for a single (sample, target) pair. We can use vmap to get it to compute the gradient over an entire batch of samples and targets. Note that `in_dims=(None, None, 0, 0)` because we wish to map `ft_compute_grad` over the 0th dimension of the data and targets, and use the same params and buffers for each.\n", "\n" ], "metadata": { "id": "2pG3Ofqjjc8O" }, "id": "2pG3Ofqjjc8O" }, { "cell_type": "code", "source": [ "ft_compute_sample_grad = vmap(ft_compute_grad, in_dims=(None, None, 0, 0))" ], "metadata": { "id": "62ecNMO6inqX" }, "execution_count": null, "outputs": [], "id": "62ecNMO6inqX" }, { "cell_type": "markdown", "source": [ "Finally, let’s used our transformed function to compute per-sample-gradients:\n", "\n" ], "metadata": { "id": "_alXdQ3QkETu" }, "id": "_alXdQ3QkETu" }, { "cell_type": "code", "source": [ "ft_per_sample_grads = ft_compute_sample_grad(params, buffers, data, targets)\n", "\n", "# we can double check that the results using functorch grad and vmap match the results of hand processing each one individually:\n", "for per_sample_grad, ft_per_sample_grad in zip(per_sample_grads, ft_per_sample_grads):\n", " assert torch.allclose(per_sample_grad, ft_per_sample_grad, atol=3e-3, rtol=1e-5)" ], "metadata": { "id": "1gehVA1c-BHd" }, "execution_count": null, "outputs": [], "id": "1gehVA1c-BHd" }, { "cell_type": "markdown", "source": [ "A quick note: there are limitations around what types of functions can be transformed by vmap. The best functions to transform are ones that are pure functions: a function where the outputs are only determined by the inputs, and that have no side effects (e.g. mutation). vmap is unable to handle mutation of arbitrary Python data structures, but it is able to handle many in-place PyTorch operations.\n", "\n", "\n", "\n" ], "metadata": { "id": "BEZaNt1d_bc1" }, "id": "BEZaNt1d_bc1" }, { "cell_type": "markdown", "source": [ "## Performance comparison" ], "metadata": { "id": "BASP151Iml7B" }, "id": "BASP151Iml7B" }, { "cell_type": "markdown", "source": [ "Curious about how the performance of vmap compares?\n", "\n", "Currently the best results are obtained on newer GPU's such as the A100 (Ampere) where we've seen up to 25x speedups on this example, but here are some results done in Colab:" ], "metadata": { "id": "jr1xNpV4nJ7u" }, "id": "jr1xNpV4nJ7u" }, { "cell_type": "code", "source": [ "def get_perf(first, first_descriptor, second, second_descriptor):\n", " \"\"\" takes torch.benchmark objects and compares delta of second vs first. 
\"\"\"\n", " second_res = second.times[0]\n", " first_res = first.times[0]\n", "\n", " gain = (first_res-second_res)/first_res\n", " if gain < 0: gain *=-1 \n", " final_gain = gain*100\n", "\n", " print(f\" Performance delta: {final_gain:.4f} percent improvement with {first_descriptor} \")" ], "metadata": { "id": "GnAnMkYmoc-j" }, "execution_count": null, "outputs": [], "id": "GnAnMkYmoc-j" }, { "cell_type": "code", "source": [ "from torch.utils.benchmark import Timer\n", "\n", "without_vmap = Timer( stmt=\"compute_sample_grads(data, targets)\", globals=globals())\n", "with_vmap = Timer(stmt=\"ft_compute_sample_grad(params, buffers, data, targets)\",globals=globals())\n", "no_vmap_timing = without_vmap.timeit(100)\n", "with_vmap_timing = with_vmap.timeit(100)\n", "\n", "print(f'Per-sample-grads without vmap {no_vmap_timing}')\n", "print(f'Per-sample-grads with vmap {with_vmap_timing}')" ], "metadata": { "id": "Zfnn2C2g-6Fb", "colab": { "base_uri": "https://localhost:8080/" }, "outputId": "922f3901-773f-446b-b562-88e78f49036c" }, "execution_count": null, "outputs": [ { "output_type": "stream", "name": "stdout", "text": [ "Per-sample-grads without vmap \n", "compute_sample_grads(data, targets)\n", " 79.86 ms\n", " 1 measurement, 100 runs , 1 thread\n", "Per-sample-grads with vmap \n", "ft_compute_sample_grad(params, buffers, data, targets)\n", " 12.93 ms\n", " 1 measurement, 100 runs , 1 thread\n" ] } ], "id": "Zfnn2C2g-6Fb" }, { "cell_type": "code", "source": [ "get_perf(with_vmap_timing, \"vmap\", no_vmap_timing,\"no vmap\" )" ], "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "NV9R3LZQoavl", "outputId": "e11e8be9-287d-4e60-e517-e08f8d6909bd" }, "execution_count": null, "outputs": [ { "output_type": "stream", "name": "stdout", "text": [ " Performance delta: 517.5791 percent improvement with vmap \n" ] } ], "id": "NV9R3LZQoavl" }, { "cell_type": "markdown", "source": [ "There are other optimized solutions (like in https://github.com/pytorch/opacus) to computing per-sample-gradients in PyTorch that also perform better than the naive method. But it’s cool that composing `vmap` and `grad` give us a nice speedup.\n", "\n", "\n", "In general, vectorization with vmap should be faster than running a function in a for-loop and competitive with manual batching. There are some exceptions though, like if we haven’t implemented the vmap rule for a particular operation or if the underlying kernels weren’t optimized for older hardware (GPUs). If you see any of these cases, please let us know by opening an issue at our [GitHub](https://github.com/pytorch/functorch)!\n", "\n" ], "metadata": { "id": "UI74G9JarQU8" }, "id": "UI74G9JarQU8" } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.8.5" }, "colab": { "name": "per_sample_grads.ipynb", "provenance": [] } }, "nbformat": 4, "nbformat_minor": 5 }