CosineEmbeddingLoss
- class torch.nn.CosineEmbeddingLoss(margin=0.0, size_average=None, reduce=None, reduction='mean')[source]
Creates a criterion that measures the loss given input tensors $x_1$, $x_2$ and a Tensor label $y$ with values 1 or -1. Use $y = 1$ to maximize the cosine similarity of two inputs, and $y = -1$ otherwise. This is typically used for learning nonlinear embeddings or semi-supervised learning.
The loss function for each sample is:

$$
\text{loss}(x, y) =
\begin{cases}
1 - \cos(x_1, x_2), & \text{if } y = 1 \\
\max(0, \cos(x_1, x_2) - \text{margin}), & \text{if } y = -1
\end{cases}
$$
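As a minimal sanity-check sketch (not part of the official documentation; shapes and values are arbitrary), the per-sample formula can be reproduced with `torch.nn.functional.cosine_similarity` and compared against the module with `reduction='none'`:

>>> import torch
>>> import torch.nn as nn
>>> import torch.nn.functional as F
>>> x1, x2 = torch.randn(4, 8), torch.randn(4, 8)
>>> y = torch.tensor([1., -1., 1., -1.])
>>> cos = F.cosine_similarity(x1, x2, dim=1)
>>> # y = 1 branch: 1 - cos; y = -1 branch: max(0, cos - margin), with margin = 0 here
>>> manual = torch.where(y == 1, 1 - cos, cos.clamp(min=0))
>>> auto = nn.CosineEmbeddingLoss(margin=0.0, reduction='none')(x1, x2, y)
>>> torch.allclose(manual, auto)
True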
- Parameters
  - margin (float, optional) – Should be a number from $-1$ to $1$; $0$ to $0.5$ is suggested. If `margin` is missing, the default value is $0$.
  - size_average (bool, optional) – Deprecated (see `reduction`). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there are multiple elements per sample. If the field `size_average` is set to `False`, the losses are instead summed for each minibatch. Ignored when `reduce` is `False`. Default: `True`
  - reduce (bool, optional) – Deprecated (see `reduction`). By default, the losses are averaged or summed over observations for each minibatch depending on `size_average`. When `reduce` is `False`, returns a loss per batch element instead and ignores `size_average`. Default: `True`
  - reduction (str, optional) – Specifies the reduction to apply to the output: `'none'` | `'mean'` | `'sum'`. `'none'`: no reduction will be applied, `'mean'`: the sum of the output will be divided by the number of elements in the output, `'sum'`: the output will be summed. Note: `size_average` and `reduce` are in the process of being deprecated, and in the meantime, specifying either of those two args will override `reduction`. Default: `'mean'`
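As an illustration of how the `reduction` modes relate (a sketch; the shapes and targets are arbitrary), `'sum'` adds the per-sample losses and `'mean'` divides that sum by the number of samples:

>>> import torch
>>> import torch.nn as nn
>>> x1, x2 = torch.randn(4, 8), torch.randn(4, 8)
>>> y = torch.tensor([1., -1., 1., -1.])
>>> per_sample = nn.CosineEmbeddingLoss(reduction='none')(x1, x2, y)
>>> torch.allclose(nn.CosineEmbeddingLoss(reduction='sum')(x1, x2, y), per_sample.sum())
True
>>> torch.allclose(nn.CosineEmbeddingLoss(reduction='mean')(x1, x2, y), per_sample.mean())
True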
- Shape:
  - Input1: $(N, D)$ or $(D)$, where N is the batch size and D is the embedding dimension.
  - Input2: $(N, D)$ or $(D)$, same shape as Input1.
  - Target: $(N)$ or $()$.
  - Output: If `reduction` is `'none'`, then $(N)$, otherwise scalar.
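The unbatched shapes listed above are also accepted; a minimal sketch (input and target values chosen arbitrarily):

>>> import torch
>>> import torch.nn as nn
>>> x1 = torch.randn(5)    # Input1 of shape (D,)
>>> x2 = torch.randn(5)    # Input2 of shape (D,)
>>> y = torch.tensor(1.)   # scalar target, value 1 or -1
>>> out = nn.CosineEmbeddingLoss()(x1, x2, y)
>>> out.shape
torch.Size([])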
Examples:
>>> loss = nn.CosineEmbeddingLoss()
>>> input1 = torch.randn(3, 5, requires_grad=True)
>>> input2 = torch.randn(3, 5, requires_grad=True)
>>> target = torch.ones(3)
>>> output = loss(input1, input2, target)
>>> output.backward()
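A variant sketch continuing from the example above (the margin value here is illustrative): with `target = -1`, pairs whose cosine similarity already falls below the margin contribute zero loss.

>>> loss = nn.CosineEmbeddingLoss(margin=0.5)
>>> input1 = torch.randn(3, 5, requires_grad=True)
>>> input2 = torch.randn(3, 5, requires_grad=True)
>>> target = -torch.ones(3)   # push the pairs apart
>>> output = loss(input1, input2, target)
>>> output.backward()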