
Sparse Data Operators

CUDA Operators

at::Tensor expand_into_jagged_permute_cuda(const at::Tensor &permute, const at::Tensor &input_offsets, const at::Tensor &output_offsets, int64_t output_size)

expand_into_jagged_permute expands the sparse data permute index from the table dimension to the batch dimension, for cases where the sparse features have different batch sizes across ranks.

Parameters:
  • permute – the table level permute index.

  • input_offsets – the exclusive offsets of table-level length.

  • output_offsets – the exclusive offsets of table-level permuted length.

The op expands the permute from the table level to the batch level by contiguously mapping each bag of a table to the position its batch occupies after the feature permute. The offset arrays of the tables and batches are used to compute the output permute.

Returns:

The output follows this formula:

output_permute[table_offset[permute[table]] + batch] <- bag_offset[batch]
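The following is a minimal pure-PyTorch sketch of the assumed semantics of this mapping; the function name and the Python loops are illustrative only, not the CUDA kernel:

import torch

def expand_into_jagged_permute_ref(permute, input_offsets, output_offsets, output_size):
    # Reference sketch under assumed semantics; the real operator is a CUDA kernel.
    # permute[t]        : index of the input table placed at output slot t
    # input_offsets[t]  : exclusive prefix sum of per-table bag counts (input order)
    # output_offsets[t] : exclusive prefix sum of per-table bag counts (permuted order)
    out = torch.empty(output_size, dtype=torch.long)
    num_tables = permute.numel()
    for t in range(num_tables):
        src = int(permute[t])
        start = int(output_offsets[t])
        end = int(output_offsets[t + 1]) if t + 1 < num_tables else output_size
        for b in range(end - start):  # one output entry per bag of this table
            out[start + b] = int(input_offsets[src]) + b
    return out

# Three tables with bag counts [2, 3, 1] in input order; the permute places
# table 2 first, then table 0, then table 1:
permute = torch.tensor([2, 0, 1])
input_offsets = torch.tensor([0, 2, 5])   # exclusive offsets, input order
output_offsets = torch.tensor([0, 1, 3])  # exclusive offsets, permuted order
print(expand_into_jagged_permute_ref(permute, input_offsets, output_offsets, 6))
# tensor([5, 0, 1, 2, 3, 4])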

CPU Operators

std::tuple<at::Tensor, at::Tensor> generic_histogram_binning_calibration_by_feature_cpu(const at::Tensor &logit, const at::Tensor &segment_value, const at::Tensor &segment_lengths, int64_t num_segments, const at::Tensor &bin_num_examples, const at::Tensor &bin_num_positives, const at::Tensor &bin_boundaries, double positive_weight, int64_t bin_ctr_in_use_after = 0, double bin_ctr_weight_value = 1.0)

Divide the prediction range (e.g., [0, 1]) into B bins. In each bin, use two parameters to store the number of positive examples and the number of examples that fall into this bucket, so we essentially have a histogram of the model prediction. As a result, for each bin we have a statistical value for the real CTR (num_pos / num_example). We use this statistical value as the final calibrated prediction if the pre-calibration prediction falls into the corresponding bin. In this way, the predictions within each bin should be well calibrated if we have sufficient examples; that is, this calibration module yields a fine-grained calibrated model. Theoretically, this calibration layer can fix any uncalibrated model or prediction if we have sufficient bins and examples.
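As a rough illustration of this idea, here is a minimal PyTorch sketch of uniform-bin calibration, not the FBGEMM implementation; for simplicity it uses the raw prediction in place of the calibration_target described in the parameter list below:

import torch

def binning_calibration_sketch(pred, bin_num_examples, bin_num_positives,
                               lower=0.0, upper=1.0,
                               bin_ctr_in_use_after=0, bin_ctr_weight=1.0):
    # pred: uncalibrated predictions in [lower, upper]; bin_num_examples and
    # bin_num_positives: per-bin running counts, both of length num_bins.
    num_bins = bin_num_examples.numel()
    bin_ids = ((pred - lower) / (upper - lower) * num_bins).long().clamp(0, num_bins - 1)
    # Statistical CTR observed in each bin (guard against empty bins).
    bin_ctr = bin_num_positives[bin_ids] / bin_num_examples[bin_ids].clamp(min=1.0)
    # Weighted sum of the statistical bin CTR and the fallback prediction.
    blended = bin_ctr_weight * bin_ctr + (1.0 - bin_ctr_weight) * pred
    # Only trust the bin CTR once the bin has seen enough examples.
    enough = bin_num_examples[bin_ids] >= bin_ctr_in_use_after
    return torch.where(enough, blended, pred), bin_ids

For the generic variant, which takes sorted bin_boundaries instead of uniform bins, the bin id would instead come from a binary search such as torch.bucketize(pred, bin_boundaries).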

An extension of the histogram binning calibration model that divides data into bins based on one specific feature and the prediction/ECTR range; within each bin it applies the same counting and calibration scheme described above.

Same as above, but accepts generic “bin_boundaries”, which is assumed to be sorted.

Parameters:
  • logit – the input tensor before applying sigmoid. Assumes positive-weight calibration is used for the calibration target, and

  • positive_weight – is passed as an input argument. The number of bins is automatically derived from bin_num_examples and bin_num_positives, which should have the same size.

  • lower/upper_bound – Bounds of the bins.

  • bin_ctr_in_use_after – We will use the calibration_target for the final calibrated prediction if we don't have sufficient examples; we only use the statistical value of the bin CTR after we observe bin_ctr_in_use_after examples that fall in this bin. Default value: 0.

  • bin_ctr_weight_value – Weight for the statistical value of the bin CTR. When this is specified, we compute a weighted sum of the statistical bin CTR and the calibration_target:

final_calibrated_prediction = bin_ctr_weight * bin_ctr + (1 - bin_ctr_weight) * calibration_target

Default value: 1.0

std::tuple<at::Tensor, at::Tensor> histogram_binning_calibration_cpu(const at::Tensor &logit, const at::Tensor &bin_num_examples, const at::Tensor &bin_num_positives, double positive_weight, double lower_bound = 0.0, double upper_bound = 1.0, int64_t bin_ctr_in_use_after = 0, double bin_ctr_weight_value = 1.0)

Parameters:

  • logit – the input tensor before applying sigmoid. Assumes positive-weight calibration is used for the calibration target, and positive_weight is passed as an input argument.

  • segment_value/lengths – Values and lengths in KeyJaggedTensor. Assumes the value of each length is either 0 or 1.

  • num_bins – The number of bins. It is no longer derived from bin_num_examples and bin_num_positives, which should still have the same size as each other.

  • lower/upper_bound – Bounds of the bins.

  • bin_ctr_in_use_after – We will use the calibration_target for the final calibrated prediction if we don't have sufficient examples; we only use the statistical value of the bin CTR after we observe bin_ctr_in_use_after examples that fall in this bin. Default value: 0.

  • bin_ctr_weight_value – Weight for the statistical value of the bin CTR. When this is specified, we compute a weighted sum of the statistical bin CTR and the calibration_target:

final_calibrated_prediction = bin_ctr_weight * bin_ctr + (1 - bin_ctr_weight) * calibration_target

Default value: 1.0

Returns:

[calibrated_prediction, bin_ids]

Returns:

[calibrated_prediction, bin_ids]

Returns:

calibrated_prediction.
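Assuming fbgemm_gpu is installed and registers these operators under torch.ops.fbgemm (an assumption; the exact registration names and dtype requirements may differ by version), a call might look like this sketch:

import torch
import fbgemm_gpu  # assumed: loading the package registers the operators

logit = torch.randn(8)
# Running per-bin statistics; float64 is assumed here for the counters.
bin_num_examples = torch.ones(100, dtype=torch.float64)
bin_num_positives = torch.zeros(100, dtype=torch.float64)

calibrated_prediction, bin_ids = torch.ops.fbgemm.histogram_binning_calibration(
    logit,
    bin_num_examples,
    bin_num_positives,
    0.4,  # positive_weight
    0.0,  # lower_bound
    1.0,  # upper_bound
    0,    # bin_ctr_in_use_after
    1.0,  # bin_ctr_weight_value
)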
