audtorch.metrics.functional

The goal of the metrics functionals is to provide functions that work independently of the dimensions of the input signal and can easily be used to create additional metrics and losses.

pearsonr

audtorch.metrics.functional.pearsonr(x, y, batch_first=True)

Computes Pearson Correlation Coefficient across rows.

Pearson Correlation Coefficient (also known as Linear Correlation Coefficient or Pearson’s \(\rho\)) is computed as:

\[\rho = \frac {E[(X-\mu_X)(Y-\mu_Y)]} {\sigma_X\sigma_Y}\]

If the inputs are matrices, we assume that we are given a mini-batch of sequences, and the correlation coefficient is computed for each sequence independently and returned as a vector. If batch_first is True, we assume that every row represents a sequence in the mini-batch; otherwise we assume that the batch information is in the columns.
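
As an illustrative sketch of the two layouts (assuming pearsonr has been imported from audtorch.metrics.functional; the variable names are hypothetical):

>>> import torch
>>> from audtorch.metrics.functional import pearsonr
>>> x = torch.rand(3, 5)  # three sequences of length five, batch in rows
>>> y = torch.rand(3, 5)
>>> r_rows = pearsonr(x, y)  # batch_first=True is the default
>>> r_cols = pearsonr(x.t(), y.t(), batch_first=False)  # same data, batch in columns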

Warning

We do not account for the multi-dimensional case. This function has been tested only for 2D inputs, with either batch_first==True or batch_first==False. For inputs with more dimensions, the returned values may be meaningless.

Parameters:
  • x (torch.Tensor) – input tensor
  • y (torch.Tensor) – target tensor
  • batch_first (bool, optional) – controls if batch dimension is first. Default: True
Returns:

correlation coefficient between x and y

Return type:

torch.Tensor

Note

\(\sigma_X\) is computed using the PyTorch builtin Tensor.std(), which by default uses Bessel's correction:

\[\sigma_X=\sqrt{\frac{1}{N-1}\sum_{i=1}^N({x_i}-\bar{x})^2}\]

We therefore account for this correction in the computation of the covariance by normalizing the sum of the centered products with \(\frac{1}{N-1}\) as well.
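
As a plausibility check, and not the implementation itself, the coefficient for a single pair of vectors can be written with exactly these Bessel-corrected quantities (all names below are illustrative):

>>> import torch
>>> x = torch.rand(100)
>>> y = torch.rand(100)
>>> n = x.numel()
>>> # covariance normalized by N-1, matching Tensor.std()
>>> cov = ((x - x.mean()) * (y - y.mean())).sum() / (n - 1)
>>> rho = cov / (x.std() * y.std())  # the 1/(N-1) factors cancel in the ratio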

Shape:
  • Input: \((N, M)\) for correlation between matrices, or \((M)\) for correlation between vectors
  • Target: \((N, M)\) or \((M)\). Must have the same shape as the input
  • Output: \((N, 1)\) for correlation between matrices, or \((1)\) for correlation between vectors

Examples

>>> import torch
>>> _ = torch.manual_seed(0)
>>> input = torch.rand(3, 5)
>>> target = torch.rand(3, 5)
>>> output = pearsonr(input, target)
>>> print('Pearson Correlation between input and target is {0}'.format(output[:, 0]))
Pearson Correlation between input and target is tensor([ 0.2991, -0.8471,  0.9138])
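
The shape specification above also covers plain vectors; a minimal sketch of that case (output shape as documented, names illustrative):

>>> a = torch.rand(5)
>>> b = torch.rand(5)
>>> r = pearsonr(a, b)  # per the shape section, r has shape (1,)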

concordance_cc

audtorch.metrics.functional.concordance_cc(x, y, batch_first=True)

Computes Concordance Correlation Coefficient across rows.

Concordance Correlation Coefficient is computed as:

\[\rho_c = \frac {2\rho\sigma_X\sigma_Y} {\sigma_X^2 + \sigma_Y^2 + (\mu_X - \mu_Y)^2}\]

where \(\rho\) is the Pearson Correlation Coefficient, \(\sigma_X\), \(\sigma_Y\) are the standard deviations, and \(\mu_X\), \(\mu_Y\) the mean values of \(X\) and \(Y\), respectively.
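
For reference, a direct transcription of this formula for a single pair of vectors, using population (biased) statistics as written above; this is a sketch, not the library implementation, and all names are illustrative:

>>> import torch
>>> x = torch.rand(100)
>>> y = torch.rand(100)
>>> mu_x, mu_y = x.mean(), y.mean()
>>> var_x = x.var(unbiased=False)  # population variance
>>> var_y = y.var(unbiased=False)
>>> cov = ((x - mu_x) * (y - mu_y)).mean()  # 2*cov equals 2*rho*sigma_X*sigma_Y
>>> ccc = 2 * cov / (var_x + var_y + (mu_x - mu_y) ** 2)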

If the inputs are matrices, we assume that we are given a mini-batch of sequences, and the concordance correlation coefficient is computed for each sequence independently and returned as a vector. If batch_first is True, we assume that every row represents a sequence in the mini-batch; otherwise we assume that the batch information is in the columns.

Warning

We do not account for the multi-dimensional case. This function has been tested only for 2D inputs, with either batch_first==True or batch_first==False. For inputs with more dimensions, the returned values may be meaningless.

Note

\(\sigma_X\) is computed using the PyTorch builtin Tensor.std(), which by default uses Bessel's correction:

\[\sigma_X=\sqrt{\frac{1}{N-1}\sum_{i=1}^N({x_i}-\bar{x})^2}\]

We therefore account for this correction in the computation of the concordance correlation coefficient by multiplying each product of standard deviations with \(\frac{N-1}{N}\). This is equivalent to multiplying only \((\mu_X - \mu_Y)^2\) with \(\frac{N}{N-1}\); we choose the latter option for numerical stability.
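
Spelled out, with \(\hat\sigma_X\), \(\hat\sigma_Y\) denoting the Bessel-corrected standard deviations, the two options differ only in where the factor is placed:

\[\rho_c = \frac{2\rho\,\frac{N-1}{N}\hat\sigma_X\hat\sigma_Y}{\frac{N-1}{N}\left(\hat\sigma_X^2+\hat\sigma_Y^2\right) + (\mu_X-\mu_Y)^2} = \frac{2\rho\,\hat\sigma_X\hat\sigma_Y}{\hat\sigma_X^2+\hat\sigma_Y^2 + \frac{N}{N-1}(\mu_X-\mu_Y)^2}\]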

Parameters:
  • x (torch.Tensor) – input tensor
  • y (torch.Tensor) – target tensor
  • batch_first (bool, optional) – controls if batch dimension is first. Default: True
Returns:

concordance correlation coefficient between x and y

Return type:

torch.Tensor

Shape:
  • Input: \((N, M)\) for correlation between matrices, or \((M)\) for correlation between vectors
  • Target: \((N, M)\) or \((M)\). Must have the same shape as the input
  • Output: \((N, 1)\) for correlation between matrices, or \((1)\) for correlation between vectors

Examples

>>> import torch
>>> _ = torch.manual_seed(0)
>>> input = torch.rand(3, 5)
>>> target = torch.rand(3, 5)
>>> output = concordance_cc(input, target)
>>> print('Concordance Correlation between input and target is {0}'.format(output[:, 0]))
Concordance Correlation between input and target is tensor([ 0.2605, -0.7862,  0.5298])