@sfblake
Created December 16, 2019 13:20
Negative binomial loss function
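
For reference, the loss below is the negative log-likelihood of a negative binomial distribution, under the parameterization where the probability of observing count $k$ given parameters $n$ and $p$ is

$$
P(Y = k) = \frac{\Gamma(n + k)}{\Gamma(n)\, k!}\, p^n (1 - p)^k
$$

Taking logs and negating gives

$$
\mathrm{nll} = \log\Gamma(n) + \log\Gamma(k + 1) - \log\Gamma(n + k) - n \log p - k \log(1 - p)
$$

which is exactly the lgamma/log expression implemented in the function.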
import tensorflow as tf


def negative_binomial_loss(y_true, y_pred):
    """
    Negative binomial loss function.
    Assumes tensorflow backend.

    Parameters
    ----------
    y_true : tf.Tensor
        Ground truth values of predicted variable.
    y_pred : tf.Tensor
        n and p values of predicted distribution.

    Returns
    -------
    nll : tf.Tensor
        Negative log likelihood.
    """
    # Separate the parameters
    n, p = tf.unstack(y_pred, num=2, axis=-1)

    # Add one dimension to make the right shape
    n = tf.expand_dims(n, -1)
    p = tf.expand_dims(p, -1)

    # Calculate the negative log likelihood
    nll = (
        tf.math.lgamma(n)
        + tf.math.lgamma(y_true + 1)
        - tf.math.lgamma(n + y_true)
        - n * tf.math.log(p)
        - y_true * tf.math.log(1 - p)
    )

    return nll
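
As a usage sketch (the architecture, sizes, and data here are illustrative, not from the gist): the loss expects the final axis of y_pred to hold the two parameters, with n > 0 and 0 < p < 1, which softplus and sigmoid output activations can enforce.

    import tensorflow as tf

    # Illustrative model: two heads produce the distribution parameters.
    # softplus keeps n strictly positive; sigmoid keeps p in (0, 1).
    inputs = tf.keras.Input(shape=(10,))
    hidden = tf.keras.layers.Dense(32, activation="relu")(inputs)
    n_head = tf.keras.layers.Dense(1, activation="softplus")(hidden)
    p_head = tf.keras.layers.Dense(1, activation="sigmoid")(hidden)
    outputs = tf.keras.layers.Concatenate(axis=-1)([n_head, p_head])  # shape (batch, 2)

    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer="adam", loss=negative_binomial_loss)

    # Dummy training data; targets are floats, since the loss applies lgamma to y_true
    x = tf.random.normal((256, 10))
    y = tf.random.poisson((256, 1), lam=3.0)
    model.fit(x, y, epochs=1, verbose=0)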
ThibaultDef commented May 25, 2023

Your equation ultimately equals -log( C(n + k - 1, k) * p^n * (1 - p)^k ).
This assumes that n represents the number of positive cases among the n + k - 1 instances. So you should also replace n with y_true in the n * tf.math.log(p) term, and y_true with n - 1 in the y_true * tf.math.log(1 - p) term (or with n, if you also replace n with n + 1 in the two lgamma terms that involve n).
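
One way to sanity-check the parameterization under discussion is against tensorflow_probability (a sketch, assuming tfp is available; if I read its convention correctly, tfp.distributions.NegativeBinomial counts successes before total_count failures, so its probs corresponds to 1 - p in the gist's notation):

    import tensorflow as tf
    import tensorflow_probability as tfp

    n, p, k = 5.0, 0.4, 3.0  # arbitrary test values

    # The gist's formula, evaluated directly
    manual = (
        tf.math.lgamma(n)
        + tf.math.lgamma(k + 1)
        - tf.math.lgamma(n + k)
        - n * tf.math.log(p)
        - k * tf.math.log(1 - p)
    )

    # tfp's pmf is C(k + n - 1, k) * probs^k * (1 - probs)^n, which matches
    # the gist's C(k + n - 1, k) * p^n * (1 - p)^k when probs = 1 - p
    dist = tfp.distributions.NegativeBinomial(total_count=n, probs=1 - p)

    print(float(manual), float(-dist.log_prob(k)))  # the two values should agree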
