
Adding A Constant To Loss Function In Tensorflow

I have asked a similar question before but got no response, so I am trying again. I am reading a paper which suggests adding some value, calculated outside of TensorFlow, into the loss function.

Solution 1:

In the equation above, as you can see, there is a chance that the outcome becomes very low, i.e. the problem of vanishing gradients may occur.

In order to alleviate that, the authors suggest adding a constant value to the loss.

Now, you can use a simple constant such as 1 or 10, or something proportional to what they describe.

You can easily calculate the expectation from the ground truth for one part. The other part is the tricky one, as you won't have predicted values until you train, and calculating them on the fly is not wise.

That term measures how much difference there will be between the ground truth and the predictions.

So, if you are going to implement this paper, add a constant value of 1 to your loss so that it doesn't vanish.
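As a minimal sketch of that advice (the function name and the default constant are mine, not from the paper), in plain TensorFlow:

import tensorflow as tf

def offset_bce(y_true, y_pred, constant=1.0):
    # Binary cross-entropy plus a fixed offset: the constant keeps the
    # reported loss value away from zero but contributes no gradient itself.
    return tf.keras.losses.binary_crossentropy(y_true, y_pred) + constant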

Solution 2:

It seems that you want to be able to define your own loss. Also, I am not sure whether you use actual Tensorflow or Keras. Here is a solution with Keras:

import tensorflow.keras.backend as K
from tensorflow.keras.models import Sequential

def my_custom_loss(precomputed_value):
    # Closure: the outer function captures the externally computed value,
    # the inner function has the (y_true, y_pred) signature Keras expects.
    def loss(y_true, y_pred):
        return K.binary_crossentropy(y_true, y_pred) + precomputed_value
    return loss

my_model = Sequential()
my_model.add(...)  # Add any layer there

my_model.compile(loss=my_custom_loss(42))
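As a hypothetical usage sketch (the toy model, data, and shapes are my own placeholders, not from the question):

import numpy as np
from tensorflow.keras.layers import Dense
from tensorflow.keras.models import Sequential

# Toy binary-classification setup, just to exercise the closure-based loss.
model = Sequential([Dense(1, activation="sigmoid", input_shape=(10,))])
model.compile(optimizer="adam", loss=my_custom_loss(42))

x = np.random.rand(32, 10).astype("float32")
y = np.random.randint(0, 2, size=(32, 1)).astype("float32")
model.fit(x, y, epochs=1, verbose=0)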

Inspired by https://towardsdatascience.com/advanced-keras-constructing-complex-custom-losses-and-metrics-c07ca130a618

EDIT: The answer above only covered adding a constant term, but I realize that the term suggested in the paper is not constant.

I haven't read the paper, but I suppose from the cross-entropy definition that sigma is the ground truth and p is the predicted value. If there are no other dependencies, the solution can be even simpler:

def my_custom_loss(y_true, y_pred):
    # Keras passes (y_true, y_pred) in this order to the loss function.
    norm_term = K.square(K.mean(y_true) - K.mean(y_pred))
    return K.binary_crossentropy(y_true, y_pred) + norm_term

# ...

my_model.compile(loss=my_custom_loss)

Here, I assumed the expectations are only computed on each batch. Tell me whether this is what you want. Otherwise, if you want to compute your statistics at a different scale, e.g. on the whole dataset after every epoch, you might need to use callbacks. In that case, please give more details about your problem, adding for instance a small example of y_pred and y_true, and the expected loss.
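For the whole-dataset case, one possible sketch (the variable name and callback are my own, assuming the full data fits in memory as NumPy arrays) is to keep the term in a non-trainable tf.Variable and refresh it from a callback after each epoch:

import tensorflow as tf
from tensorflow.keras.callbacks import Callback

norm_term = tf.Variable(0.0, trainable=False)  # updated between epochs

def loss_with_epoch_stat(y_true, y_pred):
    # The loss reads the variable's current value; no gradient flows into it.
    return tf.keras.losses.binary_crossentropy(y_true, y_pred) + norm_term

class UpdateNormTerm(Callback):
    # Recomputes the dataset-level statistic at the end of every epoch.
    def __init__(self, x_all, y_all):
        super().__init__()
        self.x_all, self.y_all = x_all, y_all

    def on_epoch_end(self, epoch, logs=None):
        preds = self.model.predict(self.x_all, verbose=0)
        norm_term.assign(float((self.y_all.mean() - preds.mean()) ** 2))

# model.fit(x_all, y_all, epochs=10, callbacks=[UpdateNormTerm(x_all, y_all)])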
