I saw the question "What's the triplet loss back propagation gradient formula?".
There, the gradient with respect to the anchor is n - p, with respect to the positive it is p - a, and with respect to the negative it is a - n.
But lines 80 to 92 of triplet_loss_layer.cpp are different from this: there the anchor term is p - n and the positive term is p - a.
Which one is actually right?
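For reference, assuming the standard squared-Euclidean triplet loss with margin m (the exact form used by the layer may differ), the gradients work out to:

$$
L(a, p, n) = \max\left(0,\; \lVert a - p \rVert^2 - \lVert a - n \rVert^2 + m\right)
$$

$$
\frac{\partial L}{\partial a} = 2(a - p) - 2(a - n) = 2(n - p), \qquad
\frac{\partial L}{\partial p} = 2(p - a), \qquad
\frac{\partial L}{\partial n} = 2(a - n)
$$

(for active triplets, i.e. when the loss is positive; otherwise all three gradients are zero).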
1 Answer
Lines 80-92 in triplet_loss_layer.cpp are part of the forward_cpu function - that is the actual loss computation, NOT the gradient computation.
The gradient is computed in backward_cpu, where you can see that each bottom blob is assigned its diff according to the derivation presented here.
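For illustration, here is a minimal sketch of how such a backward pass assigns those diffs for a single triplet, assuming the standard squared-Euclidean loss; the function name, signature, and use of plain std::vector are my own, not the actual layer code:

    // Minimal sketch, NOT the actual triplet_loss_layer.cpp code.
    // Gradients of L = max(0, ||a-p||^2 - ||a-n||^2 + margin) w.r.t. each
    // input, for one triplet where the hinge is active (loss > 0);
    // for inactive triplets all three diffs would be zero.
    #include <cstddef>
    #include <vector>

    void triplet_backward(const std::vector<float>& a,  // anchor
                          const std::vector<float>& p,  // positive
                          const std::vector<float>& n,  // negative
                          std::vector<float>& a_diff,
                          std::vector<float>& p_diff,
                          std::vector<float>& n_diff) {
      for (std::size_t i = 0; i < a.size(); ++i) {
        a_diff[i] = 2.0f * (n[i] - p[i]);  // dL/da = 2(a-p) - 2(a-n) = 2(n-p)
        p_diff[i] = 2.0f * (p[i] - a[i]);  // dL/dp = -2(a-p)        = 2(p-a)
        n_diff[i] = 2.0f * (a[i] - n[i]);  // dL/dn =  2(a-n)
      }
    }

This matches the n - p / p - a / a - n terms from the question; the expressions in lines 80-92 belong to the loss value itself, not to its gradient.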
- Thanks for your help. I am training the triplet loss layer for face verification. Work based on GoogLeNet or VGGNet reports boosted performance with this layer, but I am not seeing that, and I would appreciate your help. – guochan zhang Mar 21 '16 at 08:08