Loss calculation in the DeepFM model #18
+1
https://github.com/lambdaji/tf_repos/blob/master/deep_ctr/Model_pipeline/DeepFM.py#L189
If the loss here were computed only over the parameters involved in the current mini-batch, back-propagation would be much faster.
How would this be set up?
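One way to read the suggestion (a sketch, not code from the repo; `feat_ids` is assumed to be the per-batch feature-id tensor the model already feeds to `tf.nn.embedding_lookup`): penalize only the rows of `FM_W`/`FM_V` touched by the current mini-batch, so the gradient of the L2 term stays sparse.

```python
# Sketch: L2 penalty over only the embedding rows used in this mini-batch.
# `feat_ids` (assumed): [batch_size, field_size] int tensor of feature ids.
batch_w = tf.nn.embedding_lookup(FM_W, feat_ids)  # rows of FM_W for this batch
batch_v = tf.nn.embedding_lookup(FM_V, feat_ids)  # rows of FM_V for this batch

loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=y, labels=labels)) + \
       l2_reg * tf.nn.l2_loss(batch_w) + \
       l2_reg * tf.nn.l2_loss(batch_v)
```

Ids that repeat within a batch are penalized more than once here; deduplicating with `tf.unique(tf.reshape(feat_ids, [-1]))` would avoid that if it matters.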
I tested this, and strangely the results were almost identical with and without it.
When using a batch normalization layer, don't the update ops need to be added as well?
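For context, the standard TF1 pattern being asked about (a sketch; `optimizer` stands for whatever optimizer the model builds): `tf.layers.batch_normalization` puts its moving-average updates into `tf.GraphKeys.UPDATE_OPS`, and they only run if the train op depends on them.

```python
# Batch-norm moving mean/variance updates live in UPDATE_OPS and are not
# executed automatically; tie them to the train step explicitly:
update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
with tf.control_dependencies(update_ops):
    train_op = optimizer.minimize(loss, global_step=tf.train.get_global_step())
```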
Around line 165, when the deep fully-connected layers are built, L2 regularization is attached to the weight variables:
```python
y_deep = tf.contrib.layers.fully_connected(inputs=deep_inputs, num_outputs=1, activation_fn=tf.identity,
                                           weights_regularizer=tf.contrib.layers.l2_regularizer(l2_reg),
                                           scope='deep_out')
```
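Note that `weights_regularizer` only registers the penalty in a graph collection; nothing adds it to the training loss automatically. A quick check (not repo code):

```python
# Each regularized layer contributes one scalar tensor to this collection.
reg_losses = tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES)
print(reg_losses)  # non-empty after the fully_connected call above
```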
Then the loss function is defined around line 189:
```python
loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=y, labels=labels)) + \
       l2_reg * tf.nn.l2_loss(FM_W) + \
       l2_reg * tf.nn.l2_loss(FM_V)
```
As I understand it, this loss never picks up the regularization terms registered earlier via weights_regularizer, so it should be changed to:
```python
loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=y, labels=labels)) + \
       l2_reg * tf.nn.l2_loss(FM_W) + \
       l2_reg * tf.nn.l2_loss(FM_V) + \
       tf.reduce_sum(tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES))
```
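That works: `tf.reduce_sum` stacks the list of scalar collection entries and sums them, and since `l2_regularizer(l2_reg)` already scales each entry by `l2_reg`, the added term correctly isn't scaled again. A slightly more common TF1 spelling (a sketch) uses `tf.add_n` or the built-in helper:

```python
# Sum the registered penalties, guarding against an empty collection;
# tf.losses.get_regularization_loss() is an equivalent built-in helper.
reg_losses = tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES)
reg_term = tf.add_n(reg_losses) if reg_losses else tf.constant(0.0)

loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=y, labels=labels)) + \
       l2_reg * tf.nn.l2_loss(FM_W) + \
       l2_reg * tf.nn.l2_loss(FM_V) + \
       reg_term
```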