Loss computation in the DeepFM model #18

Open

eralvc opened this issue Oct 26, 2018 · 5 comments

Comments


eralvc commented Oct 26, 2018

Around line 165, when the deep fully-connected layers are built, an L2 regularizer is attached to every weight variable:

y_deep = tf.contrib.layers.fully_connected(inputs=deep_inputs, num_outputs=1, activation_fn=tf.identity,
                                           weights_regularizer=tf.contrib.layers.l2_regularizer(l2_reg), scope='deep_out')

Then the loss is defined around line 189:

loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=y, labels=labels)) + \
       l2_reg * tf.nn.l2_loss(FM_W) + \
       l2_reg * tf.nn.l2_loss(FM_V)

As I understand it, this loss never pulls in the penalties that weights_regularizer recorded for those deep-layer variables, so it should be changed to:

loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=y, labels=labels)) + \
       l2_reg * tf.nn.l2_loss(FM_W) + \
       l2_reg * tf.nn.l2_loss(FM_V) + \
       tf.reduce_sum(tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES))
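
For context, a minimal sketch (TensorFlow 1.x, hypothetical input shapes) of why this matters: weights_regularizer only records each penalty in the REGULARIZATION_LOSSES collection; nothing adds it to the training loss automatically.

import tensorflow as tf

l2_reg = 0.01
x = tf.placeholder(tf.float32, [None, 16])  # hypothetical input

# Each regularized variable gets a penalty tensor appended to
# tf.GraphKeys.REGULARIZATION_LOSSES; the layer's output is unchanged.
h = tf.contrib.layers.fully_connected(
    inputs=x, num_outputs=1, activation_fn=tf.identity,
    weights_regularizer=tf.contrib.layers.l2_regularizer(l2_reg))

reg_losses = tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES)
base_loss = tf.reduce_mean(tf.square(h))       # stand-in for the real CE loss
total_loss = base_loss + tf.add_n(reg_losses)  # the penalties must be added by hand

tf.add_n(reg_losses) and tf.reduce_sum(tf.get_collection(...)) are equivalent here; TF 1.x also provides tf.losses.get_regularization_loss() for the same purpose.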

@lambdaji (Owner)

@JenkinsY94

https://github.com/lambdaji/tf_repos/blob/master/deep_ctr/Model_pipeline/DeepFM.py#L189

loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=y, labels=labels)) + \
       l2_reg * tf.nn.l2_loss(FM_W) + \
       l2_reg * tf.nn.l2_loss(FM_V)

If the regularization term here were computed only over the parameters actually touched by the current mini-batch, back-propagation would be much faster.
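
One way to read this suggestion, as a sketch: regularize only the embedding rows looked up for the current batch, so the regularizer's gradient stays as sparse as the data term's. Assumptions here: FM_V is the full embedding table and feat_ids holds the batch's feature indices, as in DeepFM.py.

# Sketch only: penalize just the rows of FM_V used in this step.
embeddings = tf.nn.embedding_lookup(FM_V, feat_ids)  # [batch, field, k]
batch_l2 = l2_reg * tf.nn.l2_loss(embeddings)        # sparse gradient w.r.t. FM_V

loss = tf.reduce_mean(
    tf.nn.sigmoid_cross_entropy_with_logits(logits=y, labels=labels)) + batch_l2

tf.nn.l2_loss(FM_V) over the whole table makes the regularizer's gradient dense, defeating the sparse embedding updates; restricting it to the lookup keeps each step's cost proportional to the batch, which is presumably the speed-up being described.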


nikenj commented Nov 12, 2018

> https://github.com/lambdaji/tf_repos/blob/master/deep_ctr/Model_pipeline/DeepFM.py#L189
>
> If the regularization term here were computed only over the parameters actually touched by the current mini-batch, back-propagation would be much faster.

How would this be set up?

@qk-huang

I tested this, and oddly enough the results are almost identical with and without the extra regularization term.
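
One possible explanation is that the collected penalty is simply tiny relative to the cross-entropy term. A quick check (hypothetical session; run after the graph is built):

reg_losses = tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES)
reg_total = tf.add_n(reg_losses) if reg_losses else tf.constant(0.0)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # If this value is orders of magnitude below the data loss, adding it
    # will barely move the trained metrics.
    print(sess.run(reg_total))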


nikenj commented Nov 14, 2018

When using a batch-norm layer, don't you also need to add these update ops?

# UPDATE_OPS holds batch norm's moving-mean/variance update ops; they must
# run with every training step or the inference-time statistics stay stale.
update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
with tf.control_dependencies(update_ops):
    train_op = optimizer.minimize(loss, global_step=tf.train.get_global_step())
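
For reference, the usual TF 1.x pattern as a self-contained sketch (assuming tf.layers.batch_normalization with training=True on the training graph; the names below are illustrative, not taken from DeepFM.py):

import tensorflow as tf

x = tf.placeholder(tf.float32, [None, 8])  # hypothetical input
labels = tf.placeholder(tf.float32, [None, 1])

net = tf.layers.batch_normalization(x, training=True)
logits = tf.layers.dense(net, 1)
loss = tf.reduce_mean(
    tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))

# batch_normalization registers its moving-average updates in UPDATE_OPS;
# without the control dependency they never run, and the statistics used
# at inference time stay at their initial values.
update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
with tf.control_dependencies(update_ops):
    train_op = tf.train.AdamOptimizer(1e-3).minimize(
        loss, global_step=tf.train.get_or_create_global_step())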
