
Policy output tends to increase/decrease to 1 and freeze there (Using with TORCS) #7

Open
Amir-Ramezani opened this issue Mar 28, 2017 · 0 comments

Comments

@Amir-Ramezani

Hi,

First of all, thank you for your code. I also have a few problems and questions that I hope you can help me with.

The first problem is that I tried to use your code with TORCS and, unfortunately, the algorithm does not seem to learn. Looking at the policy output, it saturates to ±1 very quickly and stays there almost forever. Here is a sample of the output:

Action: [-0.87523234 0.6581533 -0.11968148]
Rewards: [[ 0.03384038 0.03384038 0.03384038]
[ 0.35649604 0.35649604 0.35649604]
[ 0.32099473 0.32099473 0.32099473]
[ 0.35958865 0.35958865 0.35958865]]
Policy out: [[-0.99999988 -0.99999821 0.99996996]
[-1. -1. 1. ]
[-1. -1. 1. ]
[-1. -1. 1. ]]
next_state_scores: [[-0.63144624 0.52066976 0.46819916]
[-0.94268441 0.87833565 0.83462358]
[-0.93066931 0.85972118 0.8131395 ]
[-0.96144539 0.90986496 0.8721453 ]]
ys: [[-0.59129143 0.54930341 0.49735758]
[-0.57676154 1.22604835 1.18277335]
[-0.6003679 1.17211866 1.12600279]
[-0.59224224 1.260355 1.22301245]]
qvals: [[-0.53078967 0.67661011 0.5653314 ]
[-0.82462442 0.92710859 0.85478604]
[-0.75297546 0.87841618 0.78751284]
[-0.87785703 0.95719421 0.90265679]]
temp_diff: [[ 0.06050175 0.1273067 0.06797382]
[-0.24786288 -0.29893976 -0.32798731]
[-0.15260756 -0.29370248 -0.33848995]
[-0.28561479 -0.30316079 -0.32035565]]
critic_loss: 1.71624
action_grads: [[ 2.43733026e-04 -1.14602779e-04 -1.56897455e-04]
[ 9.41888866e-05 -3.77293654e-05 -7.07318031e-05]
[ 1.33200549e-04 -5.56089872e-05 -9.60492107e-05]
[ 6.47661946e-05 -2.49565346e-05 -5.03367264e-05]]
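For reference, this is how I read the quantities in the log above (a minimal NumPy sketch of a standard DDPG critic target, not your code; the discount factor of 0.99 is my guess, inferred from the printed numbers):

```python
import numpy as np

gamma = 0.99  # my guess for the discount factor, inferred from the printed numbers

# Second row of the log above (the reward is repeated in each column)
rewards = np.array([0.35649604, 0.35649604, 0.35649604])
next_state_scores = np.array([-0.94268441, 0.87833565, 0.83462358])  # target Q'(s', mu'(s'))
qvals = np.array([-0.82462442, 0.92710859, 0.85478604])              # current Q(s, a)

# Standard DDPG Bellman target and TD error
ys = rewards + gamma * next_state_scores   # ~ [-0.5768, 1.2260, 1.1828], matches "ys"
temp_diff = qvals - ys                     # ~ [-0.2479, -0.2989, -0.3280], matches "temp_diff"
critic_loss = np.mean(temp_diff ** 2)      # squared TD error; the printed loss is presumably
                                           # computed over the whole batch (plus any regularization)
```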

My next question is about optimizing the critic network: since its learning rate is different from the actor network's, how did you actually optimize it? Also, can you tell me why your initialization of the actor and critic network layers differs from the one described in the paper?
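For context, this is how I understand the setup in the DDPG paper (Lillicrap et al.): hidden layers are initialized from U(-1/sqrt(f), 1/sqrt(f)) where f is the layer's fan-in, the final layers of both networks from U(-3e-3, 3e-3), and two separate Adam optimizers are used, with learning rate 1e-4 for the actor and 1e-3 for the critic. A rough PyTorch sketch just to make it concrete (sizes and structure are illustrative, not taken from your code):

```python
import math
import torch
import torch.nn as nn

def fan_in_uniform_(linear):
    # Hidden layers in the paper: U(-1/sqrt(f), 1/sqrt(f)), f = fan-in of the layer
    bound = 1.0 / math.sqrt(linear.weight.size(1))
    nn.init.uniform_(linear.weight, -bound, bound)
    nn.init.uniform_(linear.bias, -bound, bound)

# Illustrative sizes only (the printed actions above are 3-dimensional)
actor = nn.Sequential(
    nn.Linear(29, 400), nn.ReLU(),
    nn.Linear(400, 300), nn.ReLU(),
    nn.Linear(300, 3), nn.Tanh(),      # tanh keeps actions in [-1, 1]
)
# Simplified critic: the paper actually feeds the action in at the second hidden layer
critic = nn.Sequential(
    nn.Linear(29 + 3, 400), nn.ReLU(),
    nn.Linear(400, 300), nn.ReLU(),
    nn.Linear(300, 1),
)

for net, last in ((actor, actor[4]), (critic, critic[4])):
    for layer in net:
        if isinstance(layer, nn.Linear) and layer is not last:
            fan_in_uniform_(layer)
    # Final layer in the paper: U(-3e-3, 3e-3), so initial policy/Q outputs stay near zero
    nn.init.uniform_(last.weight, -3e-3, 3e-3)
    nn.init.uniform_(last.bias, -3e-3, 3e-3)

# Separate optimizers with the paper's learning rates: 1e-4 (actor), 1e-3 (critic)
actor_opt = torch.optim.Adam(actor.parameters(), lr=1e-4)
critic_opt = torch.optim.Adam(critic.parameters(), lr=1e-3)
```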

Thanks
