AmalgamDragon t1_ja5lz5b wrote
Reply to comment by currentscurrents in [D] Isn't self-supervised learning(SSL) simply a kind of SL? by Linear--
This really comes down to how 'reward' is defined. I think we likely disagree on that definition, with yours being a lot narrower than mine. For example, during cooking there is usually a point before the meal is done where it 'smells good', which is a reward. There's dopamine release as well, which could be triggered on completing some of the steps (I don't know whether that's the case or not), but simply observing that a step is complete is rewarding for lots of folks.
> Pure RL will quickly teach you not to touch the burner, but it really struggles with tasks that involve planning or delayed rewards.
That depends on which algorithm you're using, but PPO (Proximal Policy Optimization) can handle delayed rewards quite well.
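For a concrete sketch of PPO on a delayed-reward task (the `CookingEnv` below is a toy environment invented for this example, and it assumes `gymnasium` and `stable-baselines3` are installed), something like this works:

```python
# Minimal sketch, not a definitive setup: PPO on a task where the only
# reward arrives once all steps of a multi-step "recipe" are complete.
import gymnasium as gym
import numpy as np
from stable_baselines3 import PPO

class CookingEnv(gym.Env):
    """Hypothetical toy task: finish 5 steps in order; reward only at the end."""
    def __init__(self):
        self.observation_space = gym.spaces.Box(0.0, 1.0, shape=(5,), dtype=np.float32)
        self.action_space = gym.spaces.Discrete(5)  # attempt one of the 5 steps
        self._done = np.zeros(5, dtype=np.float32)

    def reset(self, *, seed=None, options=None):
        super().reset(seed=seed)
        self._done = np.zeros(5, dtype=np.float32)
        return self._done.copy(), {}

    def step(self, action):
        # A step only completes if every earlier step is already done.
        if self._done[:action].all():
            self._done[action] = 1.0
        terminated = bool(self._done.all())
        reward = 1.0 if terminated else 0.0  # delayed: paid only when the "meal" is done
        return self._done.copy(), reward, terminated, False, {}

env = gym.wrappers.TimeLimit(CookingEnv(), max_episode_steps=50)
model = PPO("MlpPolicy", env, verbose=0)
model.learn(total_timesteps=50_000)  # GAE credit assignment spreads the final reward backward
```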
currentscurrents t1_ja5n5xi wrote
Those are all internal rewards, which your brain creates because it knows, according to its world model, that these events lead to real rewards. It can only do this because it has learned to predict the future.
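That mechanism has a standard formalization: potential-based reward shaping (Ng, Harada & Russell, 1999), where a value estimate stands in for the world model's prediction. A minimal sketch, with hand-picked numbers purely for illustration:

```python
# Minimal sketch of potential-based reward shaping: internal rewards derived
# from an estimate of "how close am I to the real reward". The values v_s and
# v_s_next stand in for a learned world model's predictions; gamma is the
# usual discount factor.
GAMMA = 0.99

def shaped_reward(env_reward: float, v_s: float, v_s_next: float) -> float:
    """r' = r + gamma * V(s') - V(s).

    Steps the model predicts lead toward the real reward ("it smells good")
    earn positive internal reward now, without changing which policies are
    optimal (Ng, Harada & Russell, 1999).
    """
    return env_reward + GAMMA * v_s_next - v_s

# Example: the environment pays 0 mid-episode, but the model predicts the new
# state is closer to the goal, so the agent still gets a learning signal.
print(shaped_reward(env_reward=0.0, v_s=0.4, v_s_next=0.7))  # ≈ 0.293
```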
>PPO can handle this quite well.
"Quite well" is still trying random actions millions of times. World modeling allows you to learn from two orders of magnitude less data.