Submitted by MLNoober t3_xuogm3 in MachineLearning
nemoknows t1_iqwq1aw wrote
Reply to comment by happy_guy_2015 in [D] Why restrict to using a linear function to represent neurons? by MLNoober
Also, a linear transform of a linear transform is just another linear transform. You need those activation functions between your layers; otherwise stacking multiple layers is pointless.
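A minimal NumPy sketch of this point (the weight matrices here are arbitrary, just for illustration): two linear layers with no activation in between collapse into a single linear layer, while inserting a nonlinearity breaks that collapse.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two "layers" as plain weight matrices (no biases, for simplicity).
W1 = rng.normal(size=(4, 3))
W2 = rng.normal(size=(2, 4))
x = rng.normal(size=3)

# Applying both layers in sequence...
two_layers = W2 @ (W1 @ x)

# ...is identical to applying one combined linear layer W2 @ W1.
one_layer = (W2 @ W1) @ x
assert np.allclose(two_layers, one_layer)

# With a nonlinearity (e.g. ReLU) in between, the layers no longer
# collapse, so depth actually adds representational power.
def relu(z):
    return np.maximum(z, 0.0)

nonlinear = W2 @ relu(W1 @ x)
```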