McMa t1_iqrd23r wrote
Regardless of your end goals and whether or not this makes sense, it is not too difficult to implement:
Let’s create a “virtual hidden layer” between the input and hidden layers. The weights (and biases) between the input and the virtual hidden layer are normal weights, just like the ones in your network. Between the virtual hidden layer and the original hidden layer, the weight connecting each virtual node to its corresponding original node is frozen at 1 (bias 0), while the remaining weights between the two layers are trainable and act as the connections between hidden units. And voilà, now you are passing the formerly unknown values from the hidden unit nodes to each other.
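For concreteness, here is a minimal sketch of that construction, assuming PyTorch; the module name `LateralHidden`, the zero initialization of the lateral weights, and the masking trick used to freeze the diagonal are illustrative choices, not something prescribed in the original idea.

```python
import torch
import torch.nn as nn

class LateralHidden(nn.Module):
    """Sketch of a hidden layer whose units are connected to each other,
    implemented as two stacked layers: a 'virtual' layer holding the usual
    input->hidden weights, followed by a layer whose diagonal is frozen at 1
    (each node forwards its own value) and whose off-diagonal entries are the
    trainable lateral connections."""

    def __init__(self, in_features: int, hidden_features: int):
        super().__init__()
        # Normal, trainable input -> virtual hidden layer weights and biases.
        self.input_to_virtual = nn.Linear(in_features, hidden_features)
        # Trainable lateral weights between hidden units (no bias).
        self.lateral = nn.Parameter(torch.zeros(hidden_features, hidden_features))
        # Identity mask used to keep the diagonal fixed at 1.
        self.register_buffer("eye", torch.eye(hidden_features))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        virtual = self.input_to_virtual(x)  # the "virtual hidden layer"
        # Effective weight matrix: frozen 1s on the diagonal, trainable
        # lateral weights everywhere else (diagonal entries of self.lateral
        # are multiplied by zero, so they never receive gradient).
        weight = self.eye + self.lateral * (1 - self.eye)
        return virtual @ weight.t()
```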
I’m not sure what this would be good for, but let us know if you find something interesting.
McMa t1_iqtcdfp wrote
Reply to comment by abystoma in [D] Hidden unit connected to each other in a single layer by [deleted]
Just a normal hidden layer. The connections between nodes of the same layer in your schema can be thought of as two layers where each node forwards its exact value to the corresponding node of the next layer (weight frozen to 1), and the rest of the weights get trained.
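Using the `LateralHidden` sketch from the comment above (still only an illustration, assuming PyTorch), the partially frozen second layer drops into a model like any other hidden layer; the layer sizes here are arbitrary:

```python
import torch
import torch.nn as nn

# Hypothetical usage: the lateral layer behaves like a normal hidden layer,
# so it slots into an ordinary feed-forward stack.
model = nn.Sequential(
    LateralHidden(in_features=20, hidden_features=64),  # sketch defined above
    nn.ReLU(),
    nn.Linear(64, 10),
)
out = model(torch.randn(8, 20))  # batch of 8 inputs -> (8, 10) outputs
```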