Comments


isuckwithusernames t1_izy2zwq wrote

That information has unfortunately been lost in time.

Put your exact question into google. The first... 50 links explain how to do it.

4

MIKOLAJslippers t1_izy3vs3 wrote

I don’t think anyone really wants to do your homework for you, because it seems unlikely you weren’t told how to do this in some form.

What’s your starting point here? Are you using Python? Do you get to use numpy?

The maths you want is something like:

acts = act(in X w1)

to get the hidden activations, where act is your activation function, X is a matrix multiply, and w1 has dims len(in) by hidden size.

Then you do:

out = act(acts X w2)

where w2 has dims len(acts) by the output dims.
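
In numpy that forward pass might look something like this (just a sketch; the sigmoid, the sizes and the random input are placeholders I’ve picked for illustration):

    import numpy as np

    def act(x):
        # sigmoid as a stand-in activation function
        return 1.0 / (1.0 + np.exp(-x))

    in_size, hidden_size, out_size = 4, 8, 2   # arbitrary example sizes
    rng = np.random.default_rng(0)
    w1 = rng.standard_normal((in_size, hidden_size))    # len(in) by hidden size
    w2 = rng.standard_normal((hidden_size, out_size))   # len(acts) by output dims

    x = rng.standard_normal(in_size)   # one example input vector
    acts = act(x @ w1)                 # hidden activations
    out = act(acts @ w2)               # network outputs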

3

LW_Master t1_izy5gy1 wrote

The only method I know is matrix multiplication, but that’s just the forward part. The backward part needs an understanding of partial derivatives. The code will vary according to the language...
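
For example, with a sigmoid activation the partial derivative you need in the backward part has a neat closed form (just a sketch, assuming sigmoid):

    import numpy as np

    def act(z):
        return 1.0 / (1.0 + np.exp(-z))    # sigmoid

    def act_deriv(a):
        # d(sigmoid)/dz, written in terms of the output a = act(z)
        return a * (1.0 - a)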

1

MIKOLAJslippers t1_izy9s51 wrote

So you will need to implement that maths in your chosen language (easiest would be Python and numpy, as the syntax is almost the same as what I shared). That’s the forward pass from inputs to outputs. You will also need to initialise the weight matrices w1 and w2 to something. Do you have any pretrained weights you can test it with? You may also need to add biases after the matmuls, depending on the brief. That’s usually the case, but not strictly essential to make it train.
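
Something like this for the initialisation (small random weights and zero biases; the 0.1 scale is just a common heuristic I’m assuming, not anything required):

    import numpy as np

    def act(z):
        return 1.0 / (1.0 + np.exp(-z))    # sigmoid, as in the earlier sketch

    rng = np.random.default_rng(0)
    in_size, hidden_size, out_size = 4, 8, 2            # example sizes
    w1 = rng.standard_normal((in_size, hidden_size)) * 0.1
    b1 = np.zeros(hidden_size)                          # bias after the first matmul
    w2 = rng.standard_normal((hidden_size, out_size)) * 0.1
    b2 = np.zeros(out_size)                             # bias after the second matmul

    # forward pass with the biases added after the matmuls
    x = rng.standard_normal(in_size)
    acts = act(x @ w1 + b1)
    out = act(acts @ w2 + b2)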

Presumably you will also need to train your network, so it’ll get a bit more tricky. You’ll need to implement a loss function based on the error between the outputs and some target variable. Once you have the loss, you can use the chain rule back through the network to get the delta w (weight gradients) for each weight (w1 and w2, and also any biases if you add those). You’ll then update your weights using some update rule, which is usually just subtracting the weight gradients multiplied by the learning rate (usually denoted alpha).
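
Putting that together, one training step might look roughly like this (a sketch assuming sigmoid activations and a squared-error loss; the input and target here are dummy values):

    import numpy as np

    rng = np.random.default_rng(0)
    in_size, hidden_size, out_size = 4, 8, 2
    w1 = rng.standard_normal((in_size, hidden_size)) * 0.1
    w2 = rng.standard_normal((hidden_size, out_size)) * 0.1
    b1, b2 = np.zeros(hidden_size), np.zeros(out_size)
    alpha = 0.1                                  # learning rate

    def act(z):
        return 1.0 / (1.0 + np.exp(-z))          # sigmoid

    x = rng.standard_normal(in_size)             # dummy input
    target = np.array([0.0, 1.0])                # dummy target

    # forward pass
    acts = act(x @ w1 + b1)
    out = act(acts @ w2 + b2)
    loss = 0.5 * np.sum((out - target) ** 2)     # squared-error loss

    # backward pass: chain rule through each layer
    d_out = (out - target) * out * (1 - out)     # dL/d(output pre-activation)
    d_hid = (d_out @ w2.T) * acts * (1 - acts)   # dL/d(hidden pre-activation)

    # weight gradients, then the update rule: w -= alpha * gradient
    w2 -= alpha * np.outer(acts, d_out)
    b2 -= alpha * d_out
    w1 -= alpha * np.outer(x, d_hid)
    b1 -= alpha * d_hid

You’d repeat that step over your training data until the loss stops dropping.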

Is any of this helpful? Which bit do you still not understand?

1