
MIKOLAJslippers t1_izy3vs3 wrote

I don’t think anyone really wants to do your homework for you, because it seems unlikely you weren’t told how to do this in some form.

What’s your starting point here? Are you using Python? Do you get to use numpy?

The maths you want is something like:

acts = act(in X w1)

to get the hidden activations, where act is your activation function, X is a matrix multiply, and w1 has dims len(in) by hidden size.

Then you do:

out = act(acts X w2)

where w2 has dims len(acts) by output dims.


Melodic-Oil-1971 t1_izy519e wrote

Yeah, I’ve got those. I know all the basics and I did all the processing on the data, I just can’t start building the neural net.


MIKOLAJslippers t1_izy9s51 wrote

So you will need to implement that maths in your chosen language (easiest would be Python and numpy, as the syntax is almost the same as what I shared). That’s the forward pass from inputs to outputs. You will also need to initialise the weight matrices w1 and w2 to something. Do you have any pretrained weights you can test it with? You may also need to add biases after the matmuls, depending on the brief; they’re usually there, but not strictly essential for the network to train.
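
A minimal numpy sketch of that forward pass might look like this (the sigmoid activation, the layer sizes and the random init here are placeholder assumptions, not from your brief):

    import numpy as np

    rng = np.random.default_rng(0)
    n_in, n_hidden, n_out = 4, 8, 3              # hypothetical dimensions

    w1 = rng.normal(0, 0.1, (n_in, n_hidden))    # len(in) by hidden size
    w2 = rng.normal(0, 0.1, (n_hidden, n_out))   # hidden size by output dims
    b1 = np.zeros(n_hidden)                      # optional biases
    b2 = np.zeros(n_out)

    def act(x):
        return 1.0 / (1.0 + np.exp(-x))          # sigmoid activation

    x = rng.normal(size=n_in)                    # one input vector
    acts = act(x @ w1 + b1)                      # hidden activations
    out = act(acts @ w2 + b2)                    # network outputs

The @ operator is numpy’s matrix multiply, i.e. the X in my notation above.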

Presumably you will also need to then train your network, so it’ll get a bit more tricky. You’ll need to implement a loss function based on the error between the outputs and some target variable. Once you have the loss, you can use the chain rule back through the network to get the delta w (weight gradients) for each weight (w1 and w2, and also any biases if you add those). You’ll then update your weights using some update rule, which is usually just subtracting the weight gradients multiplied by the learning rate (usually denoted alpha) from the current weights.
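
One training step, again just as a rough sketch under the same assumptions as above (sigmoid activations, a mean-squared-error loss; target and alpha are illustrative names, not from your brief):

    import numpy as np

    def act(x):
        return 1.0 / (1.0 + np.exp(-x))          # sigmoid

    rng = np.random.default_rng(0)
    n_in, n_hidden, n_out = 4, 8, 3
    w1 = rng.normal(0, 0.1, (n_in, n_hidden))
    w2 = rng.normal(0, 0.1, (n_hidden, n_out))
    b1, b2 = np.zeros(n_hidden), np.zeros(n_out)
    alpha = 0.1                                  # learning rate

    x = rng.normal(size=n_in)
    target = rng.normal(size=n_out)              # dummy target variable

    # forward pass
    acts = act(x @ w1 + b1)
    out = act(acts @ w2 + b2)
    loss = np.mean((out - target) ** 2)          # MSE loss

    # backward pass via the chain rule; sigmoid'(z) = a * (1 - a)
    d_out = 2 * (out - target) / n_out * out * (1 - out)
    dw2 = np.outer(acts, d_out)
    db2 = d_out
    d_hidden = (w2 @ d_out) * acts * (1 - acts)
    dw1 = np.outer(x, d_hidden)
    db1 = d_hidden

    # update rule: step each weight against its gradient
    w1 -= alpha * dw1
    b1 -= alpha * db1
    w2 -= alpha * dw2
    b2 -= alpha * db2

Run that in a loop over your data and the loss should go down.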

Is any of this helpful? Which bit do you still not understand?
