XecutionStyle t1_j922qne wrote
Reply to comment by Anti-Queen_Elle in [D] Please stop by [deleted]
I don't know
XecutionStyle t1_j91aa70 wrote
Reply to [D] Please stop by [deleted]
You don't know the capacity of what you're making until you make it though
XecutionStyle t1_j78shby wrote
Reply to comment by hiro_ono in please help a bunch of students?(with pre annotated data set) we were assigned to this task with no prior knowledge of ML i don't know where to begin with we tried a couple of method which ultimately failed id be thankful for anyone who would tell me in steps what to do with this data[D] by errorr_unknown
^ Simply this
XecutionStyle t1_j78jsua wrote
Reply to please help a bunch of students?(with pre annotated data set) we were assigned to this task with no prior knowledge of ML i don't know where to begin with we tried a couple of method which ultimately failed id be thankful for anyone who would tell me in steps what to do with this data[D] by errorr_unknown
Error drives learning:
If Error ∝ (Target - Output)
Then you start your network with random weights (so the Output is random and error is large). When you pass the error back through the network, the weights are adjusted proportional to the error. Over time, the weights will settle where (Target - Output) is as low as possible.
This concept holds in any situation: if you're working with image data, no matter what architecture produces the Output, you still compare it with the Target (or 'label', the length of the Pagrus in your case) and pass the Error back through the network to improve it iteratively.
Try building the simplest neuron: 1 input -> 1 output and use backpropagation to train until convergence.
For your assignment you could use a CNN to produce the Output (though a simple feed-forward network would work too, since you're only outputting one value for total length, so it's really a regression task). A CNN's weights are internally shared (the window you shift across the image), but they're trained the same way: compute the Output, compare it with the actual length of the Pagrus fish you passed in as input to get the Error, and apply the method above to improve the network for the task.
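The "simplest neuron" suggested above can be sketched in a few lines of plain Python: one weight, one bias, trained by backpropagation on a squared-error loss. The target function y = 3x + 1 and the learning rate are arbitrary choices for illustration.

```python
# One input -> one output neuron, trained until (Target - Output) is small.

def train_neuron(data, lr=0.01, epochs=2000):
    w, b = 0.5, 0.0                       # initial weights -> Output is off, Error is large
    for _ in range(epochs):
        for x, target in data:
            output = w * x + b            # forward pass
            error = target - output       # Error ∝ (Target - Output)
            # backward pass: adjust weights proportional to the error
            w += lr * error * x
            b += lr * error
    return w, b

# Toy dataset sampled from y = 3x + 1
data = [(x / 10, 3 * (x / 10) + 1) for x in range(-10, 11)]
w, b = train_neuron(data)
print(round(w, 2), round(b, 2))           # should settle near 3.0 and 1.0
```

Once this converges, the exact same loop generalizes to many weights: the fish-length regression is the same idea with a bigger forward pass.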
XecutionStyle t1_j6ownha wrote
Reply to 16-bit Neural Networks by [deleted]
I think it'll happen silently as an update to existing frameworks because there isn't much to optimize.
XecutionStyle t1_j6ggq37 wrote
Reply to comment by suflaj in Why did the original ResNet paper not use dropout? by V1bicycle
BN is used to reduce covariate shift; it just happened to regularize. Dropout as a regularizing technique hadn't become big before ResNet (2014 vs. 2015).
I doubt they're effectively the same, as you're saying. Try putting one after the other and see the effect. Two dropout layers, or two BN layers, in contrast have no problem co-existing.
Edit: sorry, what I meant is that the variants of dropout that work with CNNs (without detrimental effects) didn't exist back then.
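A rough numpy sketch of the contrast drawn above (shapes and the shift/scale of the fake activations are made up for illustration): batch norm standardizes each feature across the batch, which is what addresses covariate shift, while dropout randomly zeroes activations, which is what regularizes. They transform activations in fundamentally different ways.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=5.0, scale=2.0, size=(64, 8))  # shifted, scaled activations

def batch_norm(x, eps=1e-5):
    mean = x.mean(axis=0)                 # per-feature statistics over the batch
    var = x.var(axis=0)
    return (x - mean) / np.sqrt(var + eps)  # learnable gamma/beta omitted

def dropout(x, p=0.5):
    mask = rng.random(x.shape) > p        # keep roughly (1 - p) of activations
    return x * mask / (1 - p)             # inverted-dropout scaling

bn_out = batch_norm(x)
drop_out = dropout(x)
print(bn_out.mean(), bn_out.std())        # roughly 0 and 1: distribution fixed
print((drop_out == 0).mean())             # roughly 0.5: activations zeroed instead
```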
XecutionStyle t1_j5hkq3m wrote
Reply to comment by BullyMaguireJr in arXiv Feed: Keep up with AI research, the easy way. by BullyMaguireJr
Excellent, thanks!
XecutionStyle t1_j5hj63c wrote
Is it possible to sign-up for the newsletter to periodically email me on certain topics (using the semantic search, say "motor control for robotics") or keywords like "VR/Wifi"?
XecutionStyle OP t1_j4yyvig wrote
Reply to comment by FastestLearner in A correct method beats pre-training any day by XecutionStyle
Yes. A "better" method makes less sense in context it seems.
Submitted by XecutionStyle t3_10fslf2 in deeplearning
XecutionStyle t1_j4v0dui wrote
Reply to comment by tsgiannis in Why a pretrained model returns better accuracy than the implementation from scratch by tsgiannis
When you replace the top layer and train the model, are the previous layers allowed to change?
XecutionStyle t1_j4uu8r4 wrote
Reply to Why a pretrained model returns better accuracy than the implementation from scratch by tsgiannis
Are you fixing the weights of the earlier layers?
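A toy numpy sketch of what "fixing the weights of the earlier layers" means when fine-tuning: here W1 plays the pretrained backbone and stays frozen, while only the replaced top layer W2 receives gradient updates. All shapes, the synthetic data, and the learning rate are arbitrary illustrations, not the poster's actual setup.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 4))
y = X @ rng.normal(size=(4, 1))          # synthetic regression target

W1 = rng.normal(size=(4, 8))             # "pretrained" layer: frozen
W2 = rng.normal(size=(8, 1)) * 0.1       # replaced top layer: trainable

def mse():
    h = np.maximum(X @ W1, 0)            # ReLU feature extractor
    return float(np.mean((h @ W2 - y) ** 2))

W1_before = W1.copy()
loss_before = mse()
lr = 0.01
for _ in range(500):
    h = np.maximum(X @ W1, 0)            # frozen features (W1 never updated)
    out = h @ W2
    grad_out = 2 * (out - y) / len(X)    # dLoss/dOutput for MSE
    W2 -= lr * (h.T @ grad_out)          # update only the top layer
loss_after = mse()

print(loss_before, loss_after)           # loss falls while W1 stays untouched
```

If the earlier layers were allowed to change too (full fine-tuning), W1 would drift away from its pretrained values, which is exactly the distinction the question is probing.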
XecutionStyle t1_iz0uwak wrote
Reply to Since AI is poised to disrupt/aid in/replace many technical and creative jobs. Is it logical to assume that studying the field of AI/machine learning/DL is a way to future proof your employment for a while? by pawnh4
It's hard to predict exactly, and it requires one of two things:
a) Doing full-time research on new methods
b) Being the one with the breakthroughs

b) is hard, and nobody I know pays for a).
We're confined to jobs related to research or applying said research.
XecutionStyle t1_ir52ztc wrote
Reply to comment by CremeEmotional6561 in A wild question? Why CNNs are not aware of visual quality? [D] by ThoughtOk5558
That'd lower the confidence scores, but relatively they'd still be just as falsely confident.
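The point can be shown with a quick numpy sketch (the logits and the temperature are arbitrary examples): scaling logits down lowers every confidence score, but the relative ordering is untouched, so a wrong top-1 prediction stays the top-1 prediction.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())              # subtract max for numerical stability
    return e / e.sum()

logits = np.array([4.0, 1.0, 0.5])       # arbitrary example logits

p_sharp = softmax(logits)                # confident distribution
p_soft = softmax(logits / 4.0)           # temperature T=4: flatter scores

print(p_sharp.max(), p_soft.max())       # max confidence drops
print(np.argmax(p_sharp) == np.argmax(p_soft))  # ranking unchanged
```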
XecutionStyle t1_ir20qoc wrote
Reply to comment by saw79 in A wild question? Why CNNs are not aware of visual quality? [D] by ThoughtOk5558
I don't think it's nebulous. We infuse knowledge, biases, priors, etc., like physics (in Lagrangian networks), all the time. I was just addressing his last point: there's no analytical solution for quality we can use as labels.
Networks can learn the difference between pretty and ugly semantically with tons of data, and only with tons of data.
XecutionStyle t1_ir0fgw6 wrote
Reply to comment by ThoughtOk5558 in A wild question? Why CNNs are not aware of visual quality? [D] by ThoughtOk5558
See, that's the problem. We benefit from eons of evolution imprinting what quality is (i.e. what correlates most with real life) genetically.
Telling a CNN about quality without using a CNN to analyze it is either circular or redundant, I'm afraid.
XecutionStyle t1_ir0dv73 wrote
How do you propose we define quality?
XecutionStyle t1_j99vve7 wrote
Reply to [D] Lack of influence in modern AI by I_like_sources
You point out very specific issues that, taken together, amount to influence.