Viewing a single comment thread. View all comments

hellrail t1_itdde95 wrote

  1. I disagree. Give me an example where the assumption is just a matter of semantics.

I claim that every correct (and by that I mean scientific) formulation of assumptions can be abstracted and formalized, and even incorporated into an automated algorithm yielding the answer whether the assumption is true or not, w.r.t. the theory's assumptions.

Proof: take an arbitrary assumption formulation and convert it to a mathematical formulation. Then use Gödel numbers to formalize it.

If you now say that the conversion to a mathematical formulation can be ambiguous, I would ask you to clearly state the assumptions in a language that is suited for a scientific discussion.
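To make the Gödel-number step concrete, here is a minimal sketch (my own illustration with a toy symbol table, not a full arithmetization): encode a formula as a product of primes raised to symbol codes, which is injective by unique factorization.

```python
# Minimal sketch of Goedel numbering (illustrative only): encode a
# formula, given as a sequence of symbol codes, as the product
# p_1^c_1 * p_2^c_2 * ... over the first primes.

def primes(n):
    """Return the first n primes by trial division."""
    found = []
    candidate = 2
    while len(found) < n:
        if all(candidate % p for p in found):
            found.append(candidate)
        candidate += 1
    return found

# Hypothetical symbol table for a tiny formal language.
SYMBOLS = {"0": 1, "s": 2, "=": 3, "+": 4, "(": 5, ")": 6}

def goedel_number(formula):
    """Map a string over SYMBOLS to a unique natural number."""
    codes = [SYMBOLS[ch] for ch in formula]
    n = 1
    for p, c in zip(primes(len(codes)), codes):
        n *= p ** c
    return n

# Distinct formulas get distinct numbers (unique factorization).
print(goedel_number("0=0"))    # 2^1 * 3^3 * 5^1 = 270
print(goedel_number("s0=s0"))
```

Once a statement lives in such a formal language, whether it follows from a theory's axioms is a purely mechanical question, which is the point being made above.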

  2. On the model selection slide, I see it's just stated that model/hyperparameter optimization aims at selecting optimal parameters. That's of course trivially true.

If you talk about the subsequent slides, I see they introduce one idea to get some guidance in finding the optimal settings, called the Bayesian Occam's razor. Occam's razor is a HEURISTIC. That is, so to say, the opposite of a rule/theory.

A property of a heuristic is explicitly that it does not guarantee to yield a true or optimal solution. A heuristic can by definition not be wrong or correct. It's a heuristic: a strategy that has worked for many people in the past and might fail in many cases. A heuristic does not claim to provide a discovered rule or anything similar.

Now on the last slide they even address the drawbacks of this heuristic. What more do you expect?

As I expected, this is not an example of a theory stating something that deviates from reality. It's just a HEURISTIC strategy they give you, for when you want to start with hyperparameter finding but have no clue how. That's when you fall back on heuristics (please look up heuristics on Wikipedia), and I bet this proposed heuristic is not the worst you can do even today, when more knowledge has been acquired.
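For what it's worth, the Bayesian Occam's razor heuristic can be sketched with toy numbers (my own illustration, not from the slides): a complex model spreads its marginal likelihood over many possible datasets, so a simple model that also explains the observed data gets higher evidence.

```python
# Toy sketch of the Bayesian Occam's razor (illustrative numbers only).
# Suppose there are 10 possible datasets. The simple model can only
# generate the first 2 and puts probability 1/2 on each; the complex
# model can generate all 10 and puts 1/10 on each.
simple_evidence = {d: 0.5 for d in range(2)}
complex_evidence = {d: 0.1 for d in range(10)}

observed = 1  # a dataset both models can explain
print(simple_evidence.get(observed, 0.0))   # 0.5
print(complex_evidence.get(observed, 0.0))  # 0.1
# The simple model wins here; but for observed = 7 only the complex
# model assigns nonzero evidence. A useful heuristic, not a guarantee.
```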

0

Real_Revenue_4741 t1_itddwyd wrote

I believe you are looking at the wrong slides. Reddit did something weird with the hyperlink

1

hellrail t1_itde28m wrote

Then please point me to the right slide by giving the slide number.

1

Real_Revenue_4741 t1_itdehtz wrote

It should be from MIT (try copying/pasting the address linked above)

1

hellrail t1_itewyaw wrote

One thing I must add regarding the topic of the presentation as "established knowledge".

The lecture you quoted is lecture number 12. It is embedded in a course; there are of course lectures 11, 10, 9, etc. If you check these, which are also accessible by slightly modifying the given link, you see the context of this lecture. Specifically, a bunch of classifiers are explicitly introduced, and the VC-dimension theory in lecture 12 is still valid for these. The course does not address deep networks yet.

So it's a bit unfair to say this lecture teaches you a theory that deviates. It does not deviate for the classifiers introduced there.

1

hellrail t1_itdlgvb wrote

Ok found the right one.

Well, generally I must say: good example. I accept it at least as a very interesting example to talk about, worth mentioning in this context.

Nevertheless, it's still valid for all non-CNN, non-ResNet, non-transformer models.

Taking into account that it's based on an old theory (prior to 1990), when these deep networks did not exist yet, one might accept its limitedness (as it doesn't try to model effects taking place during the learning of such complex deep models, which wasn't a topic back then).

So if I were really mean, I would say you can't expect a theory to make predictions about entities (in this case modern deep networks) that had not been invented yet. One could say that the VC-dimension theory's assumptions include the assumption of a "perfect" learning procedure (and therefore exclude any dynamic effects of the learning procedure), which is still valid for decision trees, random forests, SVMs, etc., which have their relevance for many problems.

But since I'm not that mean, I admit that these observations on modern networks do undermine the practicability of the VC-dimension view for modern deep networks of the mentioned types, and that it must have been a moderate surprise before anyone had tried whether VC dimensions work for CNNs/ResNets/transformers. Therefore: good example.
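To illustrate why the VC view says little about huge modern networks, here is a sketch of the classic VC confidence term (a Vapnik-style bound; the exact constants vary by textbook, and this is my own illustration, not from the lecture):

```python
import math

# Classic VC generalization bound (one common form): with probability
# at least 1 - delta,
#   test_error <= train_error + sqrt((h*(ln(2N/h) + 1) + ln(4/delta)) / N)
# where h is the VC dimension and N the number of training samples.

def vc_confidence_term(h, n, delta=0.05):
    """Capacity term of the VC bound for VC dimension h and n samples."""
    return math.sqrt((h * (math.log(2 * n / h) + 1) + math.log(4 / delta)) / n)

# For a fixed sample size, a larger VC dimension loosens the bound;
# once h approaches n the bound exceeds 1 and becomes vacuous, which is
# the situation for heavily overparameterized deep networks.
print(vc_confidence_term(h=10, n=10_000))
print(vc_confidence_term(h=5_000, n=10_000))
```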

1