Real_Revenue_4741 t1_jbteqca wrote
Reply to comment by science-raven in [D] Development challenges of an autonomous gardening robot using object detection and mapping. by science-raven
YOLO alone is not enough to build these robots. The difficult part of robotics is actuating from visual feedback. The approach you are describing is called "visual servoing," and on its own it will not be robust enough to actually work in the field; the core control law is sketched below. Also, the under-3K price point is quite a bit lower than what you would expect for projects like this.
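For context, here is a minimal sketch of the standard image-based visual servoing (IBVS) control law, following the classic Chaumette & Hutchinson formulation. The gain, feature points, and depth estimates are made-up illustration values; a real system also needs camera calibration, feature tracking, and safety logic on top of this.

```python
# Minimal IBVS sketch: v = -lambda * pinv(L) @ (s - s*).
# All numbers below are illustrative assumptions, not from a real robot.
import numpy as np

LAMBDA = 0.5  # proportional gain on the feature error (assumed)

def interaction_matrix(x, y, Z):
    """Image Jacobian for one normalized image point (x, y) at depth Z.
    Maps the 6-DoF camera velocity to the point's image-plane velocity."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x**2), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y**2, -x * y, -x],
    ])

def ibvs_step(features, desired, depths):
    """One control step: stack per-point Jacobians, pseudo-invert,
    and command a camera velocity proportional to the feature error."""
    error = (features - desired).reshape(-1)
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])
    return -LAMBDA * np.linalg.pinv(L) @ error  # (vx, vy, vz, wx, wy, wz)

# Toy usage: two tracked points, slightly off their desired locations.
feats = np.array([[0.10, 0.05], [-0.08, 0.02]])
goal = np.array([[0.00, 0.00], [-0.10, 0.00]])
print(ibvs_step(feats, goal, depths=[0.5, 0.5]))
```

The fragility shows up exactly in the inputs this loop takes for granted: noisy feature detections and bad depth estimates corrupt the Jacobian, which is why detection alone (YOLO) doesn't get you reliable manipulation.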
Real_Revenue_4741 t1_itdehtz wrote
Reply to comment by hellrail in [D] What things did you learn in ML theory that are, in practice, different? by 4bedoe
It should be from MIT (try copying/pasting the address linked above).
Real_Revenue_4741 t1_itddwyd wrote
Reply to comment by hellrail in [D] What things did you learn in ML theory that are, in practice, different? by 4bedoe
I believe you are looking at the wrong slides; Reddit did something weird with the hyperlink.
Real_Revenue_4741 t1_itd9ziu wrote
Reply to comment by hellrail in [D] What things did you learn in ML theory that are, in practice, different? by 4bedoe
- Even the assumptions themselves can be in contention. The point of the CPT example was to show that the assumptions that theories make often need to be revisited. Therefore, a deviation between theory and practice can, and often will, take the form of a change of assumptions about when the theory can be applied.
- http://www.ai.mit.edu/courses/6.867-f04/lectures/lecture-12-ho.pdf In the model selection slide, it clearly states to choose the model with the lowest upper bound on the expected error; the standard form of that bound is sketched below.
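For reference, the bound in question is the classic Vapnik generalization bound. The exact constants vary between texts, so treat this as a representative form rather than a transcription of the slide: with probability at least 1 − η over N i.i.d. training samples,

```latex
% Classic VC generalization bound (Vapnik); constants differ between sources.
% R(h): expected risk, R_emp(h): empirical risk on N samples,
% d: VC dimension of the hypothesis class containing h.
R(h) \le R_{\mathrm{emp}}(h)
     + \sqrt{\frac{d\left(\ln\frac{2N}{d} + 1\right) - \ln\frac{\eta}{4}}{N}}
```

Structural risk minimization then picks the hypothesis class that minimizes the right-hand side, trading empirical risk against the complexity term.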
Real_Revenue_4741 t1_itd229a wrote
Reply to comment by hellrail in [D] What things did you learn in ML theory that are, in practice, different? by 4bedoe
- This is merely a matter of semantics. When a theory doesn't extrapolate nicely in certain scenarios, you can state that the theory is incorrect. However, another way to view it is that there are still certain considerations the theory is missing. It is often difficult to know exactly when a theory can and cannot be applied. Since you come from a physics background, a good example is the C, P, and T symmetries. Up until 1957, physicists believed parity (P) symmetry applied broadly to all physical laws, and until 1964 that CP symmetry could not be violated either. Both beliefs were later disproven. You can say that the symmetry arguments were not applied correctly in those cases because they do not lie within the constraints we accept today, but that is retroactively changing the story.
- Empirical risk minimization, VC dimension, and the classic bias-variance tradeoff are taught in undergrad machine learning classes and were considered well-established theory for a while. It goes without saying that there is no way to know in advance whether a scientific theory is truly infallible or will be refuted in the future.
Real_Revenue_4741 t1_itcpow5 wrote
Reply to comment by hellrail in [D] What things did you learn in ML theory that are, in practice, different? by 4bedoe
When practice deviates from theory, this usually means that the theory does not capture the results people are getting in practice. This does not necessarily mean the theory is incorrect, but that the implications of the theory, and the inferences commonly drawn from it, don't capture the entire picture.
A big example of this was when ML theorists were trying to capture the size of the hypothesis class through its VC dimension. The common theory was that larger neural networks had more parameters and a higher VC dimension, and thus higher model variance and a higher capacity to overfit. This hypothesis was empirically falsified as researchers found that larger models tended to generalize better from data. This was described by a well-known paper which termed the phenomenon "deep double descent"; a minimal way to reproduce the effect is sketched below.
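If you want to see the effect yourself, here is a hedged toy reproduction (my own setup, not the paper's): fit random ReLU features with the minimum-norm least-squares solution and watch test error spike near the interpolation threshold p ≈ n before falling again. All hyperparameters are illustrative choices.

```python
# Toy double descent: random-feature regression with the minimum-norm fit.
import numpy as np

rng = np.random.default_rng(0)
n_train, n_test, noise = 40, 500, 0.2

def target(x):
    return np.sin(2 * np.pi * x)

x_tr = rng.uniform(-1, 1, n_train)
y_tr = target(x_tr) + noise * rng.standard_normal(n_train)
x_te = rng.uniform(-1, 1, n_test)
y_te = target(x_te)

def relu_features(x, W, b):
    # phi_j(x) = max(0, W_j * x + b_j), one column per random feature
    return np.maximum(0.0, np.outer(x, W) + b)

for p in [5, 10, 20, 40, 80, 160, 640]:  # widths straddling p = n_train
    W, b = rng.standard_normal(p), rng.standard_normal(p)
    Phi_tr, Phi_te = relu_features(x_tr, W, b), relu_features(x_te, W, b)
    # lstsq returns the minimum-norm interpolant once p > n_train
    theta, *_ = np.linalg.lstsq(Phi_tr, y_tr, rcond=None)
    test_mse = np.mean((Phi_te @ theta - y_te) ** 2)
    print(f"p={p:4d}  test MSE={test_mse:.3f}")
```

With most seeds, test error peaks near p = n_train and then falls again as p grows, which is exactly the shape classical bias-variance reasoning rules out; averaging over several seeds makes the curve cleaner.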
One last note is that I have yet to see a scientist who doesn’t understand the limitations and fallibility of scientific theories. It is usually the science “enthusiasts” who naively misinterpret their implications and make unfounded claims.
Real_Revenue_4741 t1_jbuhm8y wrote
Reply to comment by science-raven in [D] Development challenges of an autonomous gardening robot using object detection and mapping. by science-raven
In essence, "interacting with an object with an end effector" requires a lot of precision. It is more difficult than it seems to get it working on all types of weeds/plants. Weeding/digging requires a specific motion that may be difficult to accomplish without tactile feedback; it is not as simple as putting the tool at the right location (see the toy sketch below). Irrigation may be easier because it requires little interaction with the environment. It will be pretty simple to get a system that works with suboptimal performance, but this would not be enough to automate gardening without human intervention.
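To make the tactile point concrete, here is a toy 1-D sketch. The soil model, stiffness, force limit, and step size are all made-up assumptions, and real weeding would need full 6-DoF force/torque sensing; the point is only to show what a "guarded move" buys you over blind position control.

```python
# Toy 1-D digging: descend toward a target depth, but stop on a force limit.

SOIL_TOP = 0.0           # m, depth where contact begins (unknown to robot)
SOIL_STIFFNESS = 8000.0  # N/m, toy spring constant (unknown to robot)
F_MAX = 30.0             # N, force the tool/actuator can safely sustain
TARGET_DEPTH = 0.05      # m, commanded depth below the surface

def contact_force(z):
    """Toy spring model of soil: force grows linearly with penetration."""
    return SOIL_STIFFNESS * max(0.0, z - SOIL_TOP)

def guarded_descent(step=0.001):
    """Descend in small steps, stopping when measured force hits F_MAX."""
    z = -0.02  # start 2 cm above the soil
    while z < TARGET_DEPTH:
        f = contact_force(z)  # in reality: a wrist force/torque sensor
        if f >= F_MAX:
            return z, f, "stopped on force limit"
        z += step
    return z, contact_force(z), "reached target depth"

depth, force, status = guarded_descent()
print(f"tool depth {depth * 1000:.1f} mm, force {force:.1f} N, {status}")
```

In this toy model, a naive position controller commanding TARGET_DEPTH directly would have to push with roughly 8000 × 0.05 = 400 N, an order of magnitude over the limit; that gap is exactly what tactile feedback closes.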