Submitted by I_like_sources t3_1163xou in MachineLearning
[removed]
Do you agree that there's a lack of influence, or do you see many models where that doesn't apply, models that give you a great deal of tweaking ability?
If the latter, name them so we can discuss them.
I’m not sure I’m following you. Are you concerned that machine learning models are not easily customizable enough?
Is your trouble with the fundamental concept of transfer learning, that data selection and preparation is difficult, that convolutional neural networks are “black boxes”, or something else?
Good questions. Machine learning models are usually black boxes that either work as expected or don't.
There is no detailed tweaking possible, only retraining. And the specification for good training data is vague at best.
That causes unnecessary frustration and wasted time; it's the blind leading the blind.
The attitude is "offer more and more data and hope the AI will figure things out; if not, offer even more." I am sure I am not the only one who sees fault in this approach.
Tweaking a CNN without retraining makes it sound like you want a no-code option for your machine learning.
Totally agree that model interpretability is a challenge, but there is a whole subsection of our field working on that. The fundamental design of deep learning sort of precludes what you’re talking about - at least given our current understanding of model interpretation. At best, a model may be trained to give options on certain aspects based on its input (we see this all the time), but that doesn’t sound like what you want. It sounds like you want to be able to target specific and arbitrary components of an output and intuitively modify the weights of all nodes contributing to that part of the output - presumably in isolation.
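To give a sense of what that interpretability subfield actually produces, here is a minimal gradient-saliency sketch in PyTorch (the ResNet-18 and the random input are placeholders, not anything specific to this thread): it asks which input pixels most influence a chosen output logit.

```python
import torch
import torchvision.models as models

# Placeholder model and input; any differentiable classifier works the same way.
model = models.resnet18(weights="IMAGENET1K_V1").eval()
image = torch.rand(1, 3, 224, 224, requires_grad=True)

logits = model(image)
target = logits.argmax(dim=1).item()  # the class we want to explain
logits[0, target].backward()

# Saliency: how strongly each input pixel influenced the chosen logit.
saliency = image.grad.abs().max(dim=1).values  # shape (1, 224, 224)
```

Note that this answers "which inputs mattered", which is still a long way from "edit one aspect of the output in isolation".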
I think your challenge might lie with a fundamental lack of understanding of how these models actually work. I don’t mean that as a dig - they’re complicated. I just want to help bring you to a place of understanding about why the field is how you’re experiencing it.
Not a huge fan of massive edits to original posts after people have started responding. Your newly added recommendations put an onerous responsibility on any open source authors who might make their work public as a hobby rather than a career.
What are your contributions to enabling users to customize results without retraining?
>Not a huge fan of massive edits to original posts after people have started responding.
I am not here to make you happy.
I think I’m gonna have to respectfully disagree on a lot of this. You’re right that it largely comes down to the training data used. The thing that jumps out at me in the examples you give and in your point (1) is that while you need a large amount of training data, especially for networks as large as those you suggest, you also want to avoid overfitting your model to that data in the pursuit of accuracy, reliability, or whatever metric you choose to determine how “good” your model is against some ground truth.
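As a toy illustration of that trade-off, here is a self-contained scikit-learn sketch on synthetic data (nothing specific to the networks discussed here): as capacity grows, training accuracy climbs while held-out accuracy stalls or falls.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for depth in (2, 5, None):  # None lets the tree grow until it memorizes the data
    clf = DecisionTreeClassifier(max_depth=depth, random_state=0).fit(X_tr, y_tr)
    print(depth, round(clf.score(X_tr, y_tr), 2), round(clf.score(X_te, y_te), 2))
```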
And while on the surface NNs can definitely appear to be “black boxes” whose structure and workings we can’t accurately describe, that’s largely untrue. In fact, I would claim that it’s precisely because we can design and model NN structure (in terms of number of layers, connectedness between them, inputs, weights, biases, activation functions, etc.) and choose a structure that lends itself best to a given purpose that the field has come as far as it has, and that the NNs in your examples exist in the first place.
Sorry about the rant… I didn’t realize I get so passionate about NNs.
Neural networks are by design black boxes. You get great performance in exchange for explainability. That does not mean, though, that you have no control over the result.
> Example Stable Diffusion. You don't like what the eyes look like, yet you don't know how to make them more realistic.
ControlNet lets you guide image generation: https://github.com/lllyasviel/ControlNet.
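For what it's worth, conditioning on an edge map with the diffusers library looks roughly like this (a sketch; the ControlNet checkpoint follows the repo above, but the reference image path is a placeholder):

```python
import cv2
import numpy as np
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

# An edge map extracted from a reference image steers the composition.
ref = np.array(load_image("reference.png"))  # placeholder path
edges = cv2.Canny(ref, 100, 200)[:, :, None]
canny = Image.fromarray(np.concatenate([edges] * 3, axis=2))

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet,
    torch_dtype=torch.float16).to("cuda")

out = pipe("portrait photo, sharp realistic eyes", image=canny).images[0]
```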
> Example NLP. The chatbot does not give you logical answers? Try another random model.
Or give it some examples and ask it to reason step by step. Alternatively, fine-tune it on examples. You can also teach an LLM to use external tools, thus avoiding using the LLM for reasoning at all.
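Concretely, the "examples plus step-by-step" idea is just prompt construction; a sketch (the send() call stands in for whatever LLM interface you use, and the worked examples are made up):

```python
# Few-shot, chain-of-thought prompt assembled as plain text.
FEW_SHOT = """Q: I have 3 apples and buy 2 more. How many apples do I have?
A: Let's think step by step. I start with 3 apples; buying 2 more gives 3 + 2 = 5.
The answer is 5.

Q: {question}
A: Let's think step by step."""

prompt = FEW_SHOT.format(
    question="A train leaves at 9:00 and the trip takes 90 minutes. "
             "When does it arrive?")
# response = send(prompt)  # hypothetical call to your chat model of choice
```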
It seems that someone is upset about not having a free open-box tool and expects others to do more work for free for their own purposes.
It's not about wanting a free open-box tool. It's about the lack of transparency and accountability in the AI community. Developers need to take responsibility for their creations and provide support and feedback to users, particularly in critical applications like healthcare or finance. By providing more transparency and support, we can improve the quality and reliability of AI systems, which benefits everyone in the field.
I agree with you somewhat, but this isn't Windows 11. We're working on some of the more experimental tech rather than stable tech that has been out for years. All of the things you are asking for take time to implement, and without predefined systems for how to create these interactions, it's pretty difficult to do.
“Critical applications” will be paying for the type of support you expect, so you don’t need to worry about them. Taking feedback, FAQs, and documenting code updates are completely un-novel ideas that exist in industry SaaS products today, and they will exist for ML models. They just require you to, you know, actually pay for the developers’ time.
[removed]
These examples are just wrong, OP. For SD (example 1) there are multiple avenues for fine-tuning via model updates. I guess I think your base premise is incorrect.
Yes this reads like an undergrad or someone entirely outside the field who thinks that no one has thought of these concepts before. They are trying a little Cunningham’s Law by stating that nothing is being done in these areas and hoping that someone provides the correct information rather than simply ask the question of what is being done to address these issues.
If you don’t like the trained matrix, OP, you can go train your own
You can fine-tune Stable Diffusion, TTS, and NLP models. You can't expect authors to tend to users' every need; they gave you the tool and have no obligation to teach you how to use it. Yes, some models can't be fine-tuned, but in 99% of cases there is a different one you can fine-tune.
If you really don't like what's out there, make your own; the papers exist.
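For the NLP case specifically, fine-tuning an existing model instead of retraining from scratch can be as short as this sketch with Hugging Face transformers (the model and dataset are just examples, and the subset keeps the demo cheap):

```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

tok = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2)

ds = load_dataset("imdb").map(lambda b: tok(b["text"], truncation=True),
                              batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=1),
    train_dataset=ds["train"].shuffle(seed=0).select(range(2000)),
    tokenizer=tok,  # enables dynamic padding via the default collator
)
trainer.train()
```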
I empathize, but I'd be curious what you think about this line of thought: http://www.incompleteideas.net/IncIdeas/BitterLesson.html
The two are not really related. You seem to argue that you should let AI do its thing, what it's good at, without interfering; yet keep in mind that the results are for humans, not for computers.
I disagree. They are completely related, and directly to the black box problem.
I wish I had found this article a month ago, because it sums up a lot of the "AIs are unknowable" nonsense.
Being a black box is not an inherent quality of an AI; it's an inherent quality of a badly designed AI. Eventually, we will have methods that let us query why a particular result was given.
They are unknowable because we have not designed them to be knowable. The tech is in its infancy. Give it time.
[removed]
What is your deal? Why are you being such a dick to everyone? It seems like you just want to yell at people, not have a discussion.
They just want to complain about free products not providing the level of support they expect while probably not contributing much, if anything, themselves.
[removed]
> You seem to argue that you should let AI do its thing, what it's good at, without interfering
Not necessarily; it's just that we have seen good results by letting compute dominate over human interference. If other approaches worked better, we would be using them.
Maybe in a parallel universe creative approaches were valued over quick results, human interference counted for more, and that eventually produced far better results, but not in this one.
You point to very specific issues, and if you count all of them up, that amounts to influence.
OK, what's your point? How do you propose to fix this?