fingin t1_j68hdlp wrote

Reply to comment by Talkat in Google not releasing MusicLM by Sieventer

Not that I necessarily agree with OP but:

  1. Papers are great promotion. Think of all the buzz that has been created for Google MusicLM just by releasing the paper. Also, now that the paper is out, problems or limitations can be addressed by other researchers, which will ultimately help Google. Really, the information/theory behind the model is not that important compared to an actual product or tool being served.
  2. Agreed!
10

fingin t1_j3immki wrote

I feel like it's the same issue, just using different words. For example, the concept of suffering extends well beyond things like physical and economic needs. Like happiness, it's difficult to assess as a quality in its own right. But I do see the value in minimizing these associated things rather than trying to maximize things like "life satisfaction rates"!

29

fingin t1_j1ghurj wrote

Reply to Hype bubble by fortunum

I think it's just a feature of Internet social media (and maybe of any large-scale community platform) that discussion will lack nuance, caution, critical thinking, and basic statistics and probability. I'm sure there are some better subreddits for this.

5

fingin t1_j1fcumg wrote

It is already creating more and more jobs. It's actually unclear what a reasonable upper bound on the number of new job titles it could create would be, but the lower bound is in the hundreds. And that's just distinct roles, not the number of jobs created. Demand for those jobs will vary, but on the whole, demand for ML skills is increasing, and now with the advent of GPT and diffusion models, I expect it to shoot up over the next year or so.

I guess with this subreddit you just have a lot of people convinced the literal singularity (AGI) is here, so the way they see it, every person is replaceable. Personally, I don't think that's going to happen in the next few years; maybe in another decade or so.

0

fingin t1_j03181d wrote

I asked the character.ai bot what model it used, and it told me T5. It even insisted. Regardless of the veracity of that, all of these models use a transformer-based architecture, with improvements between versions coming from more parameters (and correspondingly larger, higher-quality training datasets). Crazy to think that in two months we might be at GPT-4 level and laughing about the tech that blows us away today.
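
If it helps to ground the "more parameters" point, here's a rough sketch of comparing sizes across versions of one model family. It assumes the Hugging Face `transformers` library (with PyTorch) is installed; the public T5 checkpoints are just an illustrative choice:

```python
# Rough sketch: compare parameter counts across sizes of one model family.
# Assumes the Hugging Face `transformers` library and PyTorch are installed;
# the T5 checkpoints are an illustrative choice, not a claim about character.ai.
from transformers import AutoModel

for name in ["t5-small", "t5-base", "t5-large"]:
    model = AutoModel.from_pretrained(name)
    # Sum the element counts of every weight tensor in the model.
    n_params = sum(p.numel() for p in model.parameters())
    print(f"{name}: {n_params / 1e6:.0f}M parameters")
```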

1

fingin t1_izqggor wrote

Yeah, and if you examine these infamous examples of "failed socialism", you usually just see that most people simply don't have a great grasp of history and political discourse. For example, people often point to the Soviet Union as an example of socialism's failures: the Soviet Union, where people had no control over the means of production and were repressed under a fascist police state.

8

fingin t1_izqfcua wrote

Sorry, what proof do you have that it's the best system right now? Can you give me an example of a successful capitalist country? Even the US can hardly be said to be a "capitalist" country (see government subsidies, the Federal Reserve, Social Security, Medicaid). And last I checked, the US doesn't have such a great system, if wealth inequality, health, and violent crime rates are important to you. Even if you do think the US has the best system, that conveniently ignores other "capitalist" (capitalist-leaning) countries like Brazil.

So again, what capitalist country has a successful system? Or are you just confusing the theory of capitalism with other concepts, like a market economy?

5

fingin t1_iynncyq wrote

"I can see the advancements as augmentation and will assist with making me more effective for 10-15 years. From my point of view it'll be like being a manager of 5 or developers which I'll maintain, support, and utilize. "

This is apt. You will learn new skills with new tools, combining strengths from different ones. You can leverage other disciplines to produce higher-quality or novel results, be it in art, research, or work. Machine learning applications are an interface to powerful expressions of language and visualization. In the future it could go beyond that, but humans will also be doing some pretty amazing things with access to this interface, so let's not be too fearful just yet.

1

fingin t1_iutkgi6 wrote

It's quite a bold claim, as scientists and ML engineers are also working on making simpler models (for example, compare GPT-Neo to GPT-3, or Stable Diffusion to DALL-E 2), building interpretability methods (such as SHAP), and pushing forward systems that use covariates extracted from models as a source of insight for decision-making, instead of having the algorithm itself make the decision. Who knows which approach will be dominant when "true" AI emerges.
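
As a concrete example of the interpretability side, here's a minimal sketch of what SHAP usage looks like. It assumes the `shap`, `xgboost`, and `scikit-learn` packages; the dataset and model are chosen purely for illustration:

```python
# Minimal sketch: explaining a model's predictions with SHAP.
# Assumes `shap`, `xgboost`, and `scikit-learn` are installed; the
# dataset and model are illustrative choices, not a recommendation.
import shap
import xgboost
from sklearn.datasets import fetch_california_housing

# Train a small model on a public dataset.
X, y = fetch_california_housing(return_X_y=True, as_frame=True)
model = xgboost.XGBRegressor(n_estimators=50).fit(X, y)

# Each SHAP value is one feature's contribution to pushing a single
# prediction away from the dataset's average prediction.
explainer = shap.Explainer(model)
shap_values = explainer(X)
shap.plots.bar(shap_values)  # global view of feature importance
```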

2

fingin t1_iutjybb wrote

Good points! Yes, AI models are prone to racial and gender bias, but the presence of bias is largely due to human behaviours leading up to the model's creation. As above, so below.

5

fingin t1_iutjd7v wrote

It depends what you mean by AI. If you mean the state-of-the-art technology most people are referring to as AI (i.e. deep learning models), then we might want to bound the limits of AI, because we know how sensitive it is to "mistakes" such as data and concept drift.
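
For what "sensitive to data drift" can mean in practice, here's a rough sketch of one common monitoring check, a two-sample Kolmogorov-Smirnov test on a single feature. It assumes `numpy` and `scipy`; the synthetic distributions and the threshold are made up for illustration:

```python
# Rough sketch: flagging data drift on one feature with a two-sample
# Kolmogorov-Smirnov test. Assumes numpy and scipy are installed; the
# synthetic distributions and the 0.01 threshold are illustrative only.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
train_feature = rng.normal(loc=0.0, scale=1.0, size=5_000)  # training-time data
live_feature = rng.normal(loc=0.4, scale=1.0, size=5_000)   # shifted production data

# Low p-value: the live distribution no longer matches training data.
stat, p_value = ks_2samp(train_feature, live_feature)
if p_value < 0.01:
    print(f"Drift detected (KS statistic = {stat:.3f}); consider retraining.")
```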

On the other hand, if you mean some conceptual AI that differs from current technology in a meaningful way, then I think I see your point. The problem with the discourse today is that no distinction is made between these two things: one exists today, and the other could appear anywhere from months to centuries from now.

2