fingin t1_j3itqrr wrote
Reply to comment by TheAxiomOfTruth in Anna Alexandrova, a philosopher of science at Cambridge, argues that a “science of happiness” is possible but requires a new approach. Measures such as “life satisfaction” or “positive emotions” can be studied rigorously. An underlying variable of “happiness” cannot. by Ma3Ke4Li3
Yeah, I think it's better to focus on minimizing unwanted outcomes, which is in line with some of the more compelling versions of utilitarianism.
fingin t1_j3immki wrote
Reply to comment by TheAxiomOfTruth in Anna Alexandrova, a philosopher of science at Cambridge, argues that a “science of happiness” is possible but requires a new approach. Measures such as “life satisfaction” or “positive emotions” can be studied rigorously. An underlying variable of “happiness” cannot. by Ma3Ke4Li3
I feel like it's the same issue, just using different words. For example, the concept of suffering extends well beyond things like physical and economic needs. It's like happiness in how difficult it is to actually assess it as its own quality. But I do see the value in minimizing these associated things rather than trying to maximize things like "life satisfaction rates"!
fingin t1_j1ghurj wrote
Reply to Hype bubble by fortunum
I think it's just a feature of internet social media (and maybe really any large-scale community platform) that discussion will lack nuance, caution, critical thinking, and basic statistics and probability. I'm sure there are some better subreddits for this.
fingin t1_j1fcumg wrote
Reply to Am I the only one on this sub that believes AI actually will bring more jobs (especially in tech)? by raylolSW
It is already creating more and more jobs. It's actually unclear what a reasonable upper bound is for the number of new job titles it could create, but the lower bound is in the hundreds. That's just the roles, not the number of jobs created. The demand for those jobs will vary, but on the whole, demand for ML skills is increasing, and now with the advent of GPT and diffusion models, I expect this will shoot up over the next year or so.
I guess with this subreddit you just have a lot of people convinced the literal singularity is here (AGI) and so the way they see it, every person is replaceable. I don't think this is going to happen in the next few years personally, maybe another decade or so.
fingin t1_j031gnr wrote
Reply to comment by Relative_Rich8699 in Character ai is blowing my mind by LevelWriting
Even GPT-4 will make silly mistakes. That's what happens when a model is trained to find probable word sequences instead of actually having knowledge of language like people do.
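As a rough illustration of what "finding probable word sequences" means (a minimal sketch assuming the Hugging Face transformers library and a small GPT-2 checkpoint, chosen purely for illustration): the model just assigns probabilities to the next token given the previous ones.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

inputs = tokenizer("The capital of France is", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits              # shape: (1, seq_len, vocab_size)

# The model's "answer" is just a probability distribution over possible next tokens.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)
for prob, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode(idx)!r}: {prob.item():.3f}")
```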
fingin t1_j03181d wrote
Reply to comment by katiecharm in Character ai is blowing my mind by LevelWriting
I asked the character.ai bot what model it used, and it told me T5. Insisted, even. Regardless of the veracity of this, all of these models use a transformer-based architecture, with the improvement between versions coming largely from more parameters (and correspondingly larger, higher-quality training data sets). Crazy to think that in two months we might be at GPT-4 level and laugh about the tech we are blown away by today.
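For a sense of what "more parameters" means in practice, here's a minimal sketch (assuming the Hugging Face transformers library; the checkpoints listed are just examples) that compares a few public models by simply counting parameters:

```python
from transformers import AutoModel

# Parameter count is the crudest but most common way to compare transformer "sizes".
for name in ["t5-small", "t5-base", "gpt2"]:
    model = AutoModel.from_pretrained(name)
    n_params = sum(p.numel() for p in model.parameters())
    print(f"{name}: {n_params / 1e6:.0f}M parameters")
```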
fingin t1_izs4xds wrote
Reply to comment by Sieventer in This subreddit has a pretty serious anti-capitalist bias by Sieventer
Who lobbies politicians, manipulates foreign policy, and funds the campaigns to get them in power in the first place...
fingin t1_izqhmq3 wrote
Reply to comment by tracertong3229 in This subreddit has a pretty serious anti-capitalist bias by Sieventer
I've found Chomsky usually just dismantles people's use of the terms, such as socialism, as a way to illustrate how pervasive propaganda is. I am curious to check out Fisher
fingin t1_izqhfgb wrote
How is a politician influencing things any different to a private company doing it? Why is it better?
fingin t1_izqgvsh wrote
Reply to comment by ryusan8989 in This subreddit has a pretty serious anti-capitalist bias by Sieventer
You are misusing the term capitalism. What you are referring to is closer to the idea of a "market economy". There is also nothing mutually exclusive about competitive businesses and socialism.
fingin t1_izqglot wrote
Reply to comment by threeeyesthreeminds in This subreddit has a pretty serious anti-capitalist bias by Sieventer
I don't agree that it's inherently evil, but it is, at the very least, usually unfair in theory and always unfair in practice.
fingin t1_izqggor wrote
Reply to comment by MattSpokeLoud in This subreddit has a pretty serious anti-capitalist bias by Sieventer
Yeah, and if you examine these infamous examples of "failed socialism", you usually just see that most people simply don't have a great grasp of history and political discourse. For example, people often point to the Soviet Union as an example of socialism's failures: the Soviet Union, where people had no control over the means of production and were repressed under a fascist police state.
fingin t1_izqfcua wrote
Reply to comment by numberbruncher in This subreddit has a pretty serious anti-capitalist bias by Sieventer
Sorry, what proof do you have that it's the best system right now? Can you give me an example of a successful capitalist country? Even the US can hardly be said to be a "capitalist" country (see government subsidies, the Federal Reserve, Social Security, Medicaid). And last I checked, the US doesn't have such a great system, if wealth inequality, health, and violent crime rates are important to you. Even if you do think the US has the best system, that conveniently ignores the likes of other "capitalist" (capitalist-leaning) countries like Brazil.
So again, what capitalist country has a successful system? Or are you just confusing the theory of capitalism with other concepts like a market economy?
fingin t1_izqecpe wrote
Reply to comment by tracertong3229 in This subreddit has a pretty serious anti-capitalist bias by Sieventer
Can recommend "How the World Works" and "Manufacturing Consent" by Noam Chomsky, and "Justice: What's the Right Thing to Do?" by Michael Sandel.
fingin OP t1_iypzw2v wrote
Reply to comment by [deleted] in Idea that AI requires samples where a human brain doesn't by fingin
Please. read. posts. before. you. reply. to. them
fingin t1_iynncyq wrote
Reply to Is my career soon to be nonexistent? by apyrexvision
"I can see the advancements as augmentation and will assist with making me more effective for 10-15 years. From my point of view it'll be like being a manager of 5 or developers which I'll maintain, support, and utilize. "
This is apt. You will learn new skills with new tools, combining strengths from different ones. You can leverage other disciplines to produce higher quality or novel results, be it in art, research or work. Machine learning applications are an interface to powerful expressions of language and visualization. In the future it could go beyond, but humans will also be doing some pretty amazing things with access to this interface, so let's not be too fearful just yet.
fingin t1_iutkgi6 wrote
Reply to Scientists Increasingly Can’t Explain How AI Works - AI researchers are warning developers to focus more on how and why a system produces certain results than the fact that the system can accurately and rapidly produce them. by Kujo17
It's quite a bold claim, as scientists and ML engineers are also working on making simpler models (for example, compare GPT-Neo to GPT-3, or Stable Diffusion to DALL-E 2), building interpretability methods (such as SHAP, based on Shapley values), and pushing forward systems that use covariates extracted from models as a source of insight for decision-making, instead of using the algorithm itself to make the decision. Who knows which approach will be dominant when "true" AI emerges.
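For context on what an interpretability method like SHAP does in practice, here's a rough sketch (assuming the shap and scikit-learn libraries; the dataset and model are just placeholders): rather than taking a prediction at face value, SHAP values show how much each feature pushed that prediction up or down.

```python
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

# Fit any model you want to explain (a tree ensemble keeps the example simple).
X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer computes per-feature contributions to each individual prediction.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[:100])

# Summary plot: which features drive the predictions, and in which direction.
shap.summary_plot(shap_values, X.iloc[:100])
```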
fingin t1_iutjybb wrote
Reply to comment by sumane12 in Scientists Increasingly Can’t Explain How AI Works - AI researchers are warning developers to focus more on how and why a system produces certain results than the fact that the system can accurately and rapidly produce them. by Kujo17
Good points! Yes, AI models are prone to racial and gender bias, but the presence of bias is largely due to human behaviours leading up to the model's creation. As above, so below.
fingin t1_iutjd7v wrote
Reply to comment by ChurchOfTheHolyGays in Scientists Increasingly Can’t Explain How AI Works - AI researchers are warning developers to focus more on how and why a system produces certain results than the fact that the system can accurately and rapidly produce them. by Kujo17
It depends what you mean by AI. If you mean the state-of-the-art technology most people are referring to as AI (i.e., deep learning models), then we might want to bound the limits of AI because we know how sensitive it is to "mistakes" such as data and concept drift.
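As a concrete example of what watching for data drift can look like (an illustrative sketch assuming scipy and numpy; the feature distributions here are synthetic), a model can be flagged when the data it sees in production no longer matches the data it was trained on:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
train_feature = rng.normal(loc=0.0, scale=1.0, size=5000)  # distribution at training time
live_feature = rng.normal(loc=0.4, scale=1.0, size=5000)   # distribution seen in production

# A two-sample Kolmogorov-Smirnov test flags when the two distributions diverge.
stat, p_value = ks_2samp(train_feature, live_feature)
if p_value < 0.01:
    print(f"Possible data drift (KS statistic = {stat:.3f}): re-validate or retrain the model.")
```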
On the other hand, if you mean some conceptual AI that is different from current technology in a meaningful way, then I think I see your point. The problem with the discourse today is that no distinction is made between these two things: one exists today, and the other could appear anywhere from months to centuries from now.
fingin t1_j68hdlp wrote
Reply to comment by Talkat in Google not releasing MusicLM by Sieventer
Not that I necessarily agree with OP but: