reconditedreams
reconditedreams t1_j6vsju1 wrote
Reply to comment by starstruckmon in Former Instagram Co-Founders Launch AI-Powered Personalized News App, Artifact by Flaky_Preparation_50
I don't think this is possible. There is some amount of inherent bias in all media, including news articles. What information is deemed relevant enough to report on in the first place, what details are included in a story, the wording and grammar used to describe the events, and so on: all of these are influenced by normative factors.
Our conception of the "objective facts" will always necessarily be subjective.
reconditedreams t1_j285ng2 wrote
Reply to comment by Mental-Swordfish7129 in Is AGI really achievable? by Calm_Bonus_6464
This is a good point. I would recommend that anyone interested in this difference between subjective understanding and functional understanding read "Blindsight" by Peter Watts. It's an interesting hard sci-fi novel which explores the nature of sentience and subjective awareness.
reconditedreams t1_j285a5h wrote
Reply to comment by Mental-Swordfish7129 in Is AGI really achievable? by Calm_Bonus_6464
Yeah, this is my entire point. I often see people mistake the metaphysics question for the engineering question. It doesn't really matter if we understand the metaphysics of human qualia, only that we understand the statistical relationship between human input data (sensory intake) and human output data (behavior/abilities).
It's no more necessary for ML engineers to understand the ontology of subjective experience than it is for a dog catching a ball in midair to have a formal mathematical understanding of Newton's laws of motion. The dog only needs to know how to jump towards the ball and put it in its mouth. How the calculus gets done isn't really important.
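To make that input-output framing concrete, here's a purely illustrative sketch (the data, the model choice, and the names are all made up for this example; it's not anything Midjourney or any real lab actually does). It just fits a model to observed (sensory input, behavior) pairs, with no account of what it's like to produce that behavior:

```python
# Hypothetical sketch: "functional understanding" as a learned input-output mapping.
# The toy model below never represents why a behavior happens; it only fits
# observed (sensory input -> behavior) pairs, like the dog catching the ball.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Made-up stand-in data: 1000 "sensory" observations (10-dim vectors) and the
# "behavior" they produced (here, an arbitrary noisy nonlinear function).
observations = rng.normal(size=(1000, 10))
behavior = np.sin(observations.sum(axis=1)) + 0.1 * rng.normal(size=1000)

# Fit the statistical relationship between input and output directly.
model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
model.fit(observations, behavior)

# The model now predicts behavior from new inputs without any account of
# "what it is like" to produce that behavior.
new_input = rng.normal(size=(1, 10))
print(model.predict(new_input))
```

The point isn't the particular model; it's that the fit only ever touches the statistical relationship between inputs and outputs.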
Midjourney probably isn't capable of feeling sad, but it certainly seems to understand how the concept of "sadness" corresponds to pixels on a screen. Computers may or may not be capable of sentience in the same way humans are, but there's no reason they can't understand human creativity on a functional level.
reconditedreams t1_j2831tc wrote
Reply to Is AGI really achievable? by Calm_Bonus_6464
It's a fallacy to say that we need to fully understand the processes underlying human consciousness in order to emulate its function closely enough to be practically indistinguishable from the real thing.
Obviously computers are nothing like a human brain; they're two completely different kinds of physical systems. One is made of logic gates and silicon, the other of carbon and neurons.
Computers are also nothing like the weather, but that doesn't mean we can't use them to simulate the weather closely enough to be practically useful for predicting storm fronts and temperatures.
We don't need to fully understand how human consciousness works in order to have AGI. We only need to quantify the function of human consciousness closely enough to practically mimic a human, i.e. to develop a decent statistical understanding of the input-output relationship of human consciousness.
It is reasonable to predict that modern digital computers will never be able to truly simulate the full depth of human consciousness, because doing so will require hardware more similar to the brain.
It is not reasonable to say that they will never come close to accurately predicting and recreating the output of human consciousness. This is frankly a ludicrous claim. The brain is a deterministic physical system and there is nothing magical about its output. There is no inherent reason why human behavior cannot be modelled algorithmically using computers.
The hard problem of consciousness, the philosophical zombie, the Chinese room, etc.: these are all totally irrelevant to the practical/engineering problem of AGI. You shouldn't mistake the philosophical problem for the engineering problem. Whether an AGI running on a digital computer is truly capable of possessing qualia and subjective mental states is a problem for philosophers to deal with. Whether an AGI running on a digital computer can accurately emulate the output of the human brain to a precise degree is an altogether different question.
reconditedreams t1_j27xtwm wrote
Reply to comment by CandyCoatedHrtShapes in How are we feeling about a possible UBI? by theshadowturtle
I 100% saw that coming. He was always a rightwing shill in disguise.
reconditedreams t1_j27xrt7 wrote
Reply to comment by [deleted] in How are we feeling about a possible UBI? by theshadowturtle
It's definitely not happening now. All we have are a handful of research stations which require an absurd amount of resources flown up from Earth. We don't have anything even close to sustainable space operations like fuel synthesis, mining, or factories in space.
reconditedreams t1_j27x8hs wrote
Reply to comment by Sashinii in When will AI make a movie for me? by NotANumber13
Saying text to video will be perfected in 2026 seems wildly optimistic to me.
Maybe text to image will be nearly perfected by then, but text to video is an entirely different ballpark.
reconditedreams t1_j27rrdz wrote
Reply to comment by unmellowfellow in An A.I. Pioneer on What We Should Really Fear by jormungandrsjig
You're looking at it the wrong way. AI will ultimately lead to the downfall of capitalism. The best thing any anti-capitalist can do to speed up the demise of capitalism and the emergence of a new system is to encourage automation and AI development.
reconditedreams t1_j1surwv wrote
Reply to comment by GrayBox1313 in What will cheap available AI-generated images lead to? Video? Media? Entertainment? by Hall_Pitiful
You don't need actual sentience to emulate the functional output of sentience to a precise enough degree, any more than you need actual Windows to emulate the functional output of the Windows OS to a precise enough degree.
There's no reason in principle why the output of human sentience can't be emulated to a close enough degree to be almost indistinguishable from the real thing.
The actual hard problem of consciousness is totally irrelevant to the practical question of whether the function of consciousness can be emulated.
reconditedreams t1_j1ss8p6 wrote
Reply to comment by GrayBox1313 in What will cheap available AI-generated images lead to? Video? Media? Entertainment? by Hall_Pitiful
The whole autistic AI trope is completely unrealistic; art AIs like Midjourney prove that emotional expression can, in principle, be captured algorithmically.
I think you're overestimating the degree to which "real" sentience and "real" self-awareness are necessary to emulate the function of sentience and self-awareness to a sufficiently precise degree.
reconditedreams t1_j1somom wrote
I would tread very carefully with how you think about "bias" and objectivity.
If you synthesize points of view from a variety of different news sources (left-leaning, mainstream, right-leaning) and leave out stories which are only being reported by one or two sources, that could still lead to a strong centrist/establishment bias. Being biased towards the middle is still a bias.
For example, after 9/11 the majority of news sources in the US were (knowingly or not) pushing US government propaganda about WMDs. A news AI trained to provide a synthesis of several different sources might've done the exact same thing if that's what most of the data points included in the synthesis were doing.
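As a toy illustration of that failure mode (the outlets and claims below are invented, not real data or any actual product's algorithm): if the synthesis rule is roughly "keep whatever more than one outlet reports", then a claim most outlets repeat sails through and the lone dissenter gets filtered out.

```python
# Hypothetical toy example: a "synthesis" that only keeps claims reported by
# several sources inherits whatever the majority happens to be saying.
from collections import Counter

coverage = {
    "source_a": ["wmd_claim", "troop_movements"],
    "source_b": ["wmd_claim", "un_inspections"],
    "source_c": ["wmd_claim", "troop_movements"],
    "source_d": ["wmd_skepticism"],  # lone dissenting outlet
}

# Count how many outlets carry each claim.
claim_counts = Counter(claim for claims in coverage.values() for claim in claims)

# Keep only claims carried by more than one source, i.e. "synthesize the middle".
synthesis = [claim for claim, n in claim_counts.items() if n > 1]
print(synthesis)  # ['wmd_claim', 'troop_movements'] -- the dissent is filtered out
```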
It's arguably better for a news source to declare biases openly than it is for a news source to pretend to be "bias free".
Rather than focus on being neutral, I think it's better to focus on factual accuracy and complex, in-depth analysis. The problem with CNN and Fox News isn't so much that they're biased as that they're often sloppy with fact checking and push oversimplified clickbait narratives.
There are "biased" sources which still consistently produce complex, factually accurate coverage and analysis of events, like Jacobin on the left or the Economist on the right.
reconditedreams t1_j70eg6z wrote
Reply to How long do you guys think it’s going to be before the eleven labs speech synthesiser source code gets leaked? by captainjake9
Most source code never gets leaked. Stable Diffusion has always been open source; it was released that way, not leaked.