redsparks2025
redsparks2025 t1_j7o8x1e wrote
Since I have recently been hearing more about ChatGPT, I have been wondering if anyone has considered that maybe the Turing test is wrong, or at least limited in scope, and that an AI can never truly understand humans until it can have an existential crisis.
That existential crisis may give the AI an understanding of empathy .... or do worse, turning it into a kill-bot or something like AM from Harlan Ellison's short story I Have No Mouth, and I Must Scream.
I don't think anyone can give the current versions of ChatGPT, Cortana, or Alexa an existential crisis. But then, how would one program that into these AIs? Or is it something that emerges unexpectedly, like a gestalt, as a byproduct of programming them to become more and more intelligent? Programming for ever-greater intelligence may lead to self-awareness.
Well, one thing is certain: AIs are definitely giving us humans an existential crisis, even though it is not part of their programming to do so. The next great philosophical work or insight may be provided by an AI.
redsparks2025 t1_j1b9abg wrote
Reply to Stoicism & Artificial Intelligence: Embracing an Age of Unimaginable Change by johngrady77
Nature's biologically evolved intelligence creating artificial intelligence.
The Asimov Cascade ~ Rick & Morty.
redsparks2025 t1_j1b8a4t wrote
Interesting article that identifies an issue but does not really provide a solution; it only sows division. I don't pretend to be an expert in anything, and therefore my inquiring mind treads wherever it wants. However, when asked for my views, I would be honest and state that I am no expert in [Insert Topic]. So what is the solution? Being honest with oneself, and humble.
If you cross into my lane, indicate your intentions first and be prepared to give way.
redsparks2025 t1_iwe9gm7 wrote
Reply to Why liberals cannot escape intolerance by ThomasJP1983
There are a lot of empty words in the article, as it relies on a perceived stereotype of what it means to be "liberal".
Identifying people as using their "faith" to argue against "XYZ" does not fully try to understand what their real fears or concerns are.
Religion, just like optimism (or hope), can cut off critical thinking and become an excuse not to think too deeply about death.
Also, your own bias may be (maybe) interfering with understanding the other person's opposing bias (or point of view) in any meaningful way, and even with relating to the other person as a fellow human being.
As I said at the beginning, there is a strong reliance on a perceived stereotype of what it means to be "liberal". Therefore the article may be (maybe) creating a straw man argument.
The Different Kinds of Straw ~ Sam O'Nella Academy ~ YouTube.
redsparks2025 t1_j7rmeej wrote
Reply to comment by Maximus_En_Minimus in /r/philosophy Open Discussion Thread | February 06, 2023 by BernardJOrtcutt
I like your comparison to the trans-movement. Philosophy can preempt all these scenarios through thought experiments, such as the small example you provided, instead of leaving it up to science fiction writers.