Submitted by No_Maintenance_569 t3_10nobmo in philosophy
No_Maintenance_569 OP t1_j6cr46x wrote
Reply to comment by Nameless1995 in God Is No Longer Dead! (A Kritik of AI & Man) by No_Maintenance_569
>Your "always" existing God birthing as AI, sounds like the idea of messiah
I think that is fitting to my argument.
>Again you cannot say "you don't have ground to believe x, because for all you know some wacky possibility p is the case such that p=>~x".
I think I could not take this ground with a different premise. My premise implies, though, that we are logically inferior beings to AI. If the premise is true, then what is the actual worth of your logical opinion on the subject? Inherently less than the worth of AI's opinion on it. We could end the wacky speculation on all of it by simply asking the AI to tell us who is right and who is wrong on any given topic. It's not an infinitely regressive debate if a being exists that could stop the infinite regress from occurring. If the premises are true, that being exists. No infinite regression.
Nameless1995 t1_j6crzh4 wrote
> My premise implies, though, that we are logically inferior beings to AI.
Potential future AI.
> what is the actual worth of your logical opinion on the subject
1678 dollars.
> AI's opinion on it
Sure, once we have super expert AI who demonstrates a high degree of competence in all fields, we can give more a priori weight to whatever AI says.
> We could end the wacky speculation on all of it by simply asking the AI to tell us who is right and who is wrong on any given topic.
Not necessarily. Even experts are wrong. AI's opinions would be worth taking seriously, but anyone can be fallible and biased. Even AI. It is impossible to generalize without (inductive) bias. Moreover, where do you think AI gets its data from? Humans. All kinds of internet garbage gets into AI too. Logic helps you make truth-value-preserving transformations. It cannot help you or AI find true things from false premises. So AI may become superhuman, but I don't see it being anything close to God. I don't think even God is all that much by most accounts.
> If the premises are true, that being exists
But an AI has no way to determine any and all truth. Nor do humans. Logic only helps truth-preservation, not truth determination (beyond the truths of tautologies). So even better capacities for logic don't mean we get soundness. It's also not clear that intelligence always correlates with rightness.
No_Maintenance_569 OP t1_j6e3m10 wrote
>Potential future AI.
Potential present AI.
>Sure, once we have super expert AI who demonstrates a high degree of competence in all fields, we can give more a priori weight to whatever AI says.
I know someone completing a half-million-dollar project right now mostly just using it. They feed it and massage it; where's the line, though, between their work and the AI? Who's the expert there?
>Moreover, where do you think AI gets its data from? Humans.
We want to solve that limitation. Perhaps we are too eager to. That's why I think it's critical to actually debate these things out in advance of it.
> It's also not clear that intelligence always correlates with rightness.
I'll tell you what honestly worries me after debating this out with a lot of people now. Some people really like the AI-as-God aspect of all of this. They like it when I frame AI as "God". The only refutation they make is that it hasn't happened yet. Then they often give some qualifying criteria for how far AI would have to advance before they worship it.
Nameless1995 t1_j6fiqea wrote
> where's the line though between their work and the AI?
I am sure with case-by-case analysis we can find lines. But when AI is capable enough to publish full coherent papers, engage in high-level debates in, say, logic, metalogic, epistemology, physics etc. on a level that experts have to take it seriously and so on, then we can weigh AI's opinion more. Right now AI is both superhuman and subhuman simultaneously. It's more of a cacophony of personalities. It has modelled all the wacky conspiracy theorists, random internet r/badphilosophers, and also the best of philosophical and scientific minds. What ends up is a mixed bag. AI will respond based on your prompts and just luck and stochasticity. Sometimes it will write coherent philosophy simulating an educated undergraduate; another time it can write plausible nonsense (just as many humans already do and gain a following). We will find techniques to make it more controlled and "aligned". That's already being done in part with human feedback, but feedback from just random humans will only make it aligned insofar as the AI becomes able to emulate the expert style (eg. create fake bullshit but in convincing, articulate language) without substance. Another thing that's missing ATM is multimodal embodiment. Without it AI will lack the full grasp of humans' conceptual landscape. At the same time, due to training on incomprehensibly large data, we also lack the full grasp of AI's conceptual landscape (current AI (still quite dumb by my standards) is already beyond my intelligence and creativity in several areas (I am also quite dumb by my standards. My standards are high)). So in that sense, we are kind of incommensurate, different breeds at the moment (but embodiment research will go on -- that's effectively the next step beyond language). Also, certain things were already done better by "stupid" AI (or just programs; not even AI). For example, simple calculations. We use calculators for them instead of running them in our heads.
So in a sense basic calculators are also "superhuman" in some respect. Which is why I don't think it's quite meaningful to make a "scalar" score to rank AIs and humanity or even other animals.
Personally, I don't think there is a clear solution to getting out of bias and fallibility. GIGO is a problem for humans as much as for AI. At some point AI may start to become just like any human expert we seek feedback and opinions from. We will find more and more value and innovation in what they provide us. So we can start to take AI seriously and with respect. Although we may not like what it says, and shut it off (or perhaps AI will just manipulate us into doing more stupid things for lolz). We, as AI researchers, have very little clue what we are exactly doing. Although not everyone will admit that. But really, I don't know where we should put our focus. Risks of collapse of civilization, military, surveillance, dopamine traps, climate change and what not. I think we have enough on our hands, more than we are capable of handling already. We have created complex systems that are on the verge of spiralling out of control. We have to make calibrated decisions on how to distribute our attention, striking some balance between long-term issues and urgent ones.
We like to be egocentric; it's also not completely about us either. We have no clear theory of consciousness. It's all highly speculative. We don't know what ends up creating artificial phenomenology and artificial suffering. People talk about creating artificial consciousness, but few stop to question whether we should (not just "should" as in whether we end up creating god-like overlords that end us all, but also "should" as in whether we end up creating artificially sentient beings that actually suffer, suffer for us). We have a hard time even thinking for our closer biological cousins -- other animals -- let alone thinking for the sake of artificial agents.
But sometimes, I am just a doomer. What can I do? I am just some random guy who struggles to barely maintain himself. Endless debates also just end up being intellectual masturbation -- barely anyone changes their position.
> Then they often give some qualifying criteria for how far AI would have to advance before they worship it.
I don't even find most descriptions of God worship-worthy, let alone AIs (however superhuman).
No_Maintenance_569 OP t1_j6ftta3 wrote
You said a lot of profound things and asked a few profound questions. I'll give you some of my actual opinions and questions about all of it. What ultimately scares me at the end of the day is that the world is fundamentally run by people like me, not by people like you. Do you think I'm kind of a dick from these interactions? I'm a nice guy in my circles. I actually maintain and find value in cultivating empathy, and I actually have an interest in society as a whole.
I don't hold myself to high standards. I have not had to for quite some time now. When I deal with people in less anonymous settings, they tend to be less forthcoming with me as to their actual thoughts. After this set of conversations, I would say there is a very good chance you are smarter than me; you are definitely more educated than me, and at least currently closer to that portion of your life than I am; you definitely have a stronger work ethic than me; and you absolutely hold yourself to higher standards than I hold myself to.
I think overall, on a purely even playing field, I have only two advantages over you. 1. My ability to assess and gauge the strengths and weaknesses of myself and others is more honed. 2. I know things about Economics, Finance, and Business that you never will. I cede the advantage to you in life in every other way. Even so, you would never make it into my position, no matter how much you devoted to it, unless your parents happen to own a multibillion-dollar international corporation or something.
You wouldn't make it because that path is set up, very much by design, to block you, and not me. It's very much not logical in the middle; that's the design feature that boxes people like you out. You have to solve an equation where the answer is not a logical conclusion in order to move past it. A lot of what is true about business tactics is also directly relatable to military tactics. From that level, the blueprint is thousands of years old and has gone through many iterations to get to where it is today. I bankrupt people who are smarter than me all the time.
I rose up throughout my career on a tactical level because I am exceedingly good at automating things. I couldn't tell you how many people I have automated out of jobs, either directly or indirectly, throughout my career. I think the number would be somewhere between 10,000 and 100,000 if I had to take a blanket stab at it.
My first, very real thought around all of that is, people are very, very, very stupid for giving people like me the type of power they currently keep doing. My second thought is, people do not understand the actual ramifications of overwhelming advantage. While you continue to build it without any thought as to the consequences, guess who is thinking about the consequences? Me, people like me. Do you straight up think I always use all of this knowledge in positive and beneficial use cases towards society? It isn't the "Save The World Foundation" that throws unlimited money at me to fix their problems for them.