hucktard t1_j7rg8gg wrote
How hard is hard? I mean, how fast does artificial superintelligence (ASI) have to appear to count as a hard takeoff? Within 1 year? 1 month? 1 day? I think it's also possible to have a somewhat narrow ASI: an AI that is super smart at most things but still very limited at other tasks that humans do. In fact, I think that is the likely scenario, and we actually already have very limited versions of it.
I don't think we will have a super hard takeoff, like a godlike ASI that appears almost instantly. I do think the rate of advancement could be extremely fast, though: really impressive but not completely general AI within the next year or two, and then mind-blowing, world-changing advancements over the next few years. But there will be no godlike AI that appears overnight and suddenly rules the world.
oOMaighOo t1_j7rn8y0 wrote
I agree
I am starting to think that achieving AGI is wildly overrated given what we are seeing just from GPT-3.5. That is already a very powerful tool, and GPT-4 and other more advanced LLMs are just around the corner. The way things look right now, they might just turn the world upside down in a way that very much resembles the singularity.
Also, it's not just the technology that's impressive but also the rate of adoption. It's as if everyone had been waiting for a prompt/interface usable by non-expert users.
throwaway764586893 t1_j7tbp3c wrote
It would have to be godlike to stop my worsening health conditions.
sumane12 t1_j7th7xg wrote
I'm of a similar mind. We already have narrow superintelligent AI. I don't think a godlike super AI will appear instantly either, but I do think the first AGI will be an ASI. How could it not be? Speed-of-light thinking, the ability to search the web instantly, no need to eat or sleep, the ability to copy itself multiple times to work on multiple tasks in parallel. I think a fast takeoff is inevitable; we already have a superintelligent assistant in the form of ChatGPT, and that will only improve.
That being said, I don't think the recursive self-improvement will be immediate. It will be quick, but it will still take a few years to get from AGI to an end to human invention and the godlike AI we expect to result. It's also not clear to me at what point we will merge with AI, or what the outcome of that will be; it may well be that we become the ASI.
BenjaminHamnett t1_j7uhjmm wrote
I always assumed someone, or a cyborg society, would merge with the AI. It may come down to arbitrary semantics to describe what happens.
I always assumed a combined cyborg hive would be stronger than AI alone. The last human creators would have more incentive and more capability than (relatively) detached programmers, if there could even be such a thing, considering that anyone reading this today is already essentially a cyborg.
That AI is already writing code is what skews the odds a bit now. It becomes a bit more likely that someone will give a detached AI enough computing power to use evolutionary programming to bootstrap a sci-fi singularity. I still think this is less likely than a neural-implant cyborg hive-mind singularity, but the odds are approaching 50:50, where before I thought it was more like 90:10 in favor of a cyborg path over straight hardware.