
userbrn1 t1_iwni7hk wrote

I would caution against thinking this brings us close to FDVR; there is a fundamental difference between encoding and decoding neural patterns.

Progress like this is in decoding, meaning we take neural patterns and try to determine what the person was envisioning. This is analogous to Neuralink monkeys playing Pong with their minds or people controlling robotic limbs with their thoughts. We are making lots of progress in this area, and we can afford to be imprecise: with a robotic limb, it's fine if your elbow bends 50 degrees instead of 49 for the vast majority of tasks, and even fine motor tasks like writing have a point beyond which further precision is no longer useful. We also have the benefit of being able to measure the end result easily; we can measure the movement of a limb or the accuracy of a mind-controlled cursor.
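To put "decoding" in concrete terms, here's a toy sketch: a linear map from neural features to intended cursor velocity, which is roughly the classic BCI setup. Every array here is synthetic and made up purely for illustration; real decoders are far more elaborate.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training data: firing rates from 96 electrodes over 5000
# time bins, paired with the 2-D cursor velocity the subject intended.
firing_rates = rng.poisson(lam=5.0, size=(5000, 96)).astype(float)
true_velocity = rng.normal(size=(5000, 2))

# Classic linear decoder: solve velocity = firing_rates @ W by least squares.
W, *_ = np.linalg.lstsq(firing_rates, true_velocity, rcond=None)

# The point about tolerance: decoding has a directly measurable end result,
# so we can quantify the error, and a small error still gives usable control.
decoded = firing_rates @ W
print("mean abs velocity error:", np.abs(decoded - true_velocity).mean())
```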

Encoding would involve figuring out what signals to put in to recreate a specific conscious experience; it is the opposite of the process above. Full dive VR would require us to master sending a signal that the brain interprets in a specific way. If you're on the beach on a windy day, for example, you'd need a signal so precise that your brain truly accepts it as its own vision, which is incredibly complex. You'd need to simulate the very complex sensation of wind blowing across your arms, moving your clothes in particular ways, deflecting specific hair cells. You have likely never felt the same gust of wind twice, because of how rich our conscious experiences are.

In contrast to decoding, encoding has orders of magnitude less room for error; if the sensation on your skin is even slightly off, you'll realize it's fake and weird. Nor can we easily measure the end result of encoding, since the end result is a conscious experience; imagine trying to describe in words to a researcher that your proprioceptive sense of where your limbs are in space relative to each other feels kind of off.

The only way to empirically iterate would be to first master decoding, build up a massive database of decoded human experiences, and then simulate trillions of fake walks on the beach into a human brain, hoping to produce neural signals that, when decoded, line up almost perfectly with the empirically derived decodings of real human experiences. This is of course impossible to test on real humans and would likely require server farms lined with millions of human brains in jars, which, by definition, would have to be sentient and conscious for the results to be relevant to our own conscious experience.
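To make that iteration loop concrete, here's a rough sketch of the validation scheme I'm describing. Every function and array below is a hypothetical stand-in (stimulate_and_record and decode are placeholders, not real APIs), and the random search is just the simplest possible stand-in for "iterate":

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-ins; none of these exist as real APIs.
def stimulate_and_record(encoding_params):
    """Apply a candidate stimulation pattern, record the evoked activity."""
    return encoding_params + rng.normal(scale=0.1, size=encoding_params.shape)

def decode(neural_activity):
    """Mature decoder assumed as a prerequisite (identity for illustration)."""
    return neural_activity

# Database of decoded real experiences, e.g. thousands of "walk on the
# beach" recordings reduced to feature vectors. Synthetic placeholder data.
real_decoded_db = rng.normal(size=(1000, 64))
target = real_decoded_db.mean(axis=0)       # the experience we try to evoke

best_params, best_err = None, np.inf
for _ in range(500):                        # brute-force search as a stand-in
    candidate = target + rng.normal(scale=0.5, size=64)
    evoked = decode(stimulate_and_record(candidate))
    err = np.linalg.norm(evoked - target)   # distance from the real thing
    if err < best_err:
        best_params, best_err = candidate, err

print(f"best residual error: {best_err:.3f}")
```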

tl;dr it's good that we're getting better at decoding neural data, but it is an entirely different problem from the encoding that FDVR requires. In my opinion we do not have a viable pathway to FDVR, due to our inability to empirically test neural encoding at the scale and precision needed to make FDVR worth doing.

10

Kaarssteun t1_iwnkbl5 wrote

Of course this is not magically enabling FDVR. The first step to encoding neural patterns is understanding how to decode them; that's what I'd like to stress here. I haven't seen any work this coherent, and I'm excited!

24

Shelfrock77 t1_iwo0czy wrote

The person who replied to you is overcomplicating things. Encoding and decoding share such a monotonous relationship that it can sometimes be overlooked and taken for granted. Our consciousness is like VR, and it's proven that synthetic data can provide far more data for AI and humans to use in future artificial neural networks. First we get text-to-image for the brain, then we compile "time screenshots" to make text-to-video; then once we get text-to-3D-image and text-to-3D-video, reality will basically feel 100% blended. The singularity will unlock our lucid dreams, something our ancestors would drool over. To live in the dream realm again.

To make it simple, we are plugging our biological instruments into the same frequency as our computers, "wirelessly" (but still wired, just invisible to our eyes), for us to interpret back. We give the computer a command and it streamlines to another computer (being our consciousness) to interface with it. That's why in the "old" days, when they said someone cast a "spell", it's referring to spelling words out on the keyboard, or whatever you're using to remote-control someone. Imagine a cute little sim falling under a spell: you pop up in their world through a portal, and they'll be so brainwashed with religion they'll think you cast a spell or possessed them, because they disregard science and give more meaning to magic. To program, to brainwash, to be under a spell all mean the same thing. This was my epiphany when I was on DMT.

We are always programmed, even when we think we aren't. Free will is an illusion; why I say this is because of multiverse "theory". Natural and synthetic are illusions. It doesn't matter if you are in a simulation; it only matters that you exist. When you get killed in a video game, you just respawn, or what we would call reincarnate. I know I sound like Alan Watts right now lol, anyways back to playing MW2 Warzone.

5

AI_Enjoyer87 t1_iwo3dhc wrote

Magical rambling, Shelfrock! What are your timeline predictions? 😈

8

Shelfrock77 t1_iwo8foc wrote

FDVR will probably be on the market anywhere between 2025 and 2028 for the first generation (I put the deadline at 2030 nonetheless). As for when we get new bodies, that happens when we biologically die. Once mind-uploaded, your decision decides your fate: you can choose to stay in the computer as a "virtual being with a body" and not have a "real" body, or you can choose a "real" body just like you choose a car at a dealership. I mean, I don't think it's far-fetched to say that we could print out sex bots of all kinds with their own synthetic genes, just like how we customize our characters in a video game. ASI may be able to help us with that; it'll be like Cyberpunk 2077, where you can customize your hardware/mechanical biology. It's like Los Santos Customs but for your vessel haha. Once we sync synthetic and natural data, reinventions will occur quickly in this solar system. We reinvent god/universes/existence/consciousness/soul.

6

BinyaminDelta t1_iwofjoh wrote

2025 is two years away. We're currently at "monkey playing Pong".

Can it accelerate? Yes, but there are many, many tech and bio problems that need to be solved and then perfected before FDVR.

I admire your optimism but wouldn't be surprised if Neuralink (or equivalent) takes five more years to be usable, and then another five to ten more to reach FDVR level.

AGI could accelerate that timeline if it is able to show us a biotech path we're not seeing. AGI also needs to exist first.

7

Shelfrock77 t1_iwog6jn wrote

That's why I said 2025-2028, with 2030 as the deadline. My flair is a quote from a club full of billionaires addressing their plans: a privatized United Nations known as the World Economic Forum. I personally think it will happen from 2025 to 2028, but it could happen in 2029 or 2030. I honestly don't think we have to wait much longer, given the way things are headed, and it's only 2022. We are going through the 4th industrial revolution era right now.

5

Redvolition t1_iwq2rwi wrote

Don't think encoding is going to be all that difficult. Once we figure out how to record signals traveling through nerves non-invasively, all we would have to do is install the tech on a few dozen people, run them through stimulus sets, and log the correlations. Machine learning would do the rest.
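As a sketch of what "logging the correlations" might look like in practice (present_stimulus and read_nerve_signal are hypothetical stand-ins for hardware that doesn't exist yet, and the data is synthetic):

```python
import numpy as np

rng = np.random.default_rng(2)

def present_stimulus(stimulus_id):
    # Hypothetical stand-in: show/play stimulus `stimulus_id` to the subject.
    pass

def read_nerve_signal():
    # Hypothetical stand-in for a non-invasive nerve recorder (128 channels).
    return rng.normal(size=128)

# Run the subject through a fixed stimulus set, logging (stimulus, signal) pairs.
log = []
for stimulus_id in range(200):
    present_stimulus(stimulus_id)
    log.append((stimulus_id, read_nerve_signal()))

# The logged pairs become supervised training data: fit a model that maps
# stimulus -> expected nerve signal, i.e. a candidate encoder.
X = np.array([[s] for s, _ in log], dtype=float)
Y = np.stack([sig for _, sig in log])
W, *_ = np.linalg.lstsq(np.hstack([X, np.ones_like(X)]), Y, rcond=None)
print("fitted encoder weights:", W.shape)
```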

2

userbrn1 t1_iwqb63y wrote

How would we log those correlations? The end result we are trying to achieve is a conscious experience; we cannot directly measure that, so I'm not sure what data we would put into the machine learning model lol

1

Redvolition t1_iwr6f4o wrote

Take the vestibular sense, for example:

Step 1: Intercept nerve signals to projection pathways via implant.

Step 2: Put the human in a motion capture suit.

Step 3: Run human through a variety of motions and positions.

Step 4: Correlate the motion capture data with the nerve signals (a toy sketch below).
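Here's a toy version of Step 4, with synthetic arrays standing in for both recordings; nothing below reflects real vestibular data, and the linear fit is just the simplest possible stand-in for "correlate":

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic stand-ins: 10k time steps of motion-capture features (head
# orientation, angular velocity, etc.) and simultaneously intercepted
# vestibular nerve activity across 32 channels.
mocap = rng.normal(size=(10_000, 12))
nerve = mocap @ rng.normal(size=(12, 32)) + rng.normal(scale=0.1, size=(10_000, 32))

# Correlate the two streams, here as a linear map fit by least squares.
# Predicting nerve activity from motion is exactly the encoding direction:
# given a desired motion sensation, what signal should we inject?
W, *_ = np.linalg.lstsq(mocap, nerve, rcond=None)
residual = np.linalg.norm(mocap @ W - nerve) / np.linalg.norm(nerve)
print(f"relative fit residual: {residual:.3f}")
```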

1