Submitted by Dr_Singularity t3_yk4ono in singularity
[deleted] t1_iutqjzr wrote
Reply to comment by ninjasaid13 in Meta's newest AI determines proper protein folds 60 times faster by Dr_Singularity
> I'm not seeing any revolutionary technology with the metaverse
You don't think real-time, 3D, social experiences are more compelling, useful, and better than 2D ones, or that developing those doesn't require technology that will be revolutionary? You can look at some of the stuff they're doing, and willing to show right now, and it looks reasonably compelling to me - Codec Avatars, high resolution scanning of objects into VR/AR, etc.
> I'm seeing separate platforms that do that better like vrchat.
There's a significant difference between a piece of software designed to leverage only existing hardware and software capabilities to let people have "voice chat with avatars", and the kind of hardware and software work that a first-party headset company like Meta can spend billions of dollars developing. "VR Chat" doesn't have the financial leverage to drive the development of VR as a technology long-term; it just uses whatever already exists to make a proof-of-concept (anarchic) social experience, which can only go as far as Unity and existing hardware allow. It doesn't exist at all without other, much larger companies making all the hardware, APIs, and engines for it to use (and then actually letting it use them).

It (correctly) exploited the market opportunity created when none of the companies that released headsets launched with a compelling first-party social experience, but I'd virtually guarantee it disappears altogether once those companies start devoting financial firepower to competing with it for mindshare, because nobody who makes a headset is going to eschew a first-party social experience ever again, especially as in-headset cameras for face-tracking become the norm.
Eventually, when the entire space is more mature, there will probably be interest in an "open social platform" again, but I don't expect early competitors like VR Chat to be able to keep up as the space rapidly progresses and fragments over the next few years, and as more platforms are added (notably, Apple). I expect we'll have a large number of 'walled gardens' develop and diverge, and then eventually reconverge toward open platforms once the business opportunity becomes large enough to attract talent and major investment, as happened with social media in the 2000s.
> I'm not really sure who it's for.
I agree, I don't think Meta has articulated their vision well. That said, I think VR today is basically a "dorky precursor" to the VR of tomorrow, with bad UX and palatable primarily to technology enthusiasts, in the way that BBS/Usenet/IRC were the dorky precursors to the version of the internet that exists today, whose UX is palatable to everyone.
ninjasaid13 t1_iutuvgh wrote
I watched the video, and it seems they have a lot of cool technology, but unfortunately none of it was actually used, and what they showed didn't wow anyone. If it were me in charge, I would use some of the technology shown in the video in the actual metaverse to impress people and build hype, instead of what we got, which in many cases is worse than the technology we have today.
I can't imagine the connection between what we got and what they have in the labs.
[deleted] t1_iuu6k35 wrote
I think most of what they demoed is in the phase of "technically possible, but not consumer-ready yet".
Like, Codec Avatars. They initially accomplished 1.0 with a big camera-sphere. Neat, but not practical. We can't have every person visit a commercial camera-sphere to get an avatar.
So then they figured out how to do it in a way similar to FaceID: take a video of your face from a bunch of angles with a smartphone, run a bunch of photogrammetry post-processing on it, and build a map of the user's face. Consumers can do that with devices they have today. I think they've said it still takes many hours of processing, and Codec 2.0 still requires the elongated headset they showed the other man using to animate the mouth properly, but I think that's what's coming for consumers. Now that they're sure it's technically possible, they can start optimizing toward that very desirable endpoint, to achieve the same result more quickly and easily.
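Purely as an illustration of the idea (this is a toy sketch, not Meta's actual pipeline): multi-view capture means many frames observe the same facial landmark from different angles, each with noise, and post-processing fuses those observations into one 3D estimate. All names and numbers below are made up for illustration.

```python
# Toy sketch of multi-view fusion (hypothetical, heavily simplified):
# several "frames" observe the same facial landmark from different
# angles, each with sensor noise; post-processing fuses them into a
# single 3D estimate. Real photogrammetry solves camera poses and
# triangulates, which is why it can take hours of processing.

def fuse_observations(observations):
    """Average each axis across noisy multi-view observations of one landmark."""
    n = len(observations)
    return tuple(sum(obs[i] for obs in observations) / n for i in range(3))

# Assumed noisy observations of a single landmark (e.g. nose tip),
# in meters, from three different viewing angles.
frames = [(0.01, 0.02, 0.99), (-0.02, 0.01, 1.01), (0.01, -0.03, 1.00)]
landmark = fuse_observations(frames)
```

The fused estimate lands near the true position (0, 0, 1); the more frames you capture, the more the per-frame noise averages out, which is why the capture step wants video from "a bunch of angles" rather than a single photo.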
Now, they also have to combine this stuff with high-res environments, to avoid this being too uncanny; you don't want your high-res avatars in a cartoon environment. So this is where item scanning comes in. Starts small, same basic technology as face-scanning, but ends with a user being able to digitally import a whole room, or an intersection of a major city, or whatever.
Luckily, game engines and hardware are "cooperating" with this timeline. You can look at Unreal Engine 5 demos, like the Matrix City or Train Station demos, to see where that will be in the near future. Intel and Nvidia are constantly showing new real-time raytracing demos as lighting continues to be optimized as well.
> I can't imagine the connection between what we got and what they have in the labs.
If I were to hazard a guess, it's partly them struggling to normalize and introduce it to people, and partly producing an MVP so they can observe how people use it, and iterate as they discover what the real sticking points of the tech are. I think everyone knows that VR has an "input mechanism problem" in a number of places, and you can see them moving toward fixing it.
From a "hands" perspective, they introduced tracked controllers as the obvious MVP, but they're clearly also examining what the minimum hand tracking is that gives a user complex and useful input options, in a way that's unobtrusive and intuitive, using on-device processing of small motor movements.
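To make the "minimum necessary hand tracking" idea concrete, here's a hedged toy sketch of the kind of low-level gesture classification such input systems rely on: deciding whether a "pinch" has occurred from the distance between the thumb and index fingertips. The threshold and coordinates are assumed values, not anything a shipped runtime actually uses.

```python
import math

# Hypothetical sketch: classify a "pinch" gesture from the distance
# between thumb and index fingertip positions reported by a tracker.

PINCH_THRESHOLD_M = 0.02  # ~2 cm; an assumed tuning value

def is_pinching(thumb_tip, index_tip, threshold=PINCH_THRESHOLD_M):
    """Return True when the fingertips are close enough to count as a pinch."""
    return math.dist(thumb_tip, index_tip) < threshold

# Assumed fingertip positions in meters.
print(is_pinching((0.10, 0.20, 0.30), (0.11, 0.20, 0.30)))  # 1 cm apart -> True
```

The hard part in practice isn't this comparison; it's producing stable fingertip positions from cameras on-device, and doing it with low enough latency that small motor movements feel like direct input.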
You instinctively want to "move" in VR, but this isn't compatible with the average person's real environment. If you virtualize movement, you end up with an inner-ear disconnect, which makes people sick. Many companies, including Meta, are choosing native AR as a short-to-medium-term solution to marry the virtual and real environments together, so the user can navigate their real environment safely, since nobody but enthusiasts is willing or able to dedicate a "VR room" to facilitate safe movement.