Submitted by Dr_Singularity t3_yk4ono in singularity
ninjasaid13 t1_ius8fwt wrote
Reply to comment by Black_RL in Meta's newest AI determines proper protein folds 60 times faster by Dr_Singularity
Meta should abandon the metaverse and go into AI.
was_der_Fall_ist t1_iusadw2 wrote
My understanding is that Meta views AI as essential to the success of the metaverse, and they thus are heavily investing in both.
[deleted] t1_iute4od wrote
Two points:
a) As others have said, I think they view these as interrelated things, and both involve a high level of "hardtech" R&D. Facebook is one of the largest (if not the largest) purchasers and users of GPUs in the US, and they recently talked about quintupling the number of GPUs in their datacenters.
b) The negative opinion of Facebook's "Metaverse" initiative is coming from three groups:
- Shareholders. They'd rather Meta just find a way to turn the ad-revenue taps back on and stop spending money on R&D that doesn't make money right now. This is probably shortsighted; that status quo is permanently gone. They can either pivot and find a way to own a new hardware platform, or pretend the status quo hasn't changed for the next 10 years and eventually end up like Yahoo!, AOL, MySpace, etc.
- Average people who don't like Zuckerberg/Facebook. It doesn't matter what Meta does; they'll dislike it. I'd argue this metaverse work is the least useless or toxic thing Meta has done in a long time, and I'm fine with them spending shareholder money on it rather than on A/B testing whatever new dark pattern they've discovered to manipulate teenagers' dopamine.
- People who see MVPs and extrapolate to the finished product from them. The stuff that's out there is just what exists today; what they're clearly planning is far more ambitious. I may never be a user, but they're training hundreds of SWEs who will eventually leave to start competitors and build the next version of the internet. People who think this is dumb and won't be important are like the people who didn't understand the internet in the early '90s.
ninjasaid13 t1_iutg1st wrote
I'm not seeing any revolutionary technology with the metaverse; if it's virtual reality, I'm seeing separate platforms that do that better, like VRChat. I'm not really sure who it's for.
[deleted] t1_iutqjzr wrote
> I'm not seeing any revolutionary technology with the metaverse
You don't think real-time, 3D social experiences are more compelling and useful than 2D ones, or that developing them requires technology that will be revolutionary? You can look at the stuff they're doing and are willing to show right now, and it looks reasonably compelling to me: Codec Avatars, high-resolution scanning of objects into VR/AR, etc.
> I'm seeing separate platforms that do that better, like VRChat.
There's a significant difference between a piece of software designed to leverage only existing hardware and software capabilities so people can have "voice chat with avatars", and the kind of hardware and software work a first-party headset company like Meta can spend billions of dollars developing.

VRChat doesn't have the financial leverage to drive the development of VR as a technology long-term; it uses whatever already exists to build a proof-of-concept (anarchic) social experience, which can only go as far as Unity and current hardware allow. It doesn't exist at all without other, much larger companies making the hardware, APIs, and engines it uses (and then actually letting it use them).

It (correctly) exploited the market opportunity created when none of the companies that released headsets launched with a compelling first-party social experience. But I'd virtually guarantee it disappears altogether once those companies start devoting financial firepower to competing for mindshare, because nobody who makes a headset is going to eschew a first-party social experience ever again, especially as in-headset cameras for face tracking become the norm.
Eventually, when the entire space is more mature, there will probably be interest in an "open social platform" again, but I don't expect early competitors like VRChat to keep up as the space rapidly progresses and fragments over the next few years, and as more platforms are added (notably, Apple's). I expect a large number of walled gardens will develop and diverge, and then eventually reconverge toward open platforms once the business opportunity becomes large enough to attract talent and major investment, as happened with social media in the 2000s.
> I'm not really sure who it's for.
I agree; I don't think Meta has articulated their vision well. That said, I think VR today is basically a "dorky precursor" to the VR of tomorrow, with bad UX and palatable primarily to technology enthusiasts, in the way that BBS/Usenet/IRC were the dorky precursors to the internet that exists today, whose UX is palatable to everyone.
ninjasaid13 t1_iutuvgh wrote
I watched the video, and it seems they have a lot of cool technology, but unfortunately none of it was actually used, and what they showed didn't wow anyone. If it were me in charge, I would use some of the technology from the video in the actual metaverse to impress people and build hype, instead of what we got, which in many cases is worse than technology we already have today.
I can't imagine the connection between what we got and what they have in the labs.
[deleted] t1_iuu6k35 wrote
I think most of what they demoed is in the phase of "technically possible, but not consumer-ready yet".
Like, Codec Avatars. They initially accomplished 1.0 with a big camera-sphere. Neat, but not practical. We can't have every person visit a commercial camera-sphere to get an avatar.
So then they figured out how to do it in a way similar to Face ID: take a video of your face from a bunch of angles with a smartphone, run a bunch of photogrammetry post-processing on it, and build a map of the user's face. Consumers can do that with devices they already have. I think they've said it still takes many hours of processing, and Codec 2.0 still requires the elongated headset they showed the other man using to animate the mouth properly, but I think that's what's coming for consumers. Now that they're sure it's technically possible, they can start optimizing toward that very desirable endpoint, to get the same result more quickly and easily.
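The capture side of that pipeline is easy to picture in code. Here's a toy sketch, not Meta's actual implementation: the function name and numbers are made up, and the photogrammetry/reconstruction step itself is assumed to happen in a separate back end that consumes the sampled frames.

```python
def sample_frame_indices(total_frames, n_samples):
    """Pick n_samples evenly spaced frame indices from a video capture,
    so a photogrammetry back end (not shown) gets views from all angles
    instead of hundreds of near-duplicate frames."""
    if n_samples <= 0 or total_frames <= 0:
        return []
    if n_samples >= total_frames:
        return list(range(total_frames))
    step = total_frames / n_samples
    return [int(i * step) for i in range(n_samples)]

# e.g. a 10-second capture at 30 fps, keeping 24 views for reconstruction
indices = sample_frame_indices(total_frames=300, n_samples=24)
```

The expensive part, turning those views into a face mesh, is exactly what they reportedly spend hours of processing on.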
Now, they also have to combine this with high-res environments to avoid it being too uncanny; you don't want high-res avatars in a cartoon environment. This is where item scanning comes in: it starts small, using the same basic technology as face scanning, but ends with a user being able to digitally import a whole room, or an intersection of a major city, or whatever.
Luckily, game engines and hardware are "cooperating" with this timeline. You can look at Unreal Engine 5 demos, like the Matrix city or the train station, to see where that will be in the near future. Intel and Nvidia are constantly showing new real-time ray-tracing demos as lighting continues to be optimized as well.
> I can't imagine the connection between what we got and what they have in the labs.
If I were to hazard a guess, it's partly them struggling to normalize and introduce it to people, and partly shipping an MVP so they can observe how people use it and iterate as they discover the real sticking points of the tech. I think everyone knows VR has an "input mechanism problem" in a number of places, and you can see them moving toward fixing it.
From a "hands" perspective, they introduced tracked controllers as the obvious MVP, but they're clearly also examining the minimum hand tracking necessary to give a user complex and useful input options, in a way that's unobtrusive and intuitive, using on-device processing of small motor movements.
You instinctively want to "move" in VR, but this isn't compatible with the average person's real environment. If you virtualize movement instead, you get an inner-ear disconnect that makes people sick. Many companies, including Meta, are choosing native AR as a short-to-medium-term solution, marrying the virtual and real environments so the user can navigate their real surroundings safely, since nobody but enthusiasts is willing or able to dedicate a "VR room" to safe movement.
Artanthos t1_iusq4u1 wrote
Two sides of the same coin as far as Meta is concerned.
aVRAddict t1_iusrw80 wrote
The metaverse will be powered by AI.
ObjectiveDeal t1_iuupf2a wrote
If they could figure out the metaverse, they would already have control of AI.