Sirisian
Sirisian t1_iw0327l wrote
Reply to comment by Thatingles in The CEO of OpenAI had dropped hints that GPT-4, due in a few months, is such an upgrade from GPT-3 that it may seem to have passed The Turing Test by lughnasadh
> In many ways I hope the transition to AI is fairly slow, because society isn't prepared in the slightest.
It should be gradual in most cases simply because of hardware limitations and foundry costs. The slight problem is that "gradual" might be just over 22 years until things get fuzzy. The important part is that this should be enough time for each wave of advances to be normalized in society. Each advance gets PR and articles, and society gets used to seeing it. Remember when computers could put a rectangle around people and objects and label them? It was a huge thing. Then they could scan faces, also a big thing; then it normalized and we unlock our devices with it. Then we had self-driving cars going around cities using more advanced versions. Things like text-to-image and diffusion inpainting are a recent example. People use them now to fix images or generate ideas, and the stories are slowing down as it all normalizes. (Some even find it boring already, which is telling.)
As computers advance, there should be a delay from a specialized AI designing a faster chip, to the foundry being able to fabricate it, to mass production, to applying it to old and new problems. As long as this delay is at least a few months long, I think humanity will adapt. This is the optimistic viewpoint, though, as nothing says these delays can't shrink or be optimized away once the cycle has repeated enough times.
Sirisian t1_ivv7m7a wrote
Reply to Waymo launches the world's most advanced robo-taxi service. Rides can be hailed to go anywhere in downtown Phoenix, Arizona, 24/7, and with no safety driver by lughnasadh
I cannot wait to see vehicles with no steering wheel on the road once they're finalized. The US okayed such vehicles a few months ago. Some people asked before why these cars still had steering wheels, and this is why: it's a whole regulatory process.
Sirisian t1_ivmmshb wrote
Reply to comment by wolfgang187 in Alphabet’s Wing drones will soon be delivering DoorDash orders in Australia by prehistoric_knight
If they can get the ballistic trajectory just right, we could have loitering hotdog drones, which makes it all worth it. "Hey Google, hotdog" *raises hand*
Sirisian t1_iv7tqjb wrote
Reply to Researchers create a Drone That Can 'See Through Walls' With Wifi | At the University of Waterloo recently fixed one up with a scanning device that is the definition of invasive. by chrisdh79
Wi-Fi device positioning is incredibly old. Students at my university used a similar technique for triangulating people. Using a drone, though, is interesting, as it can quickly fly around collecting a lot of measurements at specific locations.
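For anyone curious, here's a minimal sketch of the underlying positioning idea: convert RSSI readings taken at known waypoints into distance estimates with a log-distance path-loss model, then solve for the transmitter location with least squares. All constants, names, and readings here are illustrative assumptions, not anything from the paper.

```python
import numpy as np
from scipy.optimize import least_squares

# Illustrative log-distance path-loss constants (assumptions, not measured).
TX_POWER_DBM = -40.0   # assumed RSSI at the 1 m reference distance
PATH_LOSS_EXP = 2.5    # assumed indoor path-loss exponent

def rssi_to_distance(rssi_dbm):
    """Estimate distance in meters from a single RSSI reading."""
    return 10 ** ((TX_POWER_DBM - rssi_dbm) / (10 * PATH_LOSS_EXP))

def locate(waypoints, rssi_readings):
    """Least-squares fit of the transmitter position from drone waypoints."""
    distances = np.array([rssi_to_distance(r) for r in rssi_readings])
    residuals = lambda p: np.linalg.norm(waypoints - p, axis=1) - distances
    return least_squares(residuals, x0=waypoints.mean(axis=0)).x

# Drone samples RSSI at four known (x, y) waypoints, in meters.
waypoints = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
readings = [-58.0, -63.0, -61.0, -69.0]
print(locate(waypoints, readings))  # estimated transmitter position
```

The drone mostly buys you more, better-spaced waypoints, which is exactly what makes a fit like this well conditioned.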
Sirisian t1_iub37jw wrote
Reply to [N] Andrej Karpathy: Tesla AI, Self-Driving, Optimus, Aliens, and AGI | Lex Fridman Podcast #333 by Singularian2501
He didn't ask my question, darn. He turned the various camera questions people asked into one vague camera question. Kind of figured that would happen. He tends not to get into specifics, which is unfortunate.
Sirisian t1_itxdghs wrote
Reply to Engineers at UNSW have found a way to convert nerve impulses into light, which could lead to nerve-operated prosthetics and brain-machine interfaces. by unswsydney
> Now that the researchers have shown that the optrode method works in vivo, they will shortly publish research that shows the optrode technology is bidirectional – that it can not only read neural signals, but can write them too.
Whoa, that is a huge deal. That's the advantage of a neural implant (other than permanent connections): the ability to write feedback to the brain, just like real muscles and nerves do. That would allow controlling prosthetics without looking at them and, in theory, much faster learning.
Sirisian OP t1_itmkn1o wrote
Reply to comment by bigboyeTim in A perspective on mixed reality and industry directions by Sirisian
In an r/virtualreality thread years ago, people were talking about living life with literal rose-tinted glasses: the idea that one could augment even a dystopian world into a more ideal, colorful one. A common example is real-life ad-block, but it could also make the world more vibrant, or even fantastical.
Sirisian OP t1_itmjdp1 wrote
Reply to comment by Dull_Veterinarian_33 in A perspective on mixed reality and industry directions by Sirisian
> I still don't see how it could be sold to a large public.
> Either you need these devices for very complex tasks, or for very simple tasks.
> For everything in between it's not really needed.
This is a very real point. I kind of tried to show a similar viewpoint: a non-mainstream device can only do the "basic" stuff and thus has very little utility. It's similar to Meta trying to do very basic collaborative conferences but being unable to handle much more complex scenarios. Their recent realization that people expect legs on avatars, for instance, shows that even their basic experiences require some more advanced features. When the hardware doesn't accommodate that, it falls short.
I really don't think it can be sold to the public until it's a complete mainstream device that covers basic to complex tasks. It's one of those devices where you hand it to someone and, once it's there, they'll be like "okay, I can't go back to 2D displays". It's something you'd have at work and at home, part of your life like a cellphone once was. Anything that you just take off and set to the side will end up similar to VR headsets.
Sirisian OP t1_itmdmuo wrote
Reply to comment by BlaineBMA in A perspective on mixed reality and industry directions by Sirisian
I was very critical of Glass back then and thought it was going to poison the well of AR by having only one display. I did not expect that it would be the camera that showed up in the news. I keep wondering if that will come back with MR, which will have way more cameras.
I think the tablet setup for self-driving cars will probably be what we see for a while, especially once steering wheels are removed. The main thing is you need some form of input, and touch is intuitive.
> Meta VR appears to require more of a disconnect from reality.
That's the issue with using passthrough video. You can't see people's eyes or anything, and for enterprise and collaborative systems with people in a room that's not ideal. I don't envy the people trying to sell such systems. HoloLens for enterprise is much closer to what people expect, and even then the hardware is fraught with issues.
Sirisian OP t1_itm74n9 wrote
Reply to comment by kimmeljs in A perspective on mixed reality and industry directions by Sirisian
Yes, that is a huge issue. I hope that 240 Hz+ refresh rates along with 10 kHz eye tracking will go a long way toward alleviating the vergence-accommodation conflict, or toward simulating some of what a lightfield display offers. I don't foresee them being perfect, but people are somewhat adaptable. VR displays are mediocre, and most of us get over the imperfections and nausea fairly quickly. What long-term issues that could cause with MR use will be interesting to track.
Sirisian t1_it8yi9y wrote
Not sure I'm buying this. MicroLED, as mentioned, should be fine in terms of power usage, so it's not the display technology.
> More video processing means more transistors in the TV’s System on Chip (SoC) IC compared to a 4K version, and so, more power consumption.
Using an 8K-capable decoder and modern fabrication should handle this fine. This ignores that, in the time since 4K launched, chips have gotten smaller and more energy efficient. Now, take older hardware and run it at 8K and it will draw a lot of power, but that's exactly what regulation like this is trying to prevent: devices that last years while drawing way more power than they should.
Sirisian t1_is70wwy wrote
Reply to [N] First RTX 4090 ML benchmarks by killver
I'm hoping Samsung gets their GDDR7 modules out fast and into the Ti models. If so, the memory bottleneck will be basically gone: bandwidth would go from ~1 TB/s to 1.728 TB/s.
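The bandwidth math is just bus width times per-pin data rate; a quick sketch below, assuming a 4090-style 384-bit bus, 21 Gbps/pin GDDR6X today, and the 36 Gbps/pin figure Samsung has announced for GDDR7 (the Ti-model pairing is my speculation):

```python
# Peak memory bandwidth: (bus width in bits / 8) bytes per transfer
# times the per-pin data rate in Gbps gives GB/s.
def bandwidth_gb_s(bus_width_bits, pin_rate_gbps):
    return bus_width_bits / 8 * pin_rate_gbps

print(bandwidth_gb_s(384, 21))  # GDDR6X: 1008 GB/s ~= 1 TB/s
print(bandwidth_gb_s(384, 36))  # GDDR7:  1728 GB/s  = 1.728 TB/s
```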
Sirisian t1_irfm6ho wrote
Reply to comment by rickg in AI tool can scan your retina and predict your risk of heart disease ‘in 60 seconds or less’ by chrisdh79
Not necessarily. A lot of health complications start as subtle symptoms that grow. With enough independent data, it's possible to connect current blood vessel configurations to future states. That is, even though the samples come from different people, there's a transition from "good" to "bad" states that should be detectable at certain points. Just reading someone's current state might be enough to determine whether they're on the slope toward bad health, by identifying features in the blood vessels that don't appear in healthy samples.
Being able to do very detailed scans like they are seems to give them enough features to make these predictions. That said, you're probably right: having scans over time from the same individuals would probably help even more with spotting any good/bad transitions.
Sirisian t1_irfidkr wrote
Reply to comment by bajo2292 in AI tool can scan your retina and predict your risk of heart disease ‘in 60 seconds or less’ by chrisdh79
It's mentioned in the article:
> The software works by analyzing the web of blood vessels contained within the retina of the eye. It measures the total area covered by these arteries and veins, as well as their width and “tortuosity” (how bendy they are). All these factors are affected by an individual’s heart health, allowing the software to make predictions about a subject’s risk from heart disease just by looking at a non-invasive snapshot of their eye.
It's also mentioned that their dataset is skewed toward UK demographics, so they'll need more data to cover a wider population.
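To make the quoted mechanism concrete, here's a minimal sketch of a predictor over those vessel features. The feature set matches what the article names (area, width, tortuosity), but the model, numbers, and units are illustrative assumptions, not the authors' actual pipeline:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical rows: [total vessel area (mm^2), mean width (um), tortuosity].
X = np.array([
    [42.0, 85.0, 1.05],  # wider, straighter vessels
    [40.5, 82.0, 1.08],
    [35.0, 70.0, 1.30],  # narrower, bendier vessels
    [33.0, 68.0, 1.35],
])
y = np.array([0, 0, 1, 1])  # 0 = low risk, 1 = high risk

model = LogisticRegression().fit(X, y)
print(model.predict_proba([[38.0, 75.0, 1.20]])[0, 1])  # estimated risk
```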
Sirisian t1_iwcxm4r wrote
Reply to comment by panconquesofrito in Waymo’s driverless taxis keep making incremental progress, while others flounder by AdmiralKurita
It's probably far too profitable in the long run to cancel. The big picture includes trucking, replacing an estimated 3.6 million truckers in the US. It has the potential to save industries around $200 billion a year, which is a massive amount of money even with competition.
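As a rough sanity check on that figure (the $55k average all-in cost per driver is my assumption for illustration, not a sourced number):

```python
truckers = 3_600_000            # estimated US truck drivers
avg_annual_cost_usd = 55_000    # assumed wages + benefits per driver
print(truckers * avg_annual_cost_usd)  # 198,000,000,000 ~= $200B/year
```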