Comments
bottomknifeprospect t1_j996tt1 wrote
Not sure if this sub is like /r/science and whether this will get nuked, but it's funny because that line about liquid is almost exactly what the guy in Ex Machina says about his new brain prototype.
94746382926 t1_j99a77a wrote
It's not; posting rules are much less strict here.
talligan t1_j98io1m wrote
So it sounds like they added an extra fitting parameter
lughnasadh OP t1_j96z2qt wrote
Submission Statement
The AI behind self-driving cars could do with a boost. Although some developers keep touting Level 5 autonomy as coming "soon", they've been saying that for a while. In reality, Level 4 is about the most anyone has achieved with a commercial product. That's good for predetermined routes, but the promise of Level 5 is "door-to-door" autonomy.
This seems like quite a fundamental breakthrough. It's interesting to wonder when it will first be commercialized.
AntiworkDPT-OCS t1_j97uj12 wrote
Yeah, this sounds very, very cool. This seems elegant and much more likely to solve these harder problems.
Responsible-Book-770 t1_j97ardu wrote
Astonishing that this could be applied to computing and honestly anything. Wow.
Tenter5 t1_j9ao9f5 wrote
Eh, it uses approximations and training data. It's not going to solve novel problems.
Responsible-Book-770 t1_j9c71at wrote
It can evolve
koopastyles t1_j98xnqv wrote
Typically, the last thing you want is a worm in your network
IceColdPorkSoda t1_j98lmni wrote
This reminds me of the DNA-based AIs from the Hyperion Cantos.
bizarromurphy t1_j98wa2k wrote
Hyperion! What a great set of books. Cybrids?
IceColdPorkSoda t1_j991bc3 wrote
Cybrids were the AI personalities in the megasphere. They were connected to their physical bodies through the void that binds.
charronia t1_j988nc7 wrote
Sounds like a liability nightmare. You put one of these in your cars, without being able to predict what it's gonna do in any given situation because it keeps modifying itself.
Hawk13424 t1_j98wlhk wrote
Kind of like people?
whiteknives t1_j99cw87 wrote
People are, indeed, liability nightmares.
They’re easily distracted, highly variable in visual acuity and intelligence, unpredictable, prone to fatigue, and their judgment is readily compromised by any number of external factors.
If cars were invented today, humans would almost certainly be banned from driving them.
Toysoldier34 t1_j99wx6f wrote
Did you read it and understand what's going on? Machine learning by its nature is always evolving and modifying itself; that's what makes it good. That said, a model can still be saved in a fixed form that doesn't change, which is what would be shipped as distinct versions for cars.
Some parts of the article worth rereading:
> “Their method is beating the competition by several orders of magnitude without sacrificing accuracy,” said Sayan Mitra, a computer scientist at the University of Illinois, Urbana-Champaign.
>
> As well as being speedier, Hasani said, their newest networks are also unusually stable, meaning the system can handle enormous inputs without going haywire. “The main contribution here is that stability and other nice properties are baked into these systems by their sheer structure,” said Sriram Sankaranarayanan, a computer scientist at the University of Colorado, Boulder. Liquid networks seem to operate in what he called “the sweet spot: They are complex enough to allow interesting things to happen, but not so complex as to lead to chaotic behavior.”
Dillweed999 t1_j97woae wrote
Yeah, they keep saying whatever the latest breakthrough is will be one of the last puzzle pieces. We'll see...
rogert2 t1_j99amgr wrote
"The reason I was at the adult book store is that my car's worm brain drove me there on autopilot."
"Okay... but you spent $74 dollars there."
"Worm brain, honey. It was the worm brain."
RoosterMcNut t1_j9bcy1u wrote
If those worms can use the turn signal, they’re already better drivers than half the people out there.
CondiMesmer t1_j99dfyg wrote
Not sure how true this is or how it would pan out. If it were true, this wouldn't just be a headline about self-driving cars; it would be reported as a major breakthrough in ML.
omniron t1_j9avjw4 wrote
This article doesn’t make any sense but I’m too lazy to find the original paper.
ChineseWeebster t1_j9azs7o wrote
With all the billions and all the time already spent trying to make a good-enough ML model, what's preventing a more traditional approach from working?
RegularBasicStranger t1_j99qug7 wrote
When people drive a car, they look at the moving objects around them, so quick visual recognition of each object's position and expected trajectory is necessary.
So the object's features, such as the angle it is facing and its turn signals, would need to be used to predict its trajectory.
The distance to an object could be determined quickly if there were three video cameras pointing in the same direction but focused at different distances, with the distance inferred from which camera gets the clearest image.
The video probably should be low resolution and limited to just three colours, red, yellow and green, since only those three colours carry meaning on the road.
thegoldengoober t1_j98xcda wrote
This sounds like the kind of thing required to make an AI system truly general. Right now, as I understand it, no matter how capable we build a system, its capabilities remain rigid. I imagine something more plastic could be immensely more capable.
pshawSounds t1_j974uwr wrote
Full article:
While the algorithms at the heart of traditional networks are set during training, when these systems are fed reams of data to calibrate the best values for their weights, liquid neural nets are more adaptable. “They’re able to change their underlying equations based on the input they observe,” specifically changing how quickly neurons respond, said Daniela Rus, the director of MIT’s Computer Science and Artificial Intelligence Laboratory.
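To make the "changing their underlying equations" idea concrete, here is a minimal toy sketch of a single liquid-style neuron whose response speed depends on the current input. The names, values, and the exact form of the equation are illustrative assumptions, not the authors' code.

```python
import numpy as np

def liquid_neuron_step(x, I, dt, w_tau, w_in, b, A):
    """One Euler step of a toy 'liquid' neuron.

    The effective time constant (how quickly the neuron responds) is not
    fixed: it is modulated by a nonlinearity f of the current input I,
    which is the sense in which the underlying equation changes based on
    the input the network observes.
    """
    f = np.tanh(w_in * I + b)          # input-dependent gate
    dxdt = -(w_tau + f) * x + f * A    # leak rate grows or shrinks with f
    return x + dt * dxdt

# Toy usage: a stronger input makes the neuron relax faster toward A.
x = 0.0
for I in [0.1, 0.1, 2.0, 2.0]:
    x = liquid_neuron_step(x, I, dt=0.05, w_tau=1.0, w_in=1.5, b=0.0, A=1.0)
```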
One early test to showcase this ability involved attempting to steer an autonomous car. A conventional neural network could only analyze visual data from the car’s camera at fixed intervals. The liquid network — consisting of 19 neurons and 253 synapses (making it minuscule by the standards of machine learning) — could be much more responsive. “Our model can sample more frequently, for instance when the road is twisty,” said Rus, a co-author of this and several other papers on liquid networks.
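Continuing the toy neuron from the sketch above: because the model is a differential equation, the gap between camera frames is just another argument, so nothing forces fixed intervals. The sampling schedule below is a made-up illustration of "sample more frequently when the road is twisty", not the paper's actual policy.

```python
# Denser timestamps where the scene changes quickly (e.g. in a curve);
# a conventional fixed-interval network has no equivalent knob.
timestamps = [0.00, 0.10, 0.20, 0.24, 0.27, 0.30]   # seconds, made up
inputs     = [0.1,  0.1,  0.8,  0.9,  1.0,  0.9]    # made-up camera feature values
x = 0.0
for i in range(1, len(timestamps)):
    dt = timestamps[i] - timestamps[i - 1]
    x = liquid_neuron_step(x, inputs[i], dt, w_tau=1.0, w_in=1.5, b=0.0, A=1.0)
```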
The model successfully kept the car on track, but it had one flaw, Lechner said: “It was really slow.” The problem stemmed from the nonlinear equations representing the synapses and neurons — equations that usually cannot be solved without repeated calculations on a computer, which goes through multiple iterations before eventually converging on a solution. This job is typically delegated to dedicated software packages called solvers, which would need to be applied separately to every synapse and neuron.
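A rough sketch of the bottleneck being described: once the unknown new state appears inside the nonlinearity (here via a recurrent term), an implicit update has no direct formula, so a solver has to iterate a guess until it converges, for every neuron and synapse at every step. The fixed-point scheme below is only an illustration, not what dedicated solver packages actually do internally.

```python
import numpy as np

def implicit_euler_step(x0, I, dt, w_tau, w_in, w_rec, b, A, tol=1e-9, max_iter=100):
    """Backward-Euler step of dx/dt = -(w_tau + f(x, I)) * x + f(x, I) * A,
    where f also depends on the state x. Because the unknown next state sits
    inside tanh, we cannot solve for it directly and instead iterate until
    the estimate stops changing."""
    x = x0
    for _ in range(max_iter):
        f = np.tanh(w_in * I + w_rec * x + b)              # nonlinearity of the unknown state
        x_next = (x0 + dt * f * A) / (1.0 + dt * (w_tau + f))
        if abs(x_next - x) < tol:                          # converged
            return x_next
        x = x_next
    return x
```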
In a paper last year, the team revealed a new liquid neural network that got around that bottleneck. This network relied on the same type of equations, but the key advance was a discovery by Hasani that these equations didn’t need to be solved through arduous computer calculations. Instead, the network could function using an almost exact, or “closed-form,” solution that could, in principle, be worked out with pencil and paper. Typically, these nonlinear equations do not have closed-form solutions, but Hasani hit upon an approximate solution that was good enough to use.
“Having a closed-form solution means you have an equation for which you can plug in the values for its parameters and do the basic math, and you get an answer,” Rus said. “You get an answer in a single shot,” rather than letting a computer grind away until deciding it’s close enough. That cuts computational time and energy, speeding up the process considerably.
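In the spirit of that description, here is what a one-shot update can look like for the toy neuron above: if the nonlinearity f is frozen at its current value, the remaining linear equation has an exact solution you simply evaluate, with no solver loop. The published closed-form approximation is more refined than this; the sketch only shows the "plug in values, get an answer in a single shot" idea.

```python
import numpy as np

def closed_form_step(x0, I, t, w_tau, w_in, b, A):
    """One-shot update: with f held fixed over the step,
        dx/dt = -(w_tau + f) * x + f * A
    is linear, and its solution can be written down and evaluated directly."""
    f = np.tanh(w_in * I + b)
    decay = w_tau + f
    x_inf = f * A / decay                       # steady state the neuron relaxes toward
    return x_inf + (x0 - x_inf) * np.exp(-decay * t)
```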
“Their method is beating the competition by several orders of magnitude without sacrificing accuracy,” said Sayan Mitra, a computer scientist at the University of Illinois, Urbana-Champaign.
As well as being speedier, Hasani said, their newest networks are also unusually stable, meaning the system can handle enormous inputs without going haywire. “The main contribution here is that stability and other nice properties are baked into these systems by their sheer structure,” said Sriram Sankaranarayanan, a computer scientist at the University of Colorado, Boulder. Liquid networks seem to operate in what he called “the sweet spot: They are complex enough to allow interesting things to happen, but not so complex as to lead to chaotic behavior.”
At the moment, the MIT group is testing their latest network on an autonomous aerial drone. Though the drone was trained to navigate in a forest, they’ve moved it to the urban environment of Cambridge to see how it handles novel conditions. Lechner called the preliminary results encouraging.
Beyond refining the current model, the team is also working to improve their network’s architecture. The next step, Lechner said, “is to figure out how many, or how few, neurons we actually need to perform a given task.” The group also wants to devise an optimal way of connecting neurons. Currently, every neuron links to every other neuron, but that’s not how it works in C. elegans, where synaptic connections are more selective. Through further studies of the roundworm’s wiring system, they hope to determine which neurons in their system should be coupled together.
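A tiny sketch of what "more selective wiring" could mean in code: start from all-to-all coupling and zero out most connections with a mask. The 20% density and the random choice of connections are arbitrary assumptions for illustration; the group's actual aim is to learn which connections matter from the roundworm's wiring.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 19                                   # the article's tiny driving network used 19 neurons
W_dense = rng.normal(size=(n, n))        # all-to-all coupling, as in the current model
mask = rng.random((n, n)) < 0.2          # hypothetical selective wiring, C. elegans-style
W_sparse = W_dense * mask                # only the retained connections carry signal
```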
Apart from applications like autonomous driving and flight, liquid networks seem well suited to the analysis of electric power grids, financial transactions, weather and other phenomena that fluctuate over time. In addition, Hasani said, the latest version of liquid networks can be used “to perform brain activity simulations at a scale not realizable before.”
Mitra is particularly intrigued by this possibility. “In a way, it’s kind of poetic, showing that this research may be coming full circle,” he said. “Neural networks are developing to the point that the very ideas we’ve drawn from nature may soon help us understand nature better.”