DukkyDrake t1_jduwklh wrote
Reply to AI being run locally got me thinking, if an event happened that would knock out the internet, we'd still have the internet's wealth of knowledge in our access. by Anjz
You would be better off with a copy of the source training text dataset.
DukkyDrake t1_jd0k8x1 wrote
Reply to comment by EvilSporkOfDeath in How long till until humanoid bots in supermarkets? by JosceOfGloucester
It doesn't, but people are stuck on their sci-fi expectations.
DukkyDrake t1_jcript2 wrote
Reply to comment by Nukemouse in An Appeal to AI Superintelligence: Reasons to Preserve Humanity by maxtility
>The cost to keeping humans around in a state no worse than we currently live in is likely tiny compared to the vast array of resources at your disposal. We hope the arguments presented in this letter will convince you to pay that cost and thereby allow the survival of your creators.
?
DukkyDrake t1_ja9w2y8 wrote
Reply to Brace for the enshitification of AI by Martholomeow
> but inevitably switch to giving us what they want to give us in order to make money.
Do you really think businesses exist to give you free stuff? There is no switch; that has been the business model from day one. They just lack the resources to do fancy things when starting out.
DukkyDrake t1_ja7iq37 wrote
Reply to Some companies are already replacing workers with ChatGPT, despite warnings it shouldn’t be relied on for ‘anything important’ by Gold-and-Glory
Yes, but those use cases either don't matter or aren't unattended. A lot of tasks aren't very important and are amenable to high error rates under human oversight. I don't think the AI architecture that will cause the expected mass technological unemployment currently exists.
I don't expect the good case in the near term. Perhaps after 2050, once attrition has claimed the bulk of the '60s generation; until then, they will be the driving force against the good case.
>The Economics of Automation: What Does Our Machine Future Look Like?
DukkyDrake t1_ja67ct2 wrote
Reply to comment by IluvBsissa in Sam Altmans, Moores law on everything - housing by Pug124635
Intel & co haven't exactly been delivering the goods recently, so not high confidence.
DukkyDrake t1_ja3wj9x wrote
Reply to comment by Pug124635 in Sam Altmans, Moores law on everything - housing by Pug124635
I think we won't really know the timing until after ~2027 (Intel's target), when zettascale compute starts to come into economic reach. If you tried to build that with today's tech, you'd need around 20 nuclear power plants to power it.
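A rough back-of-envelope on that power figure (my own numbers, not from the comment): assuming ~50 GFLOPS per watt, roughly the efficiency of today's best supercomputers, and ~1 GW of output per nuclear plant:

```python
# Back-of-envelope: power needed for zettascale compute at today's efficiency.
# Assumed figures: ~50 GFLOPS/W (roughly the efficiency of current top
# supercomputers) and ~1 GW of electrical output per nuclear plant.
ZETTA_FLOPS = 1e21              # 1 zettaFLOPS target
EFFICIENCY_FLOPS_PER_W = 50e9   # 50 GFLOPS per watt
PLANT_OUTPUT_W = 1e9            # 1 GW per plant

power_w = ZETTA_FLOPS / EFFICIENCY_FLOPS_PER_W   # total watts required
plants = power_w / PLANT_OUTPUT_W

print(f"{power_w / 1e9:.0f} GW, or about {plants:.0f} nuclear plants")
# prints: 20 GW, or about 20 nuclear plants
```

The exact plant count shifts with whichever efficiency figure you plug in, but the order of magnitude matches the claim.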
The possible futures are wide open.
The Economics of Automation: What Does Our Machine Future Look Like?
DukkyDrake t1_ja3e3op wrote
Reply to comment by Pug124635 in Sam Altmans, Moores law on everything - housing by Pug124635
Given abundant robot labor, I expect the largest component of costs will be the cost of energy and raw materials.
That could mean much cheaper goods and services in general, which is an important part of his $1,200/month UBI idea. While you can count on the labor part being cheap, it's not a given that raw materials will be much cheaper. There will be massive demand for materials as people spread out from cities, even with a massive increase in raw-materials production from cheap robot labor. The labor part of installing utilities may not be a problem, but certain equipment you're just not going to make on-site. The most expensive things will still be those made in other people's factories. You can't hope to be 100% self-sufficient and live a technologically modern lifestyle, even with robot labor.
DukkyDrake t1_ja39rli wrote
>How are you suppose to just get concrete, mdf board and wood etc mined and refined cheaply on site? 90% of sites are fields?
Although the chemical synthesis of concrete and wood is possible, I'm guessing he's probably referring to alternative synthetic materials superior to traditional building materials. That's the direction you would want to go if energy and labor were super cheap.
Exactly how things are done now isn't the only way to do them. There are much better materials possible via materials & chemical engineering, but they're thoroughly uneconomical due to energy/labor costs.
I don't agree with his assessment, unless he's talking about the scale of land ownership he has in the sticks.
DukkyDrake t1_ja2to9m wrote
Reply to comment by AsheyDS in Have We Doomed Ourselves to a Robot Revolution? by UnionPacifik
Aren't you assuming the contrary state as the default for every one of the points the OP didn't offer an explanation for?
i.e.: "Yet you've offered no explanation as to why it would choose to manipulate or kill": are you assuming it wouldn't do that? Did you consider there could be other pathways that lead to that result which don't involve "wanting to manipulate or kill"? It could accidentally "manipulate or kill" while efficiently accomplishing some mundane task it was instructed to do.
Some people think the failure mode is it wanting to kill for fun or to further its own goals, while the experts worry about it incidentally killing all humans while out on some human-directed errand.
DukkyDrake t1_ja1ettj wrote
Reply to comment by AsheyDS in Have We Doomed Ourselves to a Robot Revolution? by UnionPacifik
> Yet you've offered no explanation as to why it would choose to manipulate or kill, or why it would have its own motives and why they would be to harm us.
Aren't you making your own preferred assumptions?
DukkyDrake t1_ja0trpv wrote
Reply to The 2030s are going to be wild by UnionPacifik
>But with AI it’s just a matter of modeling the problem and determining the desired end state and letting the machine work out how we get from here to there in the most efficient way possible and problem solved.
The most efficient way possible from some start to some finish might be extremely undesirable.
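A toy sketch of that point (entirely my own illustration, not from the post): a planner asked only for the shortest path happily routes straight through cells the designer implicitly wanted avoided, because the objective never mentioned them.

```python
from collections import deque

def shortest_path(start, goal, blocked, size=5):
    """Plain BFS: finds 'the most efficient way possible' and nothing else."""
    queue, seen = deque([(start, [start])]), {start}
    while queue:
        (r, c), path = queue.popleft()
        if (r, c) == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (r + dr, c + dc)
            if (0 <= nxt[0] < size and 0 <= nxt[1] < size
                    and nxt not in blocked and nxt not in seen):
                seen.add(nxt)
                queue.append((nxt, path + [nxt]))
    return None

# The designer privately wants row 2 avoided, but never put it in the spec.
hazard = {(2, c) for c in range(5)}

path = shortest_path((0, 0), (4, 4), blocked=set())
print(any(cell in hazard for cell in path))  # True: the optimum tramples the unstated constraint

# Encoding the constraint reveals the goal itself was under-specified:
print(shortest_path((0, 0), (4, 4), blocked=hazard))  # None: no compliant path exists
```

The optimizer isn't malicious; the undesirable route is simply what "most efficient" means once the real constraints are left out of the spec.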
DukkyDrake t1_j9w96tv wrote
>We’ve gone from horse and buggy to space stations in 100 years.
>What do people not understand about exponential growth?
None of that has anything to do with whether the current batch of AI tools is fit for a particular purpose, or with if/when those tools will be made sufficiently reliable for unattended operation in the real world.
Some people fail to understand that just because you can imagine something in your mind, it does not follow that others can engineer a working sample within our personal time horizon, or ever.
DukkyDrake t1_j9rq1g6 wrote
Reply to comment by sticky_symbols in New agi poll says there is 50% chance of it happening by 2059. Thoughts? by possiblybaldman
The thing they're predicting has nothing to do with anything related to GPT.
DukkyDrake t1_j9mgudi wrote
Reply to What. The. ***k. [less than 1B parameter model outperforms GPT 3.5 in science multiple choice questions] by Destiny_Knight
This will usually be the case. A tool optimized and fit for a particular purpose will usually outperform a general-purpose one.
DukkyDrake t1_j98doqz wrote
Reply to What’s up with DeepMind? by BobbyWOWO
We're transitioning to the monetization phase of the journey. This is where we start building out AI services for anything and everything under the sun that can generate enough revenue. By the turn of the decade, after all these distributed AI services have permeated human society, they will collectively be viewed as an AGI.
DukkyDrake t1_j984g11 wrote
Reply to comment by PeedLearning in What’s up with DeepMind? by BobbyWOWO
A statement from extreme ignorance of reality.
DukkyDrake t1_j8pdsc0 wrote
You might be mixing certain concepts.
https://en.wikipedia.org/wiki/Technological_singularity
No one is really working towards achieving the singularity, but it may come about as a consequence of the pursuits of individual and societal scale goals.
I think you might just be worried about technological unemployment; you don't need a singularity event for that. Technological unemployment might be dystopian depending on your local society's cultural values.
DukkyDrake t1_j8hscfe wrote
Reply to Is society in shock right now? by Practical-Mix-4332
>society has been in shock or denial about the future and its implications for civilization.
There is no shock unless the productization of current ML progress is directly impacting their lives. Future possibilities don't impinge on people's lives; they still have to get up every morning and go to work to pay their bills.
DukkyDrake t1_j8fvyr5 wrote
Reply to comment by FusionRocketsPlease in Altman vs. Yudkowsky outlook by kdun19ham
A lot of people do make that assumption, but a non-agent AGI doesn't necessarily mean you avoid all of the dangers. Even the CAIS model of AGI doesn't negate all alignment concerns, and I think this is the safest approach and is mostly in hand.
Here are some more informed comments regarding alignment concerns and CAIS, which is what I think we'll end up with by default at the turn of the decade.
DukkyDrake t1_j8fu9jc wrote
Reply to Altman vs. Yudkowsky outlook by kdun19ham
You would see the stark difference if you understood what alignment really refers to.
Altman is a VC; he is in the business of building businesses. He is simply hoping for the best, expecting they'll fix the dangers along the way. That is what you need to do to make money.
Yudkowsky only cares about fixing or avoiding the dangers; he makes no allowances for the best interests of the balance sheet. He likely believes the failure modes in advanced AI aren't fixable.
Who here would stop trying to develop AGI and forgo trillions of dollars just because there is a chance an AGI agent would exterminate the human race? The core value of most cultures is essentially "get rich or die trying".
DukkyDrake t1_j8ag2f9 wrote
Reply to comment by TopicRepulsive7936 in Recursive self-improvement (intelligence explosion) cannot be far away by Kaarssteun
If the entire loop isn't fully automated, the parts that still depend on humans will bottleneck each recursive cycle. That means progress will be accelerated over pure human R&D, but there will be no runaway acceleration.
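Amdahl's law makes that bottleneck concrete (a standard formula, applied here by me): if a fraction p of the R&D loop is automated and sped up by a factor s, each cycle speeds up by only 1 / ((1 - p) + p / s), which is capped at 1 / (1 - p) no matter how large s gets.

```python
def cycle_speedup(p, s):
    """Amdahl's law: overall speedup when fraction p of the loop runs s times faster."""
    return 1.0 / ((1.0 - p) + p / s)

# Even with the automated 90% of the loop running 1000x faster,
# the human-dependent 10% caps each cycle at under a 10x speedup.
print(round(cycle_speedup(0.90, 1000), 2))   # 9.91
print(round(cycle_speedup(0.99, 1e9), 2))    # 100.0: still bounded by the human share
```

The p and s values are illustrative; the point is that the serial human fraction, not the automated part, sets the ceiling on each recursive cycle.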
DukkyDrake t1_j897fzr wrote
Reply to The naivety of arguments on both sides of the AGI debate is quite frustrating to look at by Particular_Number_68
>it is surely a huge step towards the path to AGI
No such thing is assured, unless perhaps you're referring to a compositional AGI system. Everything stands on its own merits. Don't discount the possibility that you're subject to the same bias blind spots as those you accuse.
DukkyDrake t1_j7hzdl6 wrote
Reply to The Simulation Problem: from The Culture by Wroisu
Those running sufficiently capable language models could be culpable for committing mindcrimes.
DukkyDrake t1_jdxle2i wrote
Reply to The goalposts for "I'll believe it's real AI when..." have moved to "literally duplicate Einstein" by Yuli-Ban
Whatever you want to call it, just keep an eye out for when it can work reliably in the real world without supervision. That’s where most of the value in our world lies. Until then, it will take a lot of old-fashioned engineering to use these tools to make more useful products and services.