PoliteThaiBeep
PoliteThaiBeep t1_ja7mjt6 wrote
It's very interesting and indeed huge progress, but arguably the most difficult part of storytelling is hiring actors.
Maybe it isn't for big studios, but it is for a myriad of small studios. So the real progress comes when you don't have to hire an actor. When you can use voice generators and sequential video generation - that's where the real shock is.
Because as soon as this becomes available to single storytellers without any staff, even at relatively low quality, it will transform the whole industry.
PoliteThaiBeep t1_j9acefq wrote
Reply to comment by tangent26_18 in What’s up with DeepMind? by BobbyWOWO
When the powerful call all the shots, wealth shifts dramatically toward the elite and away from the public, reducing quality of life and innovation.
It would also mean the friends and family of the powerful hold the keys to major industry sectors and companies and won't let anyone new in. So incumbents can never be overthrown by a new business (Blockbuster -> Netflix).
This is exactly what Russia is - Putin holds all the power, and whenever a new company comes up that does things in an innovative way, forcing incumbents out - like Yandex, VK, Tinkoff and many others - he'd either buy it out for cheap (Yandex) or, if that doesn't work, threaten the CEO, publicly defame him on state TV and force him out of the country, making him sell for pennies (VK, Tinkoff). All of these companies now belong to Putin's friends via one scheme or another.
And when you look at export data by country, you wonder how, despite such a massive stream of wealth from oil and gas, Russian people have the worst quality of life in Europe (tied with Ukraine and Belarus). Many countries have nothing and yet enjoy significantly better quality of life (Estonia, Singapore, etc.)
Basically if you look at a country where some guy/girl who was nobody was allowed to force a powerful corporation out through their innovation and ingenuity - that's a good sign that democracy is working there.
Of course it's not black and white - it's a spectrum. If we look at societies decades or hundreds of years ago, their best societies would look far worse than most today, and their worst societies would be far worse than North Korea today.
Still, it's obvious that more democracy means more progress, faster innovation, better quality of life and reduced power of the wealthy.
PoliteThaiBeep t1_j8u7809 wrote
If you want to get a good understanding of this whole thing read the following:
"Sapiens" and "Homo Deus" by Yuval Noah Harari
"Life 3.0" by Max Tegmark
"Human Compatible" by Stuart Russell
"Superintelligence" by Nick Bostrom
"A Thousand Brains" by Jeff Hawkins (with a foreword by Richard Dawkins)
And of course, on this subreddit you must at least glance at "The Singularity Is Near" by Ray Kurzweil
There's a bunch of optimists, pessimists and everything in between mixed in here for a good balanced perspective.
All of these are insanely smart people and deserve every bit of attention to what they are saying.
You can also get a short version of all of the above by reading Tim Urban's blog posts on waitbutwhy dot com about superintelligence ("The AI Revolution").
PoliteThaiBeep t1_j8fg0be wrote
Reply to Is society in shock right now? by Practical-Mix-4332
I actually think the opposite. It went from a fringe topic of a select few to everyone talking about it.
Like, when did you ever see the singularity mentioned on ML or technology subreddits? It was completely unthinkable just in 2015 - such a comment would just be downvoted or ignored. But today it's crept up so far that it's almost mainstream.
PoliteThaiBeep t1_j8dzg5j wrote
Reply to comment by helpskinissues in Bing Chat sending love messages and acting weird out of nowhere by BrownSimpKid
The word "singularity" in this subreddit refers to Ray Kurzweil's book "The Singularity Is Near". It practically assumes you've read at least that book before coming here - the whole premise rests on ever-increasing computational capabilities that will eventually lead to AGI and ASI.
If you didn't, why are you even here?
Did you read Bostrom? Stuart Russell? Max Tegmark? Yuval Noah Harari?
You just sound like me 15 years ago, when I didn't know any better and hadn't read enough, yet had more than enough technical expertise to be arrogant.
PoliteThaiBeep t1_j8dv9is wrote
Reply to comment by helpskinissues in Bing Chat sending love messages and acting weird out of nowhere by BrownSimpKid
>And more limited than ants. The vast majority of living beings is more capable than chatGPT.
Nick Bostrom estimated that simulating a functional human brain would require about 10^18 FLOPS.
Ants have about 300,000 times less - let's say 10^13 (really closer to 10^12) FLOPS.
ChatGPT inference reportedly generates a single word in about 350 ms on a single A100 GPU. That's assuming it could fit on a single GPU - it can't; you'd need about 5 GPUs.
But for the purposes of this discussion, we can imagine something like ChatGPT theoretically working, albeit slowly, on a single modified GPU with massive amounts of VRAM.
A single A100 is about 300 teraFLOPS, which is about 10^14 FLOPS. And it would be much slower than the actual ChatGPT we use via the cloud.
So no, I disagree that it's more limited than ants. It's definitely more complex, by at least one order of magnitude, at least in terms of brain-equivalent compute.
And we didn't even consider the training compute load here, which is orders of magnitude bigger than inference, so the real number is probably much higher.
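A back-of-the-envelope sketch of these estimates - the constants are just the rough figures quoted above, not measurements:

```python
# Rough figures quoted in this thread, not measurements.
BRAIN_FLOPS = 1e18                 # Bostrom's brain-simulation estimate
ANT_FLOPS = BRAIN_FLOPS / 300_000  # ants: ~300,000x less than a human brain
A100_FLOPS = 3e14                  # one A100 at ~300 teraFLOPS

print(f"ant brain:   ~{ANT_FLOPS:.0e} FLOPS")   # lands between 10^12 and 10^13
print(f"single A100: ~{A100_FLOPS:.0e} FLOPS")
print(f"A100 vs ant: ~{A100_FLOPS / ANT_FLOPS:.0f}x")
```

Even a single GPU comes out roughly two orders of magnitude above the ant estimate, which is the whole point of the comparison.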
PoliteThaiBeep t1_j6of5h9 wrote
Reply to comment by crua9 in A McDonald’s location has opened in White Settlement, TX, that is almost entirely automated. Since it opened in December 2022, public opinion is mixed. Many are excited but many others are concerned about the impact this could have on millions of low-wage service workers. by Callitaloss
The average number of people working for a fast food restaurant is 17-something (per shift)
Kitchen staff isn't automated yet. Also someone needs to be there to keep things civil to prevent vandalism and what not. Someone needs to clean.
So say 7 people prepare food, 3 people take out trash, keep the restaurant clean and so on, and 1 person is a manager. So we still need 11 out of 17.
Also keep in mind these productivity boosts have been happening all along - today's fast food workers are significantly more productive than those of 2005, and restaurants need fewer people per location than in 2005. Yet despite this, more of these workers are employed in the US today than in 2005.
That's not to say they won't be automated eventually, but let's not exaggerate - at this point it could stay in this experimental phase for years, just like Amazon Go did. They promised thousands of stores, and instead, 7 years later, we have only about 30 tiny stores that almost nobody uses.
PoliteThaiBeep t1_j5mkj69 wrote
After AGI, humans are in essence obsolete. I believe we will transcend the human form either way - whether through evolution and merging with ASI, or through extinction.
This makes me excited and always has - it's really hard for me to imagine finding it depressing.
It's like the ultimate dream - being vastly smarter, stronger, richer in everything including emotions and whatever we desire, along with countless other things we can't possibly understand yet.
Assuming getting to ASI doesn't lead to extinction, some humans will undoubtedly choose not to transcend. For people like that, I can imagine it being depressing - but otherwise I can't.
PoliteThaiBeep t1_j5dq131 wrote
Reply to comment by visarga in Google to relax AI safety rules to compete with OpenAI by Surur
>Nobody's going to wade through mountains of crap and ads to find a nugget of useful information anymore. Google actually has an interest to have moderately useful but not great results because the faster a user finds what they need, the fewer ad impressions they generate. This practice won't fly from now on.
If Google were just sitting on a poor search algorithm, somebody would have come along and overthrown them - DuckDuckGo or about a million others. But they weren't able to. Why? Because nobody has come up with a better search engine so far.
And now that it's obvious search engines are the past and LLMs are a much better way forward, Google is racing everyone toward them.
Everything else is just a wacky conspiracy theory without any substance - though invoking the magic words "evil corporation" does have an effect regardless of the matter being discussed.
Pathetic.
PoliteThaiBeep t1_j5c8jqm wrote
Reply to comment by stievstigma in UBI before riots, possible or a worthless pursuit? by nitebear
I think Andrew Yang called it that right?
But his solution was to introduce a VAT and use that to finance UBI, to avoid the "redistribution" stigma.
I think politically it probably makes sense, but personally I think taxing the passive income of the rich would be more beneficial for the economy if implemented.
Right now, poor people's major income source is work; investment income is a tiny portion for them. As you go up the brackets, investment income becomes more and more important, and for billionaires I think over 70% of income is investment income. I don't remember the exact numbers, but I remember the trend.
Despite this, investment income taxes are capped at 20% (for stock held over a year). This creates a natural runaway-wealth scenario where money just makes more money, with limited incentives to actually create things.
I'd say UBI should be financed from the top 1%'s investment income: raise those taxes to something like 30-50% for people with over $10 million in investments, don't touch regular income taxes, and don't introduce a VAT.
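To make the shape of the proposal concrete, here's a sketch comparing today's flat 20% long-term capital gains cap to a progressive schedule on investment income. The bracket thresholds and rates are made-up illustration numbers, not any real tax code:

```python
def flat_cap_gains(income: float, rate: float = 0.20) -> float:
    """Today's setup: long-term capital gains capped at a flat rate."""
    return income * rate

def progressive_cap_gains(income: float) -> float:
    """Hypothetical schedule: 20% up to $10M, then 30-50% marginal above."""
    brackets = [(10_000_000, 0.20), (50_000_000, 0.30), (float("inf"), 0.50)]
    tax, lower = 0.0, 0.0
    for upper, rate in brackets:
        if income > lower:
            tax += (min(income, upper) - lower) * rate
            lower = upper
    return tax

# A billionaire realizing $100M of investment income:
print(f"flat 20% cap:  {flat_cap_gains(100e6) / 1e6:.0f}M")        # -> 20M
print(f"progressive:   {progressive_cap_gains(100e6) / 1e6:.0f}M")  # -> 39M
```

Under this made-up schedule, someone with $5M of gains still pays the same 20%, so the change only bites at the very top.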
PoliteThaiBeep t1_j52n424 wrote
Reply to comment by PoliteThaiBeep in UBI before riots, possible or a worthless pursuit? by nitebear
Also, I think the most important part of taxing the rich is progressively taxing passive income from the stock market. Maybe we don't even need to raise income taxes - maybe we just need to modify passive income taxes.
It does two things really well:
- Incentivizes the rich to create things instead of just sitting on their money
- Provides a very predictable flow of money to UBI
PoliteThaiBeep t1_j52kvtx wrote
Reply to comment by Fuzzers in UBI before riots, possible or a worthless pursuit? by nitebear
I think public opinion sort of bounces back and forth around issues trying to find "balance".
In the 70s there was a public push to reduce taxes, which over time resulted in a massive reduction in effective taxes for the rich, and inequality has been exploding ever since.
A bit of breathing room came in the 90s, which I think calmed the public. A lot of good things happened.
At some point, hopefully not too late, it'll again reach the point where a public push for higher taxes on the rich makes sense even to part of the Republican constituency.
I actually think we're very close - we already had what were basically UBI payments across the whole country during the pandemic, and nobody batted an eye, even though just a few years earlier it would have seemed impossible.
There just needs to be some kind of public catalyst to bring effective tax levels back to near-1970s levels and start UBI at the same time. It doesn't even need to be large or be called "UBI" - it could be a bunch of different things that together act as a UBI, without our calling it that.
PoliteThaiBeep t1_j5030gs wrote
Reply to comment by gay_manta_ray in OpenAI's CEO Sam Altman won't tell you when they reach AGI, and they're closer than he wants to let on: A procrastinator's deep dive by Magicdinmyasshole
Uh, solar farms are already below 2 cents/kWh on 20-year contracts in some places like Chile, and wind is below 4 cents/kWh right here in the US - I think it was an Arizona contract.
And they were over 15 cents/kWh as recently as 2014, which shows how rapidly economies of scale are kicking in.
For comparison, brand-new coal and gas plants are relatively flat at around 6 cents/kWh for a modern high-tech plant.
Merely maintaining existing coal/gas plants still makes a little sense, but we're very close to the point where it doesn't - where building brand-new solar and wind farms is cheaper than maintaining an already-running coal/gas plant.
That's where we are. It'll almost completely bottom out in about a decade.
And solar requires almost zero maintenance; wind does require it, but its price shows how little that costs.
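The quoted contract prices side by side - these are just the figures cited in this comment, not authoritative LCOE data:

```python
# Contract prices quoted in this comment (cents/kWh); illustration only.
cents_per_kwh = {
    "solar, recent contracts (Chile)": 2,
    "wind, US contract": 4,
    "new coal/gas plant": 6,
    "solar contracts in 2014": 15,
}

cheapest_new_renewable = min(cents_per_kwh["solar, recent contracts (Chile)"],
                             cents_per_kwh["wind, US contract"])
# New-build renewables already undercut new fossil plants on these numbers:
assert cheapest_new_renewable < cents_per_kwh["new coal/gas plant"]

drop = cents_per_kwh["solar contracts in 2014"] / cents_per_kwh["solar, recent contracts (Chile)"]
print(f"solar contract prices fell ~{drop:.1f}x since 2014")  # -> ~7.5x
```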
PoliteThaiBeep OP t1_j4smayv wrote
Reply to comment by No_Ninja3309_NoNoYes in Singular AGI? Multiple AGI's? billions AGI's? by PoliteThaiBeep
>So if the trend continues of the rich getting richer and the poor getting poorer,
That's very US-centric. Worldwide, extreme poverty has fallen dramatically, and raising people out of poverty is still an ongoing process.
In the US, yeah... since 1972 productivity has gone way up, yet wages have stagnated.
PoliteThaiBeep OP t1_j4r2sjc wrote
Reply to comment by PoliteThaiBeep in Singular AGI? Multiple AGI's? billions AGI's? by PoliteThaiBeep
Actually I think I came up with a response myself:
If we're going to get close to very capable levels of intelligence with current ML models, that means they'll be extremely computationally expensive to train but multiple orders of magnitude cheaper to run.
So if these technological principles remain similar, there will be a significant time gap between AI generations - which could in principle allow competition.
Maybe we also overestimate the rate of growth of intelligence. Maybe it'll grow with significant diminishing returns, so a rogue AI that is technically superintelligent compared to any given human might not be superintelligent enough to counter the whole of humanity AND benevolent lesser AIs combined.
Which IMO creates a more complex and more interesting version of the post-AGI world.
PoliteThaiBeep OP t1_j4qwiat wrote
Reply to comment by ShadowRazz in Singular AGI? Multiple AGI's? billions AGI's? by PoliteThaiBeep
I'd like that, but what is your reasoning against intelligence explosion theory?
Like, say someone comes up with a system that is smart enough to recursively improve itself faster than humans can. Say some AI lab testing a new approach comes up with a system that can improve itself, and this cascades into a very rapid sequence that improves intelligence beyond our wildest imagination, faster than humans are able to react.
Nick Bostrom described something like that IIRC.
What do you counter it with?
PoliteThaiBeep OP t1_j4qu7m5 wrote
Reply to comment by OldWorldRevival in Singular AGI? Multiple AGI's? billions AGI's? by PoliteThaiBeep
What you describe is #1 - a singular AGI, without any caveats.
Which means you probably subscribe to intelligence explosion theory - otherwise it's very difficult to imagine a single entity dominating.
PoliteThaiBeep t1_j4gmsqe wrote
Reply to comment by Lawjarp2 in This is something that we should keep in mind. by [deleted]
You can buy land incredibly cheap - I don't get it. If anything, it's undervalued given the fundamental limits on land on Earth.
Building a 2000 sq ft house with modern technology is ~$400,000, give or take. Land for a lot to build on goes from almost nothing to $500,000 or more in metro areas.
But nothing stops you from building a $400k house on a $20k lot two hours from the metro area. It would actually be cheaper to buy an existing house there for $350k, land included. Yeah, it'll be older and not as energy-efficient, but it's a house.
Or you can buy literally acres of endless land in the desert for almost nothing.
In fact, there's an artificial mechanism for lowering the value of land and houses - property taxes: 1% in California, 2% in Texas, IIRC.
PoliteThaiBeep t1_j4gd1ho wrote
Reply to Does anyone else get the feeling that, once true AGI is achieved, most people will act like it was the unsurprising and inevitable outcome that they expected? by oddlyspecificnumber7
Before ChatGPT I expected AGI to arrive between 2025 and 2040.
Now I still expect AGI to arrive anywhere from 2025 to 2040.
But the percentages have changed. In 2017, I thought there was maybe a 1% chance AGI would arrive by 2025. Today it feels like 10-20%.
Also, before, I thought AGI would arrive by 2040 with around 50% probability. Today it's more like 90%.
PoliteThaiBeep t1_j41z018 wrote
Reply to Things like ChatGPT being used in some future games for dynamic and realistic NPC engagement by crua9
It's very practical for 3D modelling with Blender. When I want vertices arranged in a certain way but there isn't a button for it, I just describe to ChatGPT what I want, and it usually comes up with a useful Python script that does exactly what I want after a few corrections - much faster than writing the script yourself or manually adjusting hundreds of vertices by typing coordinates one by one.
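For a flavor of what that looks like, here's a minimal sketch of the kind of script it produces - plain Python that computes evenly spaced positions for N vertices on a circle. (In Blender you'd assign these to mesh vertex coordinates via bpy; that part is omitted so the example runs anywhere.)

```python
import math

def circle_vertices(n: int, radius: float = 1.0, z: float = 0.0):
    """Return n (x, y, z) coordinates evenly spaced on a circle in the XY plane."""
    step = 2 * math.pi / n
    return [(radius * math.cos(i * step), radius * math.sin(i * step), z)
            for i in range(n)]

# e.g. arrange 8 vertices on a circle of radius 2:
verts = circle_vertices(8, radius=2.0)
print(verts[0])  # -> (2.0, 0.0, 0.0)
```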
It's incredibly useful here and I just scratched the surface.
The way it can optimize interfaces is mind blowing!
But I didn't have much success with higher-level tasks, like how a game should be structured, or anything at macro scale. I think a novel human idea is of a different quality here, and GPT can't really compete.
PoliteThaiBeep t1_j2xv6x6 wrote
Reply to comment by ReignOfKaos in Your favorite series about AI? by DJswipeleft
Yeah, the first one was amazing - though not really from a transhumanist perspective; it was just an amazing show from the perspective of a moviegoer who appreciates fine art.
But the second season was shockingly bad. I don't know any other show where I'd rate the first season 9/10 and the second season 2/10. Sometimes quality goes downhill bit by bit, but I've never seen such a dramatic shift from masterpiece to garbage.
The funny thing is, I actually like Anthony Mackie a lot as an actor. He's been awesome in many, many movies I've watched him in since The Hurt Locker. But his performance here was completely out of whack - I think it's probably not his fault but rather a change in how the show was directed.
PoliteThaiBeep t1_j2xtuyz wrote
Reply to Your favorite series about AI? by DJswipeleft
Not a series, but Transcendence was probably the best movie with a less Hollywoody and more sciency take on ASI and the singularity.
As for series, I've not seen anything better than Black Mirror yet.
PoliteThaiBeep t1_j29klik wrote
Reply to comment by Desperate_Food7354 in How are we feeling about a possible UBI? by theshadowturtle
AGI is decades away (hopefully), but we'll need UBI now to account for the rapid automation and insane productivity boosts humanity will go through well before it reaches AGI.
After AGI all bets are off. We can't even remotely imagine what it'll be like after.
PoliteThaiBeep t1_j1gowpp wrote
Reply to comment by sticky_symbols in Hype bubble by fortunum
You know, I once read a 1967 sci-fi book by a Ukrainian author in which they invent a machine that can copy, create and alter human beings - with a LOT of discussion of what it could mean for humanity, as well as the threat of a super-AI.
In a few chapters where people were discussing events, one of them went on and on about how computers would rapidly overtake human intelligence and what would happen then.
I found it... Interesting.
A lot of the talks I've had with tech people since around 2015 have been remarkably similar - and the similarity to the conversations people were having in the 1960s is striking.
The same points - "it's not a question of if, it's a question of when" - the same arguments, the same exponential talk, etc.
And I'm with you on that... but a lot of us also pretend to, or think we, understand more than we possibly could.
We don't really know when an intelligence explosion will happen.
People in the 1960s thought it would happen when computers could do arithmetic a million times faster than humans.
We seem to hang on to raw FLOPS of compute power, compare it to the human brain - and voila! - if it's higher, we've got super-AI.
We long ago passed 10^16 FLOPS in our supercomputers, and yet we're still nowhere near human-level AI.
Memory bandwidth somehow slipped out of Kurzweil's books.
Maybe ASI will happen tomorrow. Or 10 years from now. Or 20 years from now. Or maybe it'll never happen and we'll just sort of merge with it as we go, without any rigid defining event.
My point is - we don't really know. The FLOPS progression was a good guess, but it failed spectacularly. We have computers capable of over 10^18 FLOPS, and we're still 2-3 orders of magnitude behind the human brain when trying to emulate it.
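In numbers, the gap described here looks like this - the brain-emulation figure of 10^21 FLOPS is an assumption, simply placed 3 orders of magnitude above today's exascale machines:

```python
supercomputer_flops = 1e18     # exascale systems that already exist
brain_emulation_flops = 1e21   # assumed: ~3 orders of magnitude beyond that

gap = brain_emulation_flops / supercomputer_flops
print(f"shortfall: ~{gap:.0f}x")  # raw FLOPS alone still leave a ~1000x gap
```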
PoliteThaiBeep t1_jdsn19h wrote
Reply to How are you viewing the prospect of retirement in the age of AI? by Veleric
Look, ultimately more productivity is better for humanity - basically, the pie gets exponentially bigger.
For basically all of human history, almost everyone you asked would say "things were better in the good old days" - people lived better lives, ate better food, etc. It was never true.
Certain data can swing wildly in particular years - like the crime rate - but if you zoom out, almost everything is better than it was 20 years earlier, for almost any time period and almost any country.
Now, having said that: since the 70s we've become massively more productive, but quality-of-life gains were much less pronounced in the US, and most of the expanded pie went to the rich and ultra-rich, with only bits and pieces for everyone else. (Though who the "rich" are has also changed.) And the world as a whole is massively better off today - incomparably better than 20 years ago.
The US is just at the peak of it. If you think about it, global inequality between countries has shrunk: people in poor countries started earning significantly more, while people in rich countries earn slightly more.
I'd say it's fine. It should be like that.
So if, say, 10 years from now humanity is 100% more productive, then the way market forces work, it's just very unlikely the whole of humanity will suddenly live worse despite a massively bigger "pie". I don't see a realistic possibility of it.
It can of course happen in dictatorships like Russia - but look, it has already happened: Russia has the worst quality of life in Europe, despite massive fossil fuel revenues that dwarf what regular folks make countrywide. If you gave every Russian their share of fossil fuel revenue, it would be more money per person than the minimum wage. Of course, none of it goes to regular folks - it all goes to Putin's friends, who use it to buy palaces and yachts for billions upon billions.
And yet even in this nightmarish scenario - despite ever-increasing inequality, an ever-growing number of ultra-rich Putin friends on the Forbes list, and massive amounts of property bought all over the world with money stolen from the Russian people -
despite all of that, the quality of life in Russia did not go down. (Surprise!) Yes, its growth was slower than any democracy's, but still, it's not worse.
So I'd say at worst it'll stay the same, even if some horrible dictator comes to power in the US and worldwide.
At best, we'll come up with a way to reduce inequality, in which case our quality of life might increase even more than productivity does.
We are just innately pessimistic - which is a great survival strategy, but terrible for understanding how the world works.