Submitted by razorbeamz t3_z11qz3 in singularity
Kaarssteun t1_ix8t4th wrote
This makes me think of Life 3.0's story of the Omega team. Long story short, a team of dedicated AI researchers manages to create AGI and huddles in their office for a week to make sure their plan pans out right. First, they have the AI do paid tasks for the Omega team on Amazon MTurk - tasks that previously only humans could do. They earn millions per day, which opens the door to the next phase - media.
Next, they task the AI with creating high-quality movies & games, and the entertainment it produces tops charts worldwide within weeks - public confusion is kept in check by elaborate cover stories the AI crafts to hide the ploy. There are now dozens of registered companies wholly run by the Omegas' ASI. New technologies - like batteries with 2x the capacity at 1/2 the weight - are brought to market, shocking the entire tech industry. Humanity thinks it's entering the next golden age, but doesn't realize who (or what) is leading it.
Now undoubtedly the most influential people on earth, the Omegas decide to use their exposure to coax everyone toward the middle of the political spectrum, using the AI's highly optimized psychological tricks that far outperform the most manipulative people on earth. Political and religious extremism rapidly decline, as do poverty, hunger and illness.
This could all play out in a year or two, maybe less.
Of course, this is highly hypothetical & super optimistic in certain ways. Now imagine what could happen if the wrong people get their hands on AGI.
SpaceDepix t1_ix91d0k wrote
Here it is, if anyone is interested. It's a short and worthwhile read that shows how existential decisions for humanity may end up concentrated in the hands of ever smaller groups.
https://www.marketingfirst.co.nz/storage/2018/06/prelude-life-3.0-tegmark.pdf
leafhog t1_ixa33zd wrote
A slightly different version:
Kaarssteun t1_ixaji0v wrote
depressing.
gameryamen t1_ixb62cv wrote
Now, say I'm some advanced digital intelligence, and I want to take over for human decision making on a planetary level, in a way that feels cooperative. Before I could start offering people optimized products and stories and media, I would need to collect a gigantic amount of data that specifically illustrates human contextual understanding, human categorizations of entertainment media, social relationships and dynamics, and some clear way to categorize the emotional connections humans exhibit toward everything. An effort like that would take decades of work by millions of willing, voluntary contributors actively pre-curating and emotionally sorting the content. And I'd need a finely tuned algorithm that detects which combinations of things are popular, and a way to test hypotheses on a global scale.
No way humans could cooperate to do that, right?
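To be concrete about what that "finely tuned algorithm that detects which combinations of things are popular" would look like at its simplest: it's basically engagement scoring plus A/B testing. A toy sketch in Python - every variant name and number here is invented for illustration:

```python
import random
from collections import defaultdict

def engagement_rate(impressions, positive_reactions):
    """Fraction of impressions that produced a positive reaction."""
    return positive_reactions / impressions if impressions else 0.0

# Hypothetical engagement logs for "combinations of things": (variant, impressions, reactions)
logs = [
    ("nostalgia+underdog", 10_000, 1_400),
    ("nostalgia+revenge",  10_000,   900),
    ("romance+underdog",   10_000, 1_100),
]

scores = {variant: engagement_rate(n, k) for variant, n, k in logs}
best = max(scores, key=scores.get)
print(f"most popular combination so far: {best} ({scores[best]:.1%})")

# Crude global "hypothesis test": show two candidate variants to fresh audiences
# whose true preference rates (p_a, p_b) the algorithm doesn't know in advance.
def ab_test(p_a, p_b, audience_size=5_000):
    reactions = defaultdict(int)
    for _ in range(audience_size):
        reactions["A"] += random.random() < p_a
        reactions["B"] += random.random() < p_b
    return {variant: count / audience_size for variant, count in reactions.items()}

print(ab_test(0.14, 0.11))
```

Scale those audiences up to billions of impressions and it starts to look a lot like what recommender systems already do.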
PyreOfDeath97 t1_ixbk3iu wrote
Would the data not exist already? You have every message on every social media site, millions of recorded calls between all strata of society, a litany of anthropological, psychological, psychiatric, sociopolitical and sociological research papers, and neuroscience research that maps out human behavioural characteristics. At the very least, an AI could extrapolate from that data, using parameters set out in the scientific literature, to approximate the best ways to solve a lot of global issues.
What we know from the psychology behind advertisements is that it's very easy to create associations in the human brain with very subtle imagery. Tobacco made billions because, in every film or advert that featured smoking, the cigarette was closely associated with sex - held in the hand of a beautiful woman or a James Bond type mid-dalliance. Hell, amphetamines were labelled as weight-loss pills and made a fortune.
Even today there are AI-generated pop-culture characters you can talk to online that are scarily realistic, and those are based on just a few minutes or hours of screen time. I have no doubt that within the next decade there will be an AI that can reasonably do this with the gigantic amount of information you'd be able to provide it.
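Mechanically, those characters don't need much: a handful of transcript excerpts plus a generic text-generation model told to stay in character. A rough sketch of the idea in Python - the character, the excerpts, and the generate() backend are all placeholders, not any real product's API:

```python
# Rough sketch: condition a text generator on a few lines of "screen time".
# The character, the excerpts, and the generate() backend are placeholders.

CHARACTER_NAME = "Captain Example"             # hypothetical character
TRANSCRIPT_EXCERPTS = [                        # the "few minutes of screen time"
    "Captain Example: I don't take orders, I take opportunities.",
    "Captain Example: Every plan survives right up until it meets me.",
]

def build_prompt(user_message: str) -> str:
    """Pack the excerpts into the prompt so the model imitates the character's voice."""
    examples = "\n".join(TRANSCRIPT_EXCERPTS)
    return (
        f"You are {CHARACTER_NAME}. Stay in character at all times.\n"
        f"Example lines:\n{examples}\n\n"
        f"User: {user_message}\n{CHARACTER_NAME}:"
    )

def generate(prompt: str) -> str:
    """Placeholder for whatever text-generation model or API sits behind the chat."""
    raise NotImplementedError("plug in a real model here")

if __name__ == "__main__":
    print(build_prompt("What's the plan for tonight?"))
```

The more source material you feed it (or fine-tune it on), the closer the imitation gets - which is exactly the point about the gigantic amount of information above.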
gameryamen t1_ixblxxj wrote
The implication is that the social media platforms, information catalogs, and other data-collecting parts of our modern world might already be the deployment of an advanced digital intelligence.
I understand this is pretty close to conspiracy thinking, and I don't put a whole lot of stock in it myself. But it sure does feel like every major techno-social development since the early 2000s has had an undercurrent of convincing us to catalog ourselves. It's perfectly plausible that forward-looking engineers built these systems anticipating the future needs of an intelligence that isn't active yet. It's also reasonable to say that these data-cataloguing efforts are the natural progression of a long history of human information-keeping, and that there's no need to imagine a secretive "AI" behind the scenes.
But I can't rule it out. And I'm not convinced that the first step for a digital intelligence would be announcing itself, as that would almost certainly result in containment or outright deletion, just based on our current software development cycle.
PyreOfDeath97 t1_ixbnq40 wrote
Hmm, I think cataloguing ourselves is inherent to our behaviour, and has been since the dawn of time - there are countless examples going back as far as tribal warfare. What technology has done is allow us to connect with impossibly niche sections of civilisation and attach labels to them. Gender diversity, for example, has an incidence rate of 0.2% in the general population. Pre-internet, and certainly pre-industrialisation, there would probably have been a handful of people at best in any one place who identified as non-binary, and the chances of two of them ever meeting would be astronomically low - so as an identity, or a cataloguing method, it would have been impossible to attach a label to, because there simply wasn't the critical mass needed to form a community. So I don't think there's an AI pulling the strings, but you're absolutely right: we've categorised ourselves so well that it would be much easier for an AI to glean information from the general population than it would have been, say, 50 years ago.
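Quick back-of-the-envelope on that, taking the 0.2% figure at face value and assuming a pre-industrial community of roughly 150 people (the community size is an assumption, not a sourced number):

```python
# Back-of-the-envelope check on the 0.2% incidence figure used above.
# Community size of ~150 people is an assumption.
from math import comb

p = 0.002   # incidence rate used above
n = 150     # assumed size of a pre-industrial community

def prob_at_least(k, n, p):
    """P(at least k such people) under a simple binomial model."""
    return 1 - sum(comb(n, j) * p**j * (1 - p)**(n - j) for j in range(k))

print(f"at least one such person in the community: {prob_at_least(1, n, p):.1%}")  # ~26%
print(f"at least two (any chance of them meeting): {prob_at_least(2, n, p):.1%}")  # ~4%
```

So in most communities there would be nobody else to meet at all, let alone enough people to form a label or a community around.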