monsieurpooh
monsieurpooh t1_jefdhy5 wrote
Reply to comment by TFenrir in When will AI actually start taking jobs? by Weeb_Geek_7779
It seems pretty niche. Like you mentioned, there are only 2 openings at your company. The company I work at is pretty huge, and I don't think that kind of job makes up even 1/10,000 of our positions, if it exists at all, because all these tasks are distributed across other people on a rotation, or are just part of another job.
monsieurpooh t1_jefc9ai wrote
Reply to comment by TFenrir in When will AI actually start taking jobs? by Weeb_Geek_7779
Ok, but how common is it really to have a job position where "the entire job is to make PowerPoint slides"? Seems pretty niche, even for large tech companies.
monsieurpooh t1_je95k4w wrote
Reply to comment by Mrkvitko in The Only Way to Deal With the Threat From AI? Shut It Down by GorgeousMoron
You don't need ASI for an AI extinction scenario. Skynet from Terminator could probably be reenacted by something that's not quite AGI, combined with a few bad humans.
monsieurpooh t1_je955oy wrote
Reply to comment by GorgeousMoron in My case against the “Pause Giant AI Experiments” open letter by Beepboopbop8
Are there any open-source instruct-style models that perform similarly to ChatGPT? Which ones have you been using?
monsieurpooh t1_ja80m6i wrote
Reply to comment by FoxlyKei in Singularity claims its first victim: the anime industry by Ok_Sea_6214
Yes, any job which one would describe as "grueling" falls in the category of jobs that people only do because they're paid. These should always be phased out because it's a net gain for everyone as long as there's UBI.
The jobs that we should be more worried about are the ones you listed such as musical composition and directing. These are jobs that people genuinely enjoy and would enjoy even if they weren't paid. Automating these is always a double-edged sword because while there's a productivity gain, there's also a "meaning of life" loss.
The objective metric of unemployment is the unemployment rate, which is still low. We don't need UBI until that becomes very high.
Edit: Actually it also depends on wages. Due to wage stagnation I guess you could make the case we need UBI already.
monsieurpooh t1_j9q9xsl wrote
Reply to comment by Nanaki_TV in Is ASI An Inevitability Or A Potential Impossibility? by AnakinRagnarsson66
That is an interesting idea, even without the evil AI villain. There was an episode in that Electric Sheep TV show that explored this; I think it was the first or second episode. Of course, Black Mirror also had brilliant ideas about VR, but I think this one explored that idea even better than Black Mirror did.
monsieurpooh t1_j9pcoov wrote
Reply to comment by AwesomeDragon97 in Is ASI An Inevitability Or A Potential Impossibility? by AnakinRagnarsson66
Can you explain why?
To be clear I'm talking about actual perfect VR like the Matrix with all 5 senses, not the crap that passes as "VR" today where parkour is impossible, swordfighting is terribly unrealistic because your enemies are required to be ragdolls, and don't even get me started on Judo/wrestling.
A true direct-to-brain VR will be indistinguishable from the real world and, if the user wants, better than the real world in every way. There are 1-2 legit reasons why you would still want to use the real world, but just wanted to make sure your reason wasn't that the real world is more sensory-rich or "feels more real", which won't be the case with advanced technology.
monsieurpooh t1_j9ntc8p wrote
Reply to comment by Several-Car9860 in Is ASI An Inevitability Or A Potential Impossibility? by AnakinRagnarsson66
99% of these sci-fi fantasies are kind of obsoleted by a perfect VR that can immerse you in any world far more interesting than real-life interstellar exploration. It's also one of the proposed solutions to the Fermi paradox!
monsieurpooh t1_j9ni0aa wrote
Reply to comment by duboispourlhiver in What. The. ***k. [less than 1B parameter model outperforms GPT 3.5 in science multiple choice questions] by Destiny_Knight
I'm curious how the authors made sure to prevent overfitting. There's always the risk that they did overfit, which is why there are AI competitions that completely withhold questions from the public until the test is run. Curious to see its performance in those.
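The withheld-question idea is basically a held-out evaluation set. A minimal sketch, with a hypothetical `split_holdout` helper (not how any particular competition actually implements it): the evaluation questions are carved off up front and never touch training, so a high score on them can't come from memorization.

```python
import random

def split_holdout(questions, holdout_frac=0.2, seed=0):
    """Withhold a fraction of questions from training entirely,
    mimicking competitions that keep the test set private."""
    rng = random.Random(seed)
    shuffled = questions[:]          # don't mutate the caller's list
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * (1 - holdout_frac))
    return shuffled[:cut], shuffled[cut:]   # (train, hidden_eval)

train, hidden = split_holdout(list(range(100)))
```

The crucial property is that `train` and `hidden` are disjoint; any benchmark where the test questions were public before training loses that guarantee.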
monsieurpooh t1_j9nhlcf wrote
Reply to comment by ihrvatska in What. The. ***k. [less than 1B parameter model outperforms GPT 3.5 in science multiple choice questions] by Destiny_Knight
And one of AI's specialties is building better AIs.
monsieurpooh t1_j9nh885 wrote
Reply to comment by ninjasaid13 in What. The. ***k. [less than 1B parameter model outperforms GPT 3.5 in science multiple choice questions] by Destiny_Knight
Anyone who's a staunch opponent of the idea of philosophical zombies (about which I am more or less impartial) could very well be open to the idea that ChatGPT is empathetic. If prompted well enough, it can mimic an empathetic person with great realism. And as long as you don't let it forget the previous conversations it's had, nor exceed its memory window, it will stay in character and remember past events.
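The memory-window caveat can be sketched in a few lines. This is a hypothetical helper using word count as a crude stand-in for real tokenization: once the conversation exceeds the budget, the oldest turns silently fall out of the model's view, which is exactly when it stops "remembering" past events.

```python
def trim_history(messages, max_tokens=4096):
    """Keep the most recent messages that fit the context window.
    Word count is a crude stand-in for real tokenization."""
    kept, used = [], 0
    for msg in reversed(messages):       # walk newest-first
        cost = len(msg.split())
        if used + cost > max_tokens:
            break                        # older turns fall out of "memory"
        kept.append(msg)
        used += cost
    return list(reversed(kept))          # restore chronological order

history = ["hello there", "hi how are you", "tell me a story"]
trim_history(history, max_tokens=7)      # → ["tell me a story"]
```

With a tiny budget of 7 "tokens", only the newest message survives; everything the model appears to forget is simply whatever got trimmed.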
monsieurpooh t1_j8ib2fg wrote
Reply to comment by wren42 in Bing Chat sending love messages and acting weird out of nowhere by BrownSimpKid
I agree with that yes
monsieurpooh t1_j8gty61 wrote
Reply to comment by wren42 in Bing Chat sending love messages and acting weird out of nowhere by BrownSimpKid
It is not just a "weighted probability map" like a Markov chain. A probability map is the output of each turn, not the entirety of the model. Every token is determined by a gigantic deep neural net passing information through billions of parameters across many layers, and such networks are provably universal function approximators, so the class of problems they can solve is theoretically unlimited.
A model operating purely by simple word association isn't remotely smart enough to write full-blown fake news articles or produce the hilarious yet profound malfunction shown in the original post. In fact, it would fail at some pretty simple tasks, like understanding what "not" means.
GPT outperforms other AIs on logical reasoning, common sense, and IQ tests. It passes the trophy-and-suitcase test (a Winograd schema), which was claimed in the 2010s to be a good litmus test for true intelligence in AI. Whether it's "close to AGI" is up for debate, but it is objectively the closest thing we have to AGI today.
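The per-turn "probability map" point can be sketched with a toy four-word vocabulary and made-up logits (in a real model the logits come out of a forward pass through billions of parameters, not a lookup table — that computation is the part a Markov chain lacks):

```python
import math

def softmax(logits):
    """Convert raw scores into a probability distribution over the vocabulary."""
    m = max(logits)                              # subtract max for stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical next-token logits for one turn; the network recomputes
# these from the full context on every single token.
vocab = ["the", "cat", "not", "sat"]
logits = [2.0, 1.0, 0.5, 3.0]

probs = softmax(logits)
next_token = vocab[probs.index(max(probs))]      # greedy decoding → "sat"
```

The probability map (`probs`) exists only for this one step; a Markov chain's table is fixed in advance, whereas here a new distribution is computed from the entire preceding context every turn.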
monsieurpooh t1_j8e68t1 wrote
Reply to comment by duffmanhb in Bing Chat blew ChatGPT out of the water on my bespoke "theory of mind" puzzle by Fit-Meet1359
I think the simplest explanation is just caching, not a formula
monsieurpooh t1_j8e615z wrote
Reply to Bing Chat blew ChatGPT out of the water on my bespoke "theory of mind" puzzle by Fit-Meet1359
Both the formulation and the response for this test are amazing. I'm going to use it now to test other language models.
monsieurpooh t1_j71aba4 wrote
Reply to comment by crua9 in ChatGPT Passes US Medical Licensing Exams Without Cramming by RareGur3157
That is true for a lot of jobs. Go to a software engineering interview at a typical big company. Compare the skills you need for the job vs the ones you're being tested for. Very little overlap.
monsieurpooh t1_j71a647 wrote
Reply to comment by em_goldman in ChatGPT Passes US Medical Licensing Exams Without Cramming by RareGur3157
I don't think the AI needs or even recognizes the distinction between the emotional and logical sides; it's just optimized for a task, and there's no theoretical limitation preventing it from being optimized for de-escalation, given enough time, training data, and perhaps a robot body if necessary.
monsieurpooh t1_j6lkmwl wrote
Reply to comment by MrCensoredFace in What jobs will be one of the last remaining ones? by MrCensoredFace
Sorry, I did not read the body text. Despite being self-aggrandizing, this post might be more helpful to you https://www.reddit.com/r/singularity/comments/10p5xnq/comment/j6lds65/?utm_source=reddit&utm_medium=web2x&context=3
monsieurpooh t1_j6lds65 wrote
Reply to comment by natepriv22 in What jobs will be one of the last remaining ones? by MrCensoredFace
It mainly applies to specific situations, especially where one might use it as an excuse not to do something you want. In 2018 I was onboarded as the musical composer for a short film. The ETA for this film was 2 whole years. At that time we already had VQGAN and GPT-2 (maybe GPT-3, I don't remember) and various AI composers like Jukebox. I was thinking, wow, by the time we're done we'll have AI-generated TV shows and music already. But we still don't have human-level AI movies/music today, and we finished the short film; it's called Let's Eat on YouTube.
In 2021 I was playing with GPT-3 Da Vinci on OpenAI and realized it was possible to make a game like AI Dungeon except have it actually be a game. I was like wow by the time I'm done making this game we'll have AI-generated 3D games. But we don't yet really, and AI Roguelite is now on Steam.
There are now tons of people lamenting their choice of a CS major or whatever, just because some AI can sort of write code. They wish they'd become plumbers or whatever. But we don't actually know yet how much the demand for programmers will shrink, if at all.
monsieurpooh t1_j6lcom8 wrote
Reply to comment by Leading-Leading6718 in What jobs will be one of the last remaining ones? by MrCensoredFace
I keep seeing this, but there's nothing special about plumbing compared to any other manual job requiring AGI and a humanoid robot. These include a huge umbrella of jobs e.g. construction, police, etc
monsieurpooh t1_j6lce4v wrote
Reply to comment by lovesdogsguy in What jobs will be one of the last remaining ones? by MrCensoredFace
I'm inclined to agree somewhat; however, we've always been saying that. For 10, 20, 30 years we've been saying "now we're really at the point where it's gonna be vertical." By the way, a fun fact about exponential curves: there is no such thing as the "knee of the curve," because every point on the curve looks like the knee.
monsieurpooh t1_j6lc1m1 wrote
I keep saying it and I'll say it again... Prostitution. The last jobs to go are those which require either a fully humanoid robot or fully immersive direct-to-brain VR.
monsieurpooh t1_j6c2f75 wrote
IMO 2015 is when the big shift happened, which is after 2013.
I argued with my machine-learning friend about neural networks. She claimed that neural networks were "for losers" and weren't getting anywhere because they required too much data. This was right after they had passed a critical test which, in the past, was postulated as a "test for AI consciousness": captioning an image. Basically, constant goal-post moving.
It was also on the heels of AlphaGo's victory, which most CS experts at the time deemed impossible or improbable in the near future.
tl;dr 2015 was the year AI proved all the naysayers wrong IMO. And it came after 2013.
monsieurpooh t1_j5968ml wrote
Reply to "I soup ate a word, dog," by Big_Koala_5718
This comes close to embodying the true sense of my actual nightmares (I consider the popular conception of a "nightmare" to just be a fun, thrilling dream). For me, zombies and monsters aren't scary. You can try to find a shotgun, or at least hide. But something like "I soup ate a word dog," which makes no sense... YOU CANNOT RUN FROM IT. It seeps into your thoughts effortlessly and doesn't fit any mold of rationalism that you can make sense of. It is true terror manifested.
monsieurpooh t1_jeffyd0 wrote
Reply to comment by Alchemystic1123 in When will AI actually start taking jobs? by Weeb_Geek_7779
Oh, I agree with that; I just think it'd be odd for a company to have a job position that's purely making PowerPoint slides.