Thorusss
Thorusss t1_iylenrb wrote
Reply to comment by C0hentheBarbarian in OpenAI ChatGPT [R] by Sea-Photo5230
>an overqualified prompt engineer.
There are already GPT-3 implementations that generate better prompts for text2image AIs...
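A minimal sketch of what such a prompt-expander could look like, assuming the legacy openai-python Completion endpoint; the model name, instruction, and helper name are just placeholders:

```python
# Hypothetical sketch: use GPT-3 to expand a terse idea into a richer text2image prompt.
# Assumes the legacy openai-python (<1.0) Completion API and an API key in the environment.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

def expand_prompt(idea: str) -> str:
    """Ask GPT-3 to rewrite a short idea as a detailed image-generation prompt."""
    instruction = (
        "Rewrite the following idea as a detailed, vivid prompt for a "
        f"text-to-image model, mentioning style, lighting and composition:\n{idea}\nPrompt:"
    )
    response = openai.Completion.create(
        model="text-davinci-003",
        prompt=instruction,
        max_tokens=120,
        temperature=0.7,
    )
    return response.choices[0].text.strip()

print(expand_prompt("a fox in a forest"))
```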
Thorusss t1_iylei4q wrote
Reply to comment by purplebrown_updown in OpenAI ChatGPT [R] by Sea-Photo5230
>it’s overly confident when it’s obviously wrong.
So, all too human?
Thorusss t1_ivnwgsq wrote
Reply to [D] What does it mean for an AI to understand? (Chinese Room Argument) - MLST Video by timscarfe
My take:
The person inside indeed does not understand Chinese, but the whole system of the room, including the human and all the instructions, does.
The Chinese Room is a boring, flawed argument that is only considered relevant by people who get tricked into confusing a part of the system with the whole.
Thorusss t1_iusxlia wrote
I mean, if you want it and look for it, you can buy a ridiculous number of objects with an additional microchip in them. Heated insoles: chip. Camera in glasses: chip. T-shirt that measures heartbeat: chip. Ring that measures temperature and movement: chip. Implant for paying: chip. Light-up shoes: chip. Jacket with speakers: chip.
And many packages have a microchip in the small security sticker that most people never notice.
Thorusss t1_iuj0zgg wrote
Reply to Giant farming robot uses 3D vision and robotic arms to harvest ripe strawberries by Anen-o-me
And since the camera arm is already there, with a small upgrade it could also physically remove unwanted insects.
Thorusss t1_iu60ls1 wrote
Reply to If you were performing a Turing test to a super advanced AI, which kind of conversations or questions would you try to know if you are chatting with a human or an AI? by Roubbes
Nice try, GPT-4 web scraper bot!
Thorusss t1_iu38uuv wrote
Reply to comment by pickettfury in [OC] Racial breakdown of students at Harvard, Yale, Princeton, MIT, Stanford compared to students scoring 1400+ on the SAT by tabthough
Imagine the UN decided that all Americans now have to live in severe poverty (global poor, famines, etc.) for two generations, because their grandparents were born in a rich country.
Does that make sense too?
Thorusss t1_iu38nmv wrote
Reply to comment by joeschmoe86 in [OC] Racial breakdown of students at Harvard, Yale, Princeton, MIT, Stanford compared to students scoring 1400+ on the SAT by tabthough
So because black people in the past were treated unfairly, the solution is to treat other races unfairly now? Especially the young people of today, who were in no way responsible for the past?
Thorusss t1_itl79kk wrote
Reply to comment by NefariousNaz in how old are you by TheHamsterSandwich
Because quite a few people under 20 had no other choice, since the poll excluded them for no good reason.
Thorusss t1_itl75jy wrote
Reply to comment by [deleted] in how old are you by TheHamsterSandwich
deleted
Thorusss t1_isotb60 wrote
Reply to comment by Clawz114 in What is the potential for AI vs AI conflict in the future? by iSpatha
Well put
Thorusss t1_is9esl2 wrote
The singularity and a fast takeoff are closely related, and make a winner-takes-all scenario most likely.
One united universe at the highest level, with a diverse, lush structure below, would be a great outcome.
Thorusss t1_irwgc8y wrote
Reply to comment by Striking_Exchange659 in Any examples of future prediction models? by Mr_Hu-Man
Literally all the examples I gave predict the future.
Thorusss t1_irv7qc8 wrote
Reply to Any examples of future prediction models? by Mr_Hu-Man
Of course. Kind of a naive question; it is one of their main uses: what will the user click next, what will the weather do, how will nuclear fusion behave, how will the stock market move, will the car in front of you brake, etc.
Thorusss t1_iruyfxc wrote
The "average" job has been disappearing for more than a century, changing what the "average" job is. It is happening right now, and will just speed up.
Thorusss t1_irm9q77 wrote
Reply to Germans ZEDU Car Is Most Environmentally Friendly Vehicle In The World by Educational_Sector98
I doubt this vehicle is more environmentally friendly than a bike, given the production effort alone.
Thorusss t1_irb9hoa wrote
Reply to comment by Ulfgardleo in [R] Discovering Faster Matrix Multiplication Algorithms With Reinforcement Learning by EducationalCicada
thanks
Thorusss t1_ir9ps2x wrote
Reply to comment by M4mb0 in [R] Discovering Faster Matrix Multiplication Algorithms With Reinforcement Learning by EducationalCicada
True. But numerical stability is much more important in long-running simulations like weather forecasting than in deep neural network training.
There is a reason they are often benchmarked with single or even half precision.
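For illustration, a minimal mixed-precision training sketch in PyTorch (the model, data, and hyperparameters are placeholders, not taken from any particular benchmark):

```python
# Sketch: mixed-precision training loop with torch.cuda.amp.
# Illustrates that deep nets are routinely trained largely in float16 despite the reduced precision.
import torch

model = torch.nn.Linear(512, 10).cuda()          # placeholder model
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)
loss_fn = torch.nn.CrossEntropyLoss()
scaler = torch.cuda.amp.GradScaler()             # rescales gradients to avoid fp16 underflow

for _ in range(100):                             # dummy loop with random data
    x = torch.randn(32, 512, device="cuda")
    y = torch.randint(0, 10, (32,), device="cuda")
    optimizer.zero_grad()
    with torch.cuda.amp.autocast():              # forward pass runs largely in float16
        loss = loss_fn(model(x), y)
    scaler.scale(loss).backward()                # backward on the scaled loss
    scaler.step(optimizer)
    scaler.update()
```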
Thorusss t1_ir9pbcd wrote
Reply to comment by Ulfgardleo in [R] Discovering Faster Matrix Multiplication Algorithms With Reinforcement Learning by EducationalCicada
So why is matrix multiplication faster with it?
>Leveraging this diversity, we adapted AlphaTensor to specifically find algorithms that are fast on a given hardware, such as Nvidia V100 GPU, and Google TPU v2. These algorithms multiply large matrices 10-20% faster than the commonly used algorithms on the same hardware, which showcases AlphaTensor’s flexibility in optimising arbitrary objectives.
Are you saying it would be slower if it had to multiply multiple matrices of the same dimensions one after the other?
Thorusss t1_ir9oyt3 wrote
Reply to comment by Lairv in [R] Discovering Faster Matrix Multiplication Algorithms With Reinforcement Learning by EducationalCicada
So? They had to train it once; the more efficient algorithm is now in humanity's toolbox for eternity. A 10-20% speed increase can probably pay that back this year with the compute DeepMind uses alone.
Thorusss t1_ir9or2m wrote
Reply to comment by Ulfgardleo in [R] Discovering Faster Matrix Multiplication Algorithms With Reinforcement Learning by EducationalCicada
The algorithm is plain faster on the most advanced hardware. For such an already heavily optimized area, that is very impressive.
>Leveraging this diversity, we adapted AlphaTensor to specifically find algorithms that are fast on a given hardware, such as Nvidia V100 GPU, and Google TPU v2. These algorithms multiply large matrices 10-20% faster than the commonly used algorithms on the same hardware, which showcases AlphaTensor’s flexibility in optimising arbitrary objectives.
https://www.deepmind.com/blog/discovering-novel-algorithms-with-alphatensor
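For intuition on where such speedups come from, here is a minimal sketch of the classic Strassen scheme (not AlphaTensor's discovered algorithm, but the same kind of trade: fewer block multiplications in exchange for extra additions, which AlphaTensor searches over and tunes per hardware):

```python
# Sketch: Strassen's algorithm, the classic "fewer multiplications" scheme.
# It uses 7 block multiplications instead of 8 per recursion level.
# Assumes square matrices whose size is a power of two; numpy handles the base case.
import numpy as np

def strassen(A: np.ndarray, B: np.ndarray, cutoff: int = 64) -> np.ndarray:
    n = A.shape[0]
    if n <= cutoff:                      # below the cutoff, plain multiplication wins
        return A @ B
    h = n // 2
    a11, a12, a21, a22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    b11, b12, b21, b22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
    m1 = strassen(a11 + a22, b11 + b22, cutoff)
    m2 = strassen(a21 + a22, b11, cutoff)
    m3 = strassen(a11, b12 - b22, cutoff)
    m4 = strassen(a22, b21 - b11, cutoff)
    m5 = strassen(a11 + a12, b22, cutoff)
    m6 = strassen(a21 - a11, b11 + b12, cutoff)
    m7 = strassen(a12 - a22, b21 + b22, cutoff)
    c11 = m1 + m4 - m5 + m7
    c12 = m3 + m5
    c21 = m2 + m4
    c22 = m1 - m2 + m3 + m6
    return np.block([[c11, c12], [c21, c22]])

A = np.random.rand(256, 256)
B = np.random.rand(256, 256)
assert np.allclose(strassen(A, B), A @ B)   # matches the standard product
```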
Thorusss t1_ir9o8lk wrote
Reply to comment by _matterny_ in [R] Discovering Faster Matrix Multiplication Algorithms With Reinforcement Learning by EducationalCicada
Especially since the algorithms are specifically faster on the most modern hardware we have right now.
Thorusss t1_iypsfxq wrote
Reply to comment by Sieventer in Clip Studio gives up under pressure by Sieventer
This pressure could even come from people who use and sell AI art, in order to have less competition.