evanthebouncy OP t1_je0d4mj wrote
Reply to comment by eamonious in [P] two copies of gpt-3.5 (one playing as the oracle, and another as the guesser) performs poorly on the game of 20 Questions (68/1823). by evanthebouncy
You might be better off asking it binary questions such as which word is more common and which is more rare.
Then attempt to sort it.
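The idea above can be sketched in code: treat the model's binary answer as a comparator and hand it to a standard sort. This is a minimal sketch under assumptions; `ask_model` is a hypothetical placeholder for the actual API call (here faked with word length standing in for rarity).

```python
from functools import cmp_to_key

def ask_model(word_a, word_b):
    """Hypothetical stand-in for a binary query to the model, e.g.
    'Which word is more common: {word_a} or {word_b}? Answer A or B.'
    Faked here with word length as a proxy for rarity."""
    return "A" if len(word_a) <= len(word_b) else "B"

def compare(word_a, word_b):
    # Map the model's binary answer onto a comparator:
    # more common words sort first.
    return -1 if ask_model(word_a, word_b) == "A" else 1

def sort_by_frequency(words):
    return sorted(words, key=cmp_to_key(compare))

print(sort_by_frequency(["serendipity", "cat", "house"]))
```

Pairwise questions like this tend to be easier for the model to answer consistently than asking for absolute frequency estimates.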
evanthebouncy OP t1_jdxx14h wrote
Reply to comment by [deleted] in [P] two copies of gpt-3.5 (one playing as the oracle, and another as the guesser) performs poorly on the game of 20 Questions (68/1823). by evanthebouncy
try it and let me know
evanthebouncy t1_ja8qwgw wrote
Hi, I have a PhD. Let me try an answer.
The tldr is that undergraduate focuses on learning, and graduate/PhD focuses on discovering.
Refer to this diagram that others have linked: https://www.reddit.com/r/PhD/comments/u65rnp/a_phd_explained_in_a_few_diagrams/?utm_source=share&utm_medium=android_app&utm_name=androidcss&utm_term=1&utm_content=share_button
Long version below:
In an undergraduate degree, your primary job is learning. For the first 2 years, think of it as more difficult high school -- a general, broad education. In the last 2 years of undergraduate, you pick a major and focus heavily on its courses -- imagine taking 4 hours of biology classes every day. As in your previous education, your performance is evaluated on whether you score well on tests -- things the teacher knows the answer to ahead of time. You're acquiring the accumulated knowledge of humanity.
A graduate degree has two levels. You either do a masters (2 years typically), or you continue after the masters to do a PhD (3+ more years on top). I'll explain the PhD first.
In a PhD program, your primary job is discovering. Unlike all previous education, in a PhD program nobody knows the answer ahead of time. Your "exam" is more like a class project on steroids: years of research testing a hypothesis (that nobody has thought of before) by running experiments, then writing up your findings in a scientific paper. Your paper is evaluated by a group of scientists expert in the field (this is called peer review). Once you've done enough original research, preferably with published papers, you write up a thesis summarizing your original work and graduate.
You know the term "scientist", right? During and after a PhD is when you get to call yourself a scientist, as you'll have experienced what it's like to push the boundaries of human knowledge.
evanthebouncy t1_j7wjt36 wrote
Reply to comment by CeFurkan in [D] Are there any AI model that I can use to improve very bad quality sound recording? Removing noise and improving overall quality by CeFurkan
Don't put your email in public like this. DM the guy. Remove the email while you still can.
EQ and compression are good techniques to try; Reaper is free. I'm sure your friend can show you.
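To give a sense of what an EQ move does: below is a minimal sketch (my own illustration, not from the thread) of a first-order high-pass filter in pure Python, the kind of thing used to cut low-frequency rumble from a bad recording.

```python
import math

def highpass(samples, sample_rate, cutoff_hz=100.0):
    """A first-order high-pass filter: one of the simplest EQ moves,
    attenuating content below cutoff_hz (e.g. mains hum, rumble)."""
    rc = 1.0 / (2 * math.pi * cutoff_hz)
    dt = 1.0 / sample_rate
    alpha = rc / (rc + dt)
    out = [samples[0]]
    for i in range(1, len(samples)):
        # y[i] = alpha * (y[i-1] + x[i] - x[i-1])
        out.append(alpha * (out[-1] + samples[i] - samples[i - 1]))
    return out

# 1 second of a 50 Hz hum: mostly removed by a 100 Hz high-pass.
sr = 16000
hum = [math.sin(2 * math.pi * 50 * i / sr) for i in range(sr)]
filtered = highpass(hum, sr)
```

A real cleanup chain (in Reaper or elsewhere) would stack several such filters plus compression, but the principle is the same.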
evanthebouncy t1_j7mntx2 wrote
Reply to Wouldn’t it be a good idea to bring a more energy efficient language into the ML world to reduce the insane costs a bit?[D] by thedarklord176
Good points but python is NOT the problem.
evanthebouncy t1_j6wpf34 wrote
I made a bet in 2019 to _not_ learn any more about how to fiddle with NN architectures. It paid off. Now I just send data to a Hugging Face API and it figures out the rest.
What will change? What are my thoughts?
All well-identified problems become rat races. If there's a metric you can put on a problem, engineers will optimize it away. The comfort of knowing that what you're doing has a well-defined metric is paid for in the anxiety of everyone racing to optimize that same metric.
What do we do with this?
Work on problems that don't have a well-defined metric. Work with people. Work with the real world. Work with things that defy quantification, that are difficult to reduce to a mere number everyone agrees on. That way you have some longevity in the field.
evanthebouncy t1_j45eptc wrote
Reply to [D] Has ML become synonymous with AI? by Valachio
No.
AI is about problems. ML is a solution to these problems.
evanthebouncy t1_j2lsn7h wrote
Wait a year until we have something like ChatGPT but with vision integrated. Currently it's typically OCR followed by some NLP, but in a year it could be as simple as giving a few examples of what you want done (few-shot prompting) to a single model hosted online somewhere.
I'd wait a bit more.
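For concreteness, few-shot prompting just means showing the model a couple of input/output examples and letting it infer the pattern. Here's a minimal sketch; the task and field names are made up for illustration.

```python
# A few-shot prompt: a couple of worked examples, then the new input.
examples = [
    ("Invoice #1042, total due $310.50", {"invoice": "1042", "total": "310.50"}),
    ("Invoice #2213, total due $88.00", {"invoice": "2213", "total": "88.00"}),
]

def build_prompt(examples, new_input):
    parts = ["Extract the invoice number and total from the text."]
    for text, fields in examples:
        parts.append(f"Text: {text}\nFields: {fields}")
    # Leave the final 'Fields:' blank for the model to complete.
    parts.append(f"Text: {new_input}\nFields:")
    return "\n\n".join(parts)

print(build_prompt(examples, "Invoice #7, total due $12.99"))
```

With a multimodal model, the "Text:" inputs would simply be images of the documents instead of OCR'd strings.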
evanthebouncy t1_j2h4ir2 wrote
Reply to [D] Is there any research into using neural networks to discover classical algorithms? by currentscurrents
DeepMind has a paper on sorting.
They asked a NN to sort using only pointer movements and swap operations. They ended up with a cool sort that generalizes to much longer arrays.
Forgot the name of the paper though.
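To illustrate the setup (this is my own sketch of the general idea, not the paper's method): restrict the "agent" to two primitives, moving a pointer and swapping adjacent elements, and the sort becomes a sequence of discrete actions that a network could be trained to emit.

```python
def sort_with_swaps(arr):
    """Sort using only 'move pointer' and 'swap' primitives, recording
    the action trace (the kind of output a learned policy would emit)."""
    arr = list(arr)
    trace = []
    n = len(arr)
    for end in range(n - 1, 0, -1):
        for i in range(end):               # pointer sweeps left to right
            trace.append(("move", i))
            if arr[i] > arr[i + 1]:
                arr[i], arr[i + 1] = arr[i + 1], arr[i]
                trace.append(("swap", i, i + 1))
    return arr, trace

sorted_arr, trace = sort_with_swaps([3, 1, 2])
```

The interesting research question is whether a network that learns such traces on short arrays generalizes the procedure to much longer ones.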
evanthebouncy OP t1_j23cx41 wrote
Reply to comment by Shir_man in [Project] I ask ChatGPT to draw and explain 100+ programmatic SVG images by evanthebouncy
Haha
What's your twitter handle? I'll follow it
evanthebouncy OP t1_j22d2ay wrote
Reply to comment by Shir_man in [Project] I ask ChatGPT to draw and explain 100+ programmatic SVG images by evanthebouncy
Woah the Mona Lisa man himself!
Yeah ofc I'm aware of your work. I think everyone generating SVG has used variants of your prompt. Nice to meet you
evanthebouncy OP t1_j22bzqb wrote
Reply to comment by suspicious_Jackfruit in [Project] I ask ChatGPT to draw and explain 100+ programmatic SVG images by evanthebouncy
Ya, I find that if you ask it to draw X where X isn't something commonly drawn (e.g. ask it to draw "boogieboogie"), it'll default to drawing a person
evanthebouncy t1_j1zo0rp wrote
Reply to [D] DeepMind has at least half a dozen prototypes for abstract/symbolic reasoning. What are their approaches? by valdanylchuk
hey, I work on program synthesis, which is a form of neuro-symbolic reasoning. here's my take.
the word "neuro-symbolic" is thrown around a lot, so we need to first clarify which kinds of work we're talking about. broadly speaking there are 2 kinds.
- neuro-symbolic systems where the symbolic system is _pre-established_, and the neural network is tasked to construct symbols that can be interpreted in this preexisting system. program synthesis falls under this category: when you ask chatgpt/copilot to generate code, they'll generate python code, which is a) symbolic and b) readily interpretable in python
- neuro-symbolic systems where the neural network is tasked to _invent the system_. take for instance the ARC task ( https://github.com/fchollet/ARC ), when humans do these tasks (it appears to be the case that) we first invent a set of symbolic rules appropriate for the task at hand, then apply these rules
I'm betting Demis is interested in (2); the ability to invent and reason about symbols is crucial to intelligence. that's not to understate the value of (1): reasoning in an existing symbolic system is immediately valuable (e.g. copilot).
some self-plug on my recent paper studying how people invent and communicate symbolic rules using natural language https://arxiv.org/abs/2106.07824
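To make category (1) concrete, here's the simplest possible form of program synthesis: enumerate compositions of primitives from a tiny pre-established DSL until one fits the given input/output examples. The DSL and examples are made up for illustration.

```python
import itertools

# A tiny pre-established DSL: each primitive is a named int -> int function.
DSL = {
    "inc": lambda x: x + 1,
    "double": lambda x: x * 2,
    "square": lambda x: x * x,
}

def synthesize(examples, max_depth=3):
    """Enumerate compositions of DSL primitives (shortest first) until
    one is consistent with every input/output example."""
    for depth in range(1, max_depth + 1):
        for names in itertools.product(DSL, repeat=depth):
            def run(x, names=names):
                for name in names:
                    x = DSL[name](x)
                return x
            if all(run(i) == o for i, o in examples):
                return names
    return None

# Find a program mapping 2 -> 9 and 3 -> 16, i.e. square(inc(x))
print(synthesize([(2, 9), (3, 16)]))
```

Real systems replace the brute-force loop with a neural network that proposes likely programs, but the contract is the same: the output must be a well-formed program in the preexisting symbolic system.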
evanthebouncy t1_j1y4msw wrote
Reply to [P] Can you distinguish AI-generated content from real art or literature? I made a little test! by Dicitur
stop hiding their hands !
evanthebouncy t1_j1821q9 wrote
Reply to comment by farmingvillein in [D] Hype around LLMs by Ayicikio
By a bigram.
evanthebouncy t1_j0d1op6 wrote
I work a lot with human AI communication, here's my take.
The issue is our judgement (think value function) of what's good. It's less about what the AI can actually do, and more about how its output is judged by people.
Random blotches of color arranged in an interesting way on a canvas is modern art. It's non-intrusive and fun to look at. A painting with less-than-perfect details, such as goblin hands with 6 fingers (as AI-generated art often has), isn't a big deal as long as the overall painting looks cool.
A musical phrase with one wrong note, one missed beat, one sound out of the groove would sound like absolute garbage. We expect music to uphold this high quality all the way through, all 5 minutes; no 'mistakes' are allowed. So any detail the AI gets 'wrong' is particularly jarring. You can mitigate some of the low-level errors by forcing the AI to produce music in a constrained representation such as MIDI, but the overall issue of cohesion remains.
Overall, generative AI lacks control and finesse over details, and lacks logical cohesion. These are bigger problems for music than for paintings.
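To show what a constrained representation buys you, here's a minimal sketch (my own illustration) of the kind of post-processing it enables: snap generated pitches to a key and onsets to a rhythmic grid, so certain low-level "wrong note" errors become impossible by construction.

```python
C_MAJOR = {0, 2, 4, 5, 7, 9, 11}  # pitch classes allowed in the key

def snap_pitch(midi_note):
    """Snap a MIDI note number to the nearest pitch in C major."""
    return min(
        (midi_note + d for d in (0, -1, 1, -2, 2)),
        key=lambda n: (n % 12 not in C_MAJOR, abs(n - midi_note)),
    )

def snap_onset(time_sec, grid=0.25):
    """Quantize a note onset to a sixteenth-note grid (at 60 bpm)."""
    return round(time_sec / grid) * grid

# A sloppy generated phrase: an out-of-key note and off-grid timing
notes = [(60, 0.0), (61, 0.27), (66, 0.52)]
clean = [(snap_pitch(p), snap_onset(t)) for p, t in notes]
```

This fixes the note-level errors, but nothing in it addresses whether the phrase makes musical sense as a whole, which is exactly the cohesion problem above.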
evanthebouncy t1_ixggkxo wrote
Don't do things just to prove to yourself you can do them. Have a goal and meet it with minimal effort.
Self-imposed difficulty, 9 times out of 10, is because you haven't got a clear goal.
evanthebouncy t1_ixfj2xt wrote
Reply to comment by [deleted] in [R] Human-level play in the game of Diplomacy by combining language models with strategic reasoning — Meta AI by hughbzhang
wrong sub-reddit.
we here read more than 2 words.
evanthebouncy t1_ixfiy4t wrote
Reply to comment by Amortize_Me_Daddy in [R] Human-level play in the game of Diplomacy by combining language models with strategic reasoning — Meta AI by hughbzhang
iirc FAIR has work on playing Hanabi, which requires some level of (non-verbal) communication, so a lot of those insights can be leveraged here as well.
evanthebouncy t1_iwki8fr wrote
Reply to comment by PandaReturns in Voter turnout in Brazil, by age group - 2022 Presidential elections, 2nd round (30 October 2022 ) [OC] by PandaReturns
what do you make of this mandatory voting? on one hand it sounds nice, but on the other hand it perhaps forces an opinion from those who don't have one
evanthebouncy t1_iw66ov5 wrote
- I took a bet that all the training and architecture work would be subsumed into some centralized company, where you only really have to worry about the dataset.
So in a way it paid off. Now I do everything with Hugging Face transformers and only worry about the dataset, haha.
evanthebouncy t1_ivhesll wrote
Reply to [D] At what tasks are models better than humans given the same amount of data? by billjames1685
374637+384638/27462*737473-384783+48473/38374/38474
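The joke answer above is an arithmetic expression: the one setting where a machine, given the same "data", wins instantly. Evaluating it under standard operator precedence (division and multiplication bind tighter than addition and subtraction):

```python
# Standard precedence: / and * evaluate left-to-right before + and -.
result = 374637 + 384638 / 27462 * 737473 - 384783 + 48473 / 38374 / 38474
print(f"{result:,.2f}")
```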
evanthebouncy OP t1_je0dblt wrote
Reply to comment by pinkballodestruction in [P] two copies of gpt-3.5 (one playing as the oracle, and another as the guesser) performs poorly on the game of 20 Questions (68/1823). by evanthebouncy
Yes. This is underway