sharkinwolvesclothin t1_j6ooyby wrote
Reply to comment by DeathGPT in ChatGPT Content Detector Launched By Stanford University by vadhavaniyafaijan
Your main issue is absolutely trivial - just make a rule that anything detected by the chosen algorithm results in redoing the assignment in class without internet, or even just the following assignment if you accept it as a helper but want to make sure they can do it themselves.
sharkinwolvesclothin t1_j6oeiwf wrote
Reply to comment by mindofstephen in ChatGPT Content Detector Launched By Stanford University by vadhavaniyafaijan
Universities can have other punishments for different forms of academic dishonesty besides kicking the student out. In fact, I've never heard of one that doesn't. Also, accuracy is not necessarily the same for positive and negative cases.
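To illustrate that last point with made-up numbers: a single "accuracy" figure can hide very different error rates on AI-written (positive) and human-written (negative) text. A minimal sketch, with all counts hypothetical:

```python
# Toy confusion-matrix counts (entirely made up for illustration):
tp, fn = 90, 10   # AI-written essays: correctly flagged / missed
tn, fp = 60, 40   # human-written essays: correctly cleared / falsely flagged

accuracy = (tp + tn) / (tp + fn + tn + fp)   # overall: 0.75
true_positive_rate = tp / (tp + fn)          # on AI text: 0.90
true_negative_rate = tn / (tn + fp)          # on human text: 0.60

# A "75% accurate" detector here still falsely accuses 40% of honest students.
print(accuracy, true_positive_rate, true_negative_rate)
```

So the rate that matters for students (false accusations) can be much worse than the headline accuracy suggests.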
sharkinwolvesclothin t1_j5upsu2 wrote
Reply to The ChatGPT Effect: How advanced AI changes us. We are forced to search for assumptions (instead of raw information) and ask questions more than find answers for innovation, creativity, and progress because ChatGPT readily offers answers. by Iaskquesti0ns
I need to fix the brakes on my bike. What assumptions am I searching for?
> The greatest cognitive skill in a post-ChatGPT world is going to be: Asking the right questions. And then, Knowing where to ask them.
Yeah, it's smart to know that ChatGPT and this crop of generative models are not good for searching for information - they might make shit up about the brakes that could kill me in traffic. I wouldn't call knowing to Google it instead the greatest cognitive skill, though.
sharkinwolvesclothin t1_iss4h4v wrote
Reply to comment by Desperate_Donut8582 in Talked to people minimizing/negating potential AI impact in their field? eg: artists, coders... by kmtrp
Agree on both points, I wasn't quite clear. I was trying to argue that if we do get singularity-type AGI, a machine capable of replicating human thought and communication, we will build an endless amount of them, and everything about society will change. You are right, it's not necessarily dystopic or utopic, but it will be different enough that trying to choose a future-proof job is close to useless.
And if we don't get that, and we "just" get amazing tools, I would assume jobs will adapt. Actually, if some fields get more AI tools than others, those fields might grow in the number of people working in them, just in new AI-adjacent jobs we don't recognize yet.
sharkinwolvesclothin t1_isptt3c wrote
Reply to Talked to people minimizing/negating potential AI impact in their field? eg: artists, coders... by kmtrp
You're not crazy, but you're overestimating and underestimating the speed and type of change at the same time.
Sure, if a true singularity / general intelligence machine appears that can do anything, we'll pretty much have manna from heaven; society will be rethought, who knows how. You can argue for both dystopian and utopian scenarios. But thinking about which jobs are safe is kinda irrelevant then - it'll change everything.
If it's just incremental advances, it'll be more like how job markets have dealt with technological advances before. Programmers will be checking regulatory compliance, fine-tuning something, or just dealing with legacy stuff (maybe the chatbot can eventually refactor the Fortran code base from the 70s). I'm not sure artists are exactly thriving money-wise now either, but there would probably be demand for social/performative in-person work; not every medium is moving at the same speed (sure, you can make a great DALL-E image that looks like a photo of an oil painting, but an actual oil painting won't follow quite so immediately, and some people like physical art); and artists are well positioned to become prompt artists for digital art. Or a million other options - it's not like I know exactly what will happen in different industries, just like people did not know how things would evolve in previous technological revolutions.
sharkinwolvesclothin t1_jdh1nec wrote
Reply to comment by matt2001 in Stanford Researchers Take Down Alpaca AI Over Cost and Hallucinations by matt2001
>I hope this can be addressed, as it will be able to run on smaller computers.
These issues are not specific to this chatbot/application. It's just that Stanford people have different incentives from for-profit companies. But yeah, hopefully they can be addressed, as most use cases people have would require the underlying generative models not to have these behaviors.