icedrift
icedrift t1_je8zqd2 wrote
Reply to [P] Imaginary programming: implementation-free TypeScript functions for GPT-powered web development by xander76
This is really cool! Maybe I'm lacking creativity, but why bother generating imaginary functions and introducing the risk that they aren't deterministic when you could just hit OpenAI's API for the data? For example, in your docs you present a feature for recommending column names for a given table. Why is the whole function generated? Wouldn't it be more reliable to write the function yourself and use OpenAI's API only to get the recommended column names?
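For illustration, here's a rough sketch of the "write the function yourself" approach for that column-name feature. This is my own hedged sketch, not code from the project: the prompt wording and the `buildPrompt`/`parseColumnNames` helpers are assumptions, though the `/v1/chat/completions` endpoint and `choices[0].message.content` response shape are OpenAI's real API.

```typescript
// Hedged sketch: call OpenAI's chat completions endpoint directly and
// parse the reply yourself, instead of generating the whole function.
// Prompt wording and helper names here are my own assumptions.

function buildPrompt(tableDescription: string): string {
  return (
    `Suggest column names for this table: ${tableDescription}. ` +
    `Reply with a comma-separated list only.`
  );
}

// Pure helper: turn the model's comma-separated reply into clean names.
function parseColumnNames(reply: string): string[] {
  return reply
    .split(",")
    .map((name) => name.trim())
    .filter((name) => name.length > 0);
}

async function recommendColumnNames(
  tableDescription: string,
  apiKey: string
): Promise<string[]> {
  const response = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify({
      model: "gpt-3.5-turbo",
      messages: [{ role: "user", content: buildPrompt(tableDescription) }],
    }),
  });
  const data = await response.json();
  return parseColumnNames(data.choices[0].message.content);
}
```

The function body is hand-written and deterministic; only the data (the model's reply) varies.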
icedrift t1_jc37xz7 wrote
Reply to comment by dojoteef in [R] Stanford-Alpaca 7B model (an instruction tuned version of LLaMA) performs as well as text-davinci-003 by dojoteef
Probably
icedrift t1_jc37aym wrote
Reply to [R] Stanford-Alpaca 7B model (an instruction tuned version of LLaMA) performs as well as text-davinci-003 by dojoteef
Is the demo broken for anyone else? I can't get past the "I agree" button.
icedrift t1_j9uwkrx wrote
Reply to comment by CactusOnFire in [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
I agree with all of this but it's already been done. Social media platforms already use engagement driven algorithms that instrumentally arrive at recommending reactive content.
Cambridge Analytica also famously preyed on user demographics to feed carefully tailored propaganda to swing states in the 2016 election.
icedrift t1_j9uuocn wrote
Reply to comment by darthmeck in [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
Appreciate it! Articulation isn't a strong suit of mine, but I guess even a broken clock is right twice a day.
icedrift t1_j9s5640 wrote
Reply to comment by Additional-Escape498 in [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
What freaks me out the most are the social ramifications of AIs that pass as human to the majority of people. We're still figuring out how to interact with social media in a healthy way, and soon we're going to be interacting with entirely artificial content that we'll anthropomorphize as other humans. In the US we're dealing with a crisis of trust and authenticity; I can't imagine generative text models are going to help with that.
icedrift t1_j9equiv wrote
Reply to comment by PredictorX1 in [OC] % of American students taking a foreign language class by state by ASoloTrip90000
Personally my comprehension is still decent and speaking is horrible. If you don't use it you lose it BUT after taking 7 years of Spanish classes I'm confident I could pick up a romance language pretty quickly if I had to.
icedrift t1_j9do3kd wrote
Reply to comment by PredictorX1 in [OC] % of American students taking a foreign language class by state by ASoloTrip90000
I think course requirements are probably a bigger factor. In New York you need 2-3 years of foreign language classes in order to graduate high school, and all of the state colleges require foreign language credits.
icedrift t1_j8mdlqy wrote
Reply to comment by anaIconda69 in An Idea: Your digital diary can be more useful thanks to the power of LLMs by Pro_RazE
It would struggle with questions that span too much text. Say you asked a daily diary "How long was I working at McDonalds?" and you worked at McD's for 2 years; current models don't have a large enough context window to read all those entries.
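One common workaround (my own sketch, not something from the post): retrieve only the relevant entries before prompting, so the model sees a small excerpt instead of the whole diary. The `DiaryEntry` shape and helper names below are hypothetical.

```typescript
// Hypothetical sketch: instead of stuffing two years of diary entries
// into the prompt, pre-filter entries by keyword and send only matches.

interface DiaryEntry {
  date: string; // ISO date, e.g. "2021-03-01"
  text: string;
}

// Keep only entries that mention the topic the question asks about.
function relevantEntries(entries: DiaryEntry[], keyword: string): DiaryEntry[] {
  const needle = keyword.toLowerCase();
  return entries.filter((e) => e.text.toLowerCase().includes(needle));
}

// The span of dates covered by the matching entries; the LLM then only
// has to reason over this much smaller excerpt.
function dateRange(entries: DiaryEntry[]): [string, string] | null {
  if (entries.length === 0) return null;
  const dates = entries.map((e) => e.date).sort();
  return [dates[0], dates[dates.length - 1]];
}
```

A naive keyword filter like this obviously misses paraphrases; real systems use embedding search, but the context-budget idea is the same.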
icedrift t1_j64pc99 wrote
Reply to comment by CandyCoatedHrtShapes in Superhuman Algorithms could “Kill Everyone” in Due Time, Researchers Warn by RareGur3157
Not every existential threat is political propaganda; climate change is a good example.
icedrift t1_j571qce wrote
Reply to comment by unsteadytrauma in [D] Simple Questions Thread by AutoModerator
I'm pretty sure GPT-J 6B requires a minimum of 24 GB of VRAM, so you would need something like a 3090 to run it locally. That said, I think you're better off hosting it on something like Colab or Paperspace.
icedrift t1_j539w6r wrote
Reply to comment by DungeonsAndDradis in I was wrong about metaculus, (and the AGI predicted date has dropped again, now at may 2027) by blueSGL
Whisper too!! Being able to efficiently train on audio gives us so much more data to work with and we're going to need it. GPT models are already running out of training data.
icedrift t1_j538wbw wrote
Reply to comment by AsuhoChinami in I was wrong about metaculus, (and the AGI predicted date has dropped again, now at may 2027) by blueSGL
Yeah, of course, but that's 2050, not 2027 as Metaculus predicts.
icedrift t1_j537xjq wrote
Reply to comment by AsuhoChinami in I was wrong about metaculus, (and the AGI predicted date has dropped again, now at may 2027) by blueSGL
I'm inclined to trust the people actually building AI. 50% of experts agreeing AGI is likely in the next 30 years is still pretty insane. Personally I think a lot of the AI-by-2030 folks are delusional.
icedrift t1_j535agx wrote
Reply to comment by AsuhoChinami in I was wrong about metaculus, (and the AGI predicted date has dropped again, now at may 2027) by blueSGL
He's not wrong... In a 2017 survey distributed among AI veterans, only 50% thought a true AGI would arrive before 2050: https://research.aimultiple.com/artificial-general-intelligence-singularity-timing/
I'd be interested in a more recent poll but this was the most up to date that I could find.
EDIT: Found this from last year https://www.lesswrong.com/posts/H6hMugfY3tDQGfqYL/what-do-ml-researchers-think-about-ai-in-2022
Looks like predictions haven't changed all that much, but there's still a wide range. Nobody really knows; that's for certain.
icedrift t1_j534dwy wrote
Reply to I was wrong about metaculus, (and the AGI predicted date has dropped again, now at may 2027) by blueSGL
Does Metaculus only poll people in the field and verify credentials, or can anybody submit an estimate? If it's the latter, why put any stock in it? AI seems like one of those fields that attracts a lot of fanatics who don't know what they're talking about.
Polls of industry veterans tend to hover around a 40% chance of AGI by 2035.
icedrift t1_j4w3vss wrote
Reply to comment by jumpsteadeh in Apple Delays AR Glasses, Plans Cheaper Mixed-Reality Headset by GadnukBreakerOfWrlds
LMAO. Fair point, but judging by how difficult some of those are compared to the past, I'd imagine Google's object detection is getting pretty good.
icedrift t1_j4v570y wrote
Reply to comment by Actually-Yo-Momma in Apple Delays AR Glasses, Plans Cheaper Mixed-Reality Headset by GadnukBreakerOfWrlds
Real-time object classification isn't a fever dream; neural networks that classify objects can run on very modest hardware today. The tricky part is making the glasses stylish and not having cords connecting them to your phone.
icedrift t1_j3u9rzf wrote
This guy's whole channel is really good: https://www.youtube.com/@EdanMeyer/videos
It's less about the singularity and more about machine learning in general, but unlike a lot of the sensationalized garbage you'll find on YouTube, it's very educational.
If you're looking for something that's easier to digest, Robert Miles is really good at breaking down AI alignment into funny, entertaining videos: https://www.youtube.com/@RobertMilesAI
icedrift t1_j3qyl1w wrote
Reply to comment by Helpful_Opinion2023 in A Singular Trajectory: the Signs of AGI by mjrossman
I gotchu. First thing you need to do is learn Python. You don't need to be a master by any means, but you should understand variables, expressions, functions, classes, packages/dependencies, file systems, and basic algebra. Run through this amazing book and you'll understand plenty to get into the ML side of things.
Once you know a bit of Python, complete the course Practical Machine Learning for Coders. This is an extremely highly regarded modern crash course in machine learning that is bringing a lot of new people into the industry. In the very first lesson you'll build an image classifier that wasn't even possible 5 years ago.
As you go deeper and deeper, math becomes more important, but CS isn't really necessary.
icedrift t1_j3qxhe5 wrote
Reply to comment by Helpful_Opinion2023 in A Singular Trajectory: the Signs of AGI by mjrossman
I take it you haven't watched or read Altered Carbon. It would be like if only billionaires were immortal and the rest of us were slaves, under threat of having to pay off our debts by selling our physical bodies to our debtors.
icedrift t1_j3q1prn wrote
Reply to Poll about your feelings on AI by Ginkotree48
Fix ur prompts
icedrift t1_je9i0wk wrote
Reply to comment by xander76 in [P] Imaginary programming: implementation-free TypeScript functions for GPT-powered web development by xander76
What I mean is, why generate the function when only the data needs to be generated? Let's say I need a function that takes the text content of a post and returns an array of recommended flairs for the user to click. Why do this:
/**
 * This function takes a passage of text and recommends up to 8
 * unique flairs for a user to select. Flairs can be thought of as labels
 * that categorize the type of post.
 *
 * @param textContent - the text content of a user's post
 *
 * @returns an array of flairs represented as strings
 *
 * @imaginary
 */
declare function recommendedFlairs(textContent: string): Promise<string[]>;
When you could write out the function and only generate the data?
async function recommendedFlairs(textContent: string): Promise<string[]> {
  const OAIrequest = await someRequest(textContent);
  const flairs = formatResponse(OAIrequest);
  return flairs;
}
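For completeness, here's a sketch of what that hypothetical `formatResponse` helper could do, assuming the prompt asks the model to reply with one flair per line (the cap of 8 unique flairs mirrors the docstring's spec). The reply format is my assumption, not anything from the project.

```typescript
// Hypothetical helper: normalize the model's raw text reply into at most
// 8 unique flair strings. Assumes the prompt asked for one flair per
// line; leading list markers like "1." or "-" are stripped defensively.

function formatResponse(rawReply: string): string[] {
  const seen = new Set<string>();
  const flairs: string[] = [];
  for (const line of rawReply.split("\n")) {
    const flair = line.replace(/^[-*\d.\s]+/, "").trim();
    if (flair.length > 0 && !seen.has(flair.toLowerCase())) {
      seen.add(flair.toLowerCase());
      flairs.push(flair);
    }
    if (flairs.length === 8) break; // docstring says up to 8 unique flairs
  }
  return flairs;
}
```

This is exactly the kind of output-massaging boilerplate the library seems to be abstracting away.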
In writing all this out I think I figured it out. You're abstracting away a lot of the headaches that come with trying to get the correct outputs out of GPT?