icedrift

icedrift t1_je9i0wk wrote

What I mean is, why generate the function when only the data needs to be generated? Let's say I need a function that takes the text content of a post and returns an array of recommended flairs for the user to click. Why do this

/**
* This function takes a passage of text, and recommends up to 8
* unique flairs for a user to select. Flairs can be thought of as labels
* that categorize the type of post.
*
* @param textContent - the text content of a user's post
*
* @returns an array of flairs represented as strings
*
* @imaginary
*/

declare function recommendedFlairs(textContent: string): Promise<string[]>;

When you could write out the function and only generate the data?

async function recommendedFlairs(textContent: string): Promise<string[]> {
  const OAIrequest = await someRequest(textContent);
  const flairs = formatResponse(OAIrequest);
  return flairs;
}
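
For example, someRequest and formatResponse could be something like this (just a sketch of what I mean; the model, prompt, and response handling are guesses on my part, and you'd want real error handling):

// Rough sketch of the "hand-written function, generated data" approach.
// Assumes Node 18+ (global fetch) and an OPENAI_API_KEY env var.
const OPENAI_URL = "https://api.openai.com/v1/chat/completions";

async function someRequest(textContent: string): Promise<string> {
  const res = await fetch(OPENAI_URL, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({
      model: "gpt-3.5-turbo",
      temperature: 0,
      messages: [
        {
          role: "user",
          content:
            "Recommend up to 8 unique flairs (short category labels) for the " +
            "following post. Respond with a JSON array of strings only.\n\n" +
            textContent,
        },
      ],
    }),
  });
  const data = await res.json();
  return data.choices[0].message.content;
}

function formatResponse(raw: string): string[] {
  // This is where the headaches live: the model doesn't always return
  // clean JSON, so parse defensively and cap the list at 8 flairs.
  try {
    const parsed = JSON.parse(raw);
    return Array.isArray(parsed) ? parsed.map(String).slice(0, 8) : [];
  } catch {
    return [];
  }
}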

In writing all this out I think I figured it out. You're abstracting away a lot of the headaches that come with trying to get the correct outputs out of GPT?

2

icedrift t1_je8zqd2 wrote

This is really cool! Maybe I'm lacking creativity, but why bother generating imaginary functions and introducing the risk that they aren't deterministic when you could just hit OpenAI's API for the data? For example, in your docs you present a feature for recommending column names for a given table. Why is the whole function generated? Wouldn't it be more reliable to write out the function and use OAI's API to get just the recommended column names?

2

icedrift t1_j9uwkrx wrote

I agree with all of this, but it's already been done. Social media platforms already use engagement-driven algorithms that instrumentally arrive at recommending reactive content.

Cambridge Analytica also famously preyed on user demographics to feed carefully tailored propaganda to swing states in the 2016 election.

3

icedrift t1_j9s5640 wrote

What freaks me out the most are the social ramifications of AIs that pass as humans to the majority of people. We're still figuring out how to interact healthily with social media, and soon we're going to be interacting with entirely artificial content that we'll anthropomorphize and attribute to other humans. In the US we're dealing with a crisis of trust and authenticity; I can't imagine generative text models are going to help with that.

78

icedrift t1_j535agx wrote

He's not wrong... In a 2017 survey distributed among AI veterans, only 50% thought a true AGI would arrive before 2050: https://research.aimultiple.com/artificial-general-intelligence-singularity-timing/

I'd be interested in a more recent poll but this was the most up to date that I could find.

EDIT: Found this from last year https://www.lesswrong.com/posts/H6hMugfY3tDQGfqYL/what-do-ml-researchers-think-about-ai-in-2022

Looks like predictions haven't changed all that much, but there's still a wide range. Nobody really knows, that's for certain.

10

icedrift t1_j534dwy wrote

Does Metaculus only poll people in the field and verify credentials, or can anybody submit an estimate? If it's the latter, why put any stock in it? AI seems like one of those things that attracts a lot of fanatics who don't know what they're talking about.

Polls of industry veterans tend to hover around a 40% chance of AGI by 2035.

4

icedrift t1_j3u9rzf wrote

This guy's whole channel is really good https://www.youtube.com/@EdanMeyer/videos

It's less about the singularity and more about machine learning in general, but unlike a lot of the sensationalized garbage you'll find on YouTube, it's very educational.

If you're looking for something that's easier to digest, Robert Miles is really good at breaking down AI alignment into funny, entertaining videos https://www.youtube.com/@RobertMilesAI

3

icedrift t1_j3qyl1w wrote

I gotchu. The first thing you need to do is learn Python. You don't need to be a master by any means, but you should understand variables, expressions, functions, classes, packages/dependencies, file systems, and basic algebra. Run through this amazing book and you'll know plenty to get into the ML side of things.

Once you know a bit of Python, complete the course Practical Machine Learning for Coders. It's an extremely highly regarded, modern crash course in machine learning that's bringing a lot of new people into the industry. In the very first lesson you'll build an image classifier that wouldn't have even been possible 5 years ago.

As you go deeper and deeper, math becomes more important, but CS isn't really necessary.

5