KDamage

KDamage t1_j5ya09t wrote

While I do agree, if we shift the role from getting results to training the model, the person engineering the right prompt to get the right result is simply called an annotator (which is how AI models are currently refined).

So I'd say it would not be a short-lived job, just a multi-purpose one at first, slowly shifting toward model training over time, just like a lot of data scientists have slowly shifted from model research to data cleaning. (the above is indeed a personal opinion, not a prediction)

More concisely, I think any job that relates to training/refining AIs, whether by annotating them in depth or by using them extensively, is a job of the future.

Example: an image-generation AI service hiring an expert in Romanticism-era art history to build high-quality skills in that domain, with the right prompts and corrections, in the shortest time. An AI teacher, to put it simply.

Example 2: a very highly ranked competitive FPS player hired to play extensively with an AI, which would then ship as a co-op bot in an upcoming FPS.

The choice of teacher could even become part of the company branding: Company A capitalizing on bots or NPCs "inspired by the famous ProGamer69's style", Company B on "the even more famous Crusher69", etc.

Right now we are in the infancy of AI creation, but I think we'll soon enter the identity-crafting era, with several products derived from the same source AI model, each built on different pre-trained sub-expertises. This will be very interesting.
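To give a rough idea of what "several products derived from the same source model" could look like in practice, here's a minimal sketch using LoRA-style adapters (the model name, config values, and training data are invented; this is just one plausible approach, not how any actual product is built):

```python
# Hypothetical sketch: several "branded" products derived from one shared base
# model, each fine-tuned on a different expert-curated dataset (e.g. an art
# historian's corrections, a pro gamer's play sessions).
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Shared source model (name is made up for illustration).
base = AutoModelForCausalLM.from_pretrained("some-org/base-model")

# One lightweight LoRA adapter per sub-expertise: the base weights stay shared
# while each product ships its own "identity".
adapter_cfg = LoraConfig(r=16, lora_alpha=32, target_modules=["q_proj", "v_proj"])
romanticism_expert = get_peft_model(base, adapter_cfg)
# romanticism_expert would then be trained on the art historian's curated
# prompt/correction pairs, and a sibling adapter on the pro gamer's data.
```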

3

KDamage t1_j5qha1p wrote

I see what you mean, which is true for the dataset. But are we sure OpenAI has not incorporated some sort of auto-annotator based on user interaction? Something like Cleverbot, which grew its dataset from user-to-bot conversations? Modern chatbots all do this, which is what fed my assumption about ChatGPT. There is actually room for two models: one for the knowledge base, which has stopped training, and a potential second one for the interaction, which keeps growing.
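Just to make the speculation concrete, here's a toy sketch of what such an auto-annotation loop could look like: every rated user-bot exchange is appended to a log that could later feed the "interaction" model (pure guesswork about any real system; the function and file names are made up):

```python
# Toy sketch of the "auto-annotator" idea: log each user-bot exchange with an
# explicit feedback signal, so conversations can later become training data
# for a second, interaction-focused model.
import json
import time

def log_exchange(user_msg: str, bot_reply: str, feedback: int,
                 path: str = "interactions.jsonl") -> None:
    """Append one rated exchange (feedback: +1 / 0 / -1) to a training log."""
    record = {"ts": time.time(), "prompt": user_msg,
              "reply": bot_reply, "rating": feedback}
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")

# Later, highly rated pairs could be filtered out as fine-tuning data for the
# "interaction" model, while the frozen knowledge model stays untouched.
```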

−1

KDamage t1_j5plf7r wrote

While I get your point, Artificial Intelligence doesn't mean perfect (or humanly equal) intelligence; it just means a relatively independent, artificially created form of intelligence, as in being able to decide its own direction, for whatever it is able to produce or output, by itself. William Gibson, for example, likes to call the current internet a kind of artificial intelligence, as it has an inertia of its own. Which is very different from the classic sci-fi narrative.

On top of that, it is also the ability to learn by itself (Machine Learning should be the real name rather than AI, since it is based on the tools, or algorithms, it has been given).

Around that concept there are indeed varying degrees of autonomy, with the (abstract) tipping point being the singularity. ChatGPT, DALL-E, etc. are, technically, growing organically, but for now their models are just in their infancy compared to what they'll become with time.

4

KDamage t1_ix3euni wrote

I completely agree. Everything has already been written in novels, some of them not romanticized at all but simple calculations and predictions of societal evolution, and I've yet to see one that did not depict exactly what you mentioned. Some people will be aware of the dangers you mentioned, but I'm pretty sure a lot will not, or will prefer to ignore them because of "better pay". Just like some people today prefer to ignore burnout syndrome.

1

KDamage t1_ix3da1j wrote

We are slowly turning into a transhumanist society. We're still at the infant step where tech has not yet merged with flesh, but smartphones and computers already make up a huge part of our daily routines and decisions. To some extent, I've long been imagining a scenario where the tech-flesh barrier breaks, like a lot of speculative fiction writers did in the past.

And while my own conclusion would be to stay on the "non-augmented" side, I have a hard time imagining a world where a majority of people would not go transhumanist.

A simple example: a position opens for a prestigious, very well paid job requiring a neural implant for extended knowledge and efficiency, something where the non-augmented clearly could not compete. Would there be absolutely zero candidates? I think not. Then the tech becomes more and more mainstream, with more and more people adopting it as they see it only gives them an advantage over others, and better wages.

I'm pretty confident there will come a point in time, and not that far in the future, when the well-known scenario from speculative fiction writers depicting a societal conflict between the augmented and the non-augmented becomes reality.

A brain scan to monitor and adjust a person for better mood and better efficiency is not that different from the above. In the article the narrative sounds horrifying because it's an employer's decision, but what about the moment when it's the candidate's decision?

1

KDamage t1_iw27q3i wrote

The "robot as an avatar" part is something really interesting I've never thought about as yes, like most of us, the first thing that comes to mind is personal care.

But if we think about it, a true personal assistant would have a lot of wonderful uses: you would "send" your insanely beautiful-looking android (where beautiful can be anything from gorgeous to super stylish), packed with a refined communication module, as a representative for all basic social tasks.

Searching for a better job, negotiating a better price on furniture or an acquisition, meeting real estate agencies, assisting a team lead at work, acting as a top host for your receptions at home, etc. I can already see a lot of leverage for people who either don't have the time or lack the right skills for all of these.

Basically it would be the equivalent of having a multi-field personal agent combined with high representative value. I'm not sure I would be the target audience, but there's indeed a booming market waiting to happen. And of course it would induce some weird situations, like android-to-android negotiations, but whatever. Some people could clearly benefit a lot from it. Social exchange privileges wouldn't be reserved for attractive or highly skilled people anymore; they would be available to the masses.

Strange but interesting days ahead indeed. Also yes, I've played Detroit: Become Human and really loved the questions it raised.

1

KDamage t1_is5iyi8 wrote

It may indeed seem frightening at first, but a point I rarely see in these debates is that AIs are "just" replicas of real human behaviour. (the following is just a thought, not a prediction)

  1. If they're replicas, AIs will never fully match human expectations as long as humans keep evolving, which is a constant. So they need to be constantly retrained by humans.
  2. What does that mean for this debate? If AIs are expected to be better than humans, they need to be perfect, all the time. Which points back to point 1, and then to the next point:
  3. AIs will always need specialized humans to train them in any field where they aim to "replace" said people. So today's human jobs wouldn't be killed; they could simply evolve into AI-trainer jobs (in data science the role is called annotator). Now for the final point, let's focus on the article's topic, delivery:
  4. You can't train a delivery AI without delivering yourself as a human. Well, you can, but it's suboptimal, and more importantly it's dangerous, as the final AI model would rely on artificial inputs (see the sketch at the end of this comment).

Following that reasoning, and it's just a subjective guess at this point, delivery jobs will continue to include humans. The difference is that the human will not take the driver's seat, but the passenger's (metaphorically).

That said, switching humans from an active role to a more passive, tutoring one can indeed be worrisome for certain fields. But that's a whole other debate.
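To make point 4 above a bit more concrete, here's a minimal behavioral-cloning sketch: a small policy network learns to imitate logged human deliveries. The observation size, action set, and data are all invented for illustration; real systems are far more involved.

```python
# Minimal behavioral cloning: map a sensor snapshot to the action the human
# driver actually took. The "dangerous artificial inputs" concern from point 4
# is exactly what happens if obs/action pairs don't come from real deliveries.
import torch
import torch.nn as nn

class DeliveryPolicy(nn.Module):
    def __init__(self, obs_dim: int = 64, n_actions: int = 5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 128), nn.ReLU(),
            nn.Linear(128, n_actions),  # e.g. steer left/right, brake, accelerate, stop
        )

    def forward(self, obs):
        return self.net(obs)

policy = DeliveryPolicy()
optim = torch.optim.Adam(policy.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Stand-ins for real logged human demonstrations.
obs_batch = torch.randn(32, 64)             # sensor snapshots
human_actions = torch.randint(0, 5, (32,))  # what the human driver actually did

optim.zero_grad()
loss = loss_fn(policy(obs_batch), human_actions)
loss.backward()
optim.step()
```

In other words, the human stays in the loop as the source of the supervision signal, which is the "passenger seat" role described above.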

2

KDamage t1_ir21mul wrote

Curation based on insane levels of personalization, precisely (think brain-signal levels of personalization). Such levels of data collection can, and will, be used badly by some companies, but there will be so many types of personalization commercially available (as many as there will be types of AIs) that I think we will have the choice between toxic ones and extremely satisfying ones.

2

KDamage t1_ir1wj2i wrote

Completely agree. It would take a lot of posts for me to explain in detail, but I think AI, once it is far more widely used in our personal digital recommendations (curating and filtering based on each individual's sensibilities), will fix this. We're still in the industrial age of massive, non-curated content. Human moderation no longer fits.

1

KDamage t1_ir0oahy wrote

That's right, but two details are missing:

  • TikTok, like every social platform led by an algorithm, starts your suggestion feed with a "user profile estimation", since at first it doesn't know you. I ran the experiment, and for 40 minutes straight on my new (and first) TikTok account I couldn't stop receiving videos about fights, guns, and the military, while I'm absolutely not interested in these topics and never really watched videos about them in other apps. It simply estimated that I should be interested in them because of other people of my age, ethnicity, and region (see the sketch after this list). The question is: why focus on such violent topics? Not all of my demographic is focused on them; there is an infinite number of other potential centers of interest.

  • Re: "because you're watching it". A classic human behaviour is to stare into the abyss instead of looking elsewhere. That's even how most user-engagement methods keep attention, even when the topic is undesired (news, mainly). The second question is: it's a very well-known syndrome, so why do most social media platforms keep capitalizing on it when there are tons of studies correlating social media use with depression? They should be held responsible for what they deliver, not their users' more primitive brain regions. It's like telling someone suffering from depression, "Well, just stop being depressed."
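Here's a toy sketch of the cold-start fallback described in the first point. All priors and numbers are invented, and no real recommender is this simple; the point is just that whatever dominates the demographic prior is what a brand-new account gets flooded with:

```python
# Toy cold-start recommender: with no watch history, fall back to a
# demographic prior. Everything below is fabricated for illustration.
from collections import Counter

DEMOGRAPHIC_PRIORS = {
    # (age_bracket, region) -> topic frequencies observed among "similar" users
    ("18-25", "US"): Counter({"fights": 40, "guns": 25, "cooking": 20, "music": 15}),
}

def cold_start_topics(age_bracket: str, region: str, k: int = 3):
    """Return the k topics used to seed a brand-new user's feed."""
    history = Counter()  # new account: empty watch history
    prior = DEMOGRAPHIC_PRIORS.get((age_bracket, region), Counter())
    # With nothing personal to go on, the demographic prior decides everything.
    return [topic for topic, _ in (history + prior).most_common(k)]

print(cold_start_topics("18-25", "US"))  # ['fights', 'guns', 'cooking']
```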

6