Submitted by el_gee t3_y2xhe6 in Futurology
KDamage t1_is5iyi8 wrote
It may indeed seem frightening at first, but a point I rarely see in these debates is that AIs are "just" replications of real human behaviour. (The following is just a thought, not a prediction.)
- If they are replications, AIs will never fully match human expectations as long as humans keep evolving, which is a constant. So they need to be constantly retrained by humans.
- What does that mean for this debate? If AIs are expected to be better than humans, they need to be perfect, all the time. Which brings us back to point 1, and then to the next point:
- AIs will always need specialized humans to train them in any field they are meant to "replace". So today's human jobs wouldn't be killed; they could simply evolve into AI-trainer jobs (in data science the role is called annotator; there's a rough sketch of the idea at the end of this comment). Now for the final point, let's focus on the article's topic, delivery:
- You can't train a delivery AI without delivering yourself as a human. Well, you can, but it's suboptimal, and more importantly it's risky, since the final model would rely on artificial inputs.
Following that reasoning, and it's just a subjective guess at this point, delivery jobs would still involve humans. The difference is that the human would take not the driver's seat but the passenger's (metaphorically).
That said, shifting human control to a more passive, tutoring role can indeed be worrisome for certain fields. But that's a whole other debate.
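To make the annotator point a bit more concrete, here is a very rough human-in-the-loop sketch (Python; every name like predict_route or retrain is made up for illustration, a real pipeline would plug in an actual model and labeling tool):

```python
# Sketch of the "humans become AI trainers" idea: a driver reviews and corrects
# the model's plan, and the corrected examples are fed back as training data.
# Every function here is a hypothetical placeholder, not a real API.

labeled_examples = []

def predict_route(model, delivery):
    # Placeholder "model": just guesses a naive stop order.
    return {"stop_order": sorted(delivery["stops"])}

def human_review(prediction, delivery):
    # In practice this is the driver/annotator correcting the plan;
    # here we fake a correction so the example runs end to end.
    corrected = dict(prediction)
    corrected["stop_order"] = list(reversed(prediction["stop_order"]))
    return corrected

def retrain(model, examples):
    # Stand-in for an actual training step on the human-corrected data.
    print(f"retraining on {len(examples)} human-corrected examples")
    return model

model = object()  # placeholder for a real model
deliveries = [{"id": i, "stops": [3, 1, 2]} for i in range(3)]

for delivery in deliveries:
    guess = predict_route(model, delivery)
    corrected = human_review(guess, delivery)  # the human stays in the loop
    labeled_examples.append((delivery, corrected))

model = retrain(model, labeled_examples)
```

The only point of the sketch is that the human never leaves the loop: they move from producing the behaviour to correcting it.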
Dismal_Photo_1372 t1_is5kz58 wrote
An AI can learn to do things without a human trainer. It's called a genetic algorithm.
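For anyone curious, a genetic algorithm in its most basic form looks something like this (Python, toy problem of evolving a bit string toward all ones; the "trainer" here is only the hand-written fitness function, not a human labeling examples):

```python
# Toy genetic algorithm: evolve a random bit string toward all ones.
# No human labels anywhere in the loop, just selection, crossover, mutation.
import random

TARGET_LEN = 20
POP_SIZE = 50
MUTATION_RATE = 0.02
GENERATIONS = 100

def fitness(individual):
    # Fitness = number of 1s in the bit string.
    return sum(individual)

def crossover(a, b):
    # Single-point crossover between two parents.
    point = random.randint(1, TARGET_LEN - 1)
    return a[:point] + b[point:]

def mutate(individual):
    # Flip each bit with a small probability.
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit for bit in individual]

population = [[random.randint(0, 1) for _ in range(TARGET_LEN)] for _ in range(POP_SIZE)]

for gen in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == TARGET_LEN:
        break
    parents = population[: POP_SIZE // 2]  # keep the fitter half
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children

best = max(population, key=fitness)
print(f"best after {gen + 1} generations:", best, "fitness:", fitness(best))
```

Whether that hand-written fitness function counts as "no human trainer" is of course part of the debate.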
KDamage t1_is5lrny wrote
True, and that can be sufficient for certain fields, but not for others :) Artificial (or synthetic) data is recognized in the community as having some problems.