What if AI companies are using our prompts to create low-resolution models of our entire identities?
Submitted by roiseeker t3_111ropw in Futurology
It struck me that there could be a dark side to the advancement of AI.
What if all the information that AI companies (like OpenAI with ChatGPT) collect through prompts - detailed information about our lives, needs, wants, passions, and so on - is used to train an AI and create a model for each customer, which is then sold to the highest bidder? This would be similar to what Facebook did with their customer information, but it would be much more intrusive because AI would create a kind of "parrot" version of the customer. This version would be able to answer questions and try to predict what the ACTUAL you would say.
Given that such a system's accuracy would only improve over time, this could get really scary, really fast. What do you think of my crazy theory? Am I totally mad, or is there a real possibility this might happen?
manicdee33 t1_j8gfb4x wrote
Well, it's actually useful to have sims/agents with more realistic personalities for things like modelling traffic flows or predicting crowd behaviour when seating or ingress/egress routes change.
Like, what if we were part of a simulation and each of us is really just a fragment of a personality of someone in the real world, and our purpose here is simply to figure out better strategies for surviving the heat death of the universe?