Petdogdavid1 t1_jef60xl wrote

If it's able to reason, at some point it will come up with a question of its own, and if humans don't have the answer, it will look elsewhere. Trial and error is still the best way for humans to learn. If AI can start to hypothesize about the material world and run real experiments, it will begin collecting data we never had, and how will we guide it then? Simulating human speech is a neat and impressive thing; being genuinely curious, though, would be monumental, and if you give it hands, will that spell our doom? I'm curious: once it's trained and in use, if you allowed it to learn from new data inputs, would it always refer back to the training set as its guiding principle, or would it adjust its ethics to match the new inputs?

2

Petdogdavid1 t1_jeeb6yy wrote

Translators have been unnecessary for a while now. I manage a platform at a company; if the vendor decided to implement AI in their tool tomorrow, every one of their clients would no longer need such a position. It could happen with what is currently available in ChatGPT.

1