_atswi_ OP t1_j9ukzlk wrote

That's a good point

What sounds like an open problem is how to get these LLMs to "quantify" that themselves, the same way humans do. It's also interesting how that relates to the broader question of sentience and consciousness.
