BenjaminJamesBush t1_j8c12it wrote
Reply to comment by bballerkt7 in [R] [N] Toolformer: Language Models Can Teach Themselves to Use Tools - paper by Meta AI Research by radi-cho
Technically this has always been true.
EducationalCicada t1_j8d5y9z wrote
Not if it's actually impossible.
BashsIash t1_j8djkk4 wrote
Can it be impossible? I'd assume not, since otherwise we couldn't be intelligent in the first place.
cd_1999 t1_j8fmlej wrote
Have you heard of Searle's Chinese Room?
Some people (sorry, I can't give you references off the top of my head) argue there's something special about the biological nervous system, so the material substrate isn't irrelevant. (Sure, you could reverse engineer the whole biological system, but that would probably take much longer.)
pyepyepie t1_j8dvci2 wrote
I would tell you my opinion if I knew what the definition of AGI was xD
urbanfoh t1_j8elywk wrote
Isn't it almost certainly possible, given the universal approximation theorem?
Assuming consciousness is a function of external variables, a large enough network with access to these variables should be able to approximate consciousness.
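For reference, a standard form of the theorem (Cybenko 1989; Hornik 1991) says, roughly: any continuous function f on a compact set K ⊆ R^n can be uniformly approximated by a single-hidden-layer network with a non-polynomial activation σ. A minimal sketch of the statement in LaTeX (the symbols f, K, σ here are the usual ones from the theorem, not anything from the comment above):

\forall \varepsilon > 0 \;\; \exists N,\ \alpha_i \in \mathbb{R},\ w_i \in \mathbb{R}^n,\ b_i \in \mathbb{R} \ \text{such that}
\sup_{x \in K} \left| f(x) - \sum_{i=1}^{N} \alpha_i \, \sigma\!\left(w_i^\top x + b_i\right) \right| < \varepsilon

Note the theorem is purely an existence result: it says nothing about how large N must be, whether training would find those weights, or whether consciousness really is a continuous function of external variables. Those are the extra assumptions the argument relies on.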