turnip_burrito t1_ja5y3hb wrote
Reply to comment by Nervous-Newt848 in Is multi-modal language model already AGI? by Ok-Variety-8135
I agree with all of this, but just to be over-pedantic about one point:
> Models can't speak or hear when they want to. It's just not part of their programming.
As you said, it's not part of their programming, at least in today's models. In general, though, it wouldn't be hard to build a model that decides at each timestep, based on both external stimuli and its internal hidden state, whether to speak, interrupt, or keep listening. At first glance such a thing actually looks trivial; see the sketch below.
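For what it's worth, here's roughly what I mean as a minimal PyTorch sketch. Everything here (the module name, the dimensions, the three-action space) is made up for illustration, not a real system from the thread:

```python
# Hypothetical sketch: a tiny controller that, at each timestep, looks at an
# embedding of external stimuli plus its own hidden state and decides whether
# to keep listening, start speaking, or interrupt. All names and sizes are
# illustrative assumptions.

import torch
import torch.nn as nn

LISTEN, SPEAK, INTERRUPT = 0, 1, 2

class TurnTakingController(nn.Module):
    def __init__(self, stimulus_dim=64, hidden_dim=128, num_actions=3):
        super().__init__()
        # Recurrent core: carries the "internal hidden state" across timesteps.
        self.rnn = nn.GRUCell(stimulus_dim, hidden_dim)
        # Policy head: maps the hidden state to action logits.
        self.head = nn.Linear(hidden_dim, num_actions)

    def forward(self, stimulus, hidden):
        hidden = self.rnn(stimulus, hidden)   # fuse external stimulus with memory
        logits = self.head(hidden)            # score listen / speak / interrupt
        return logits, hidden

# Usage: step through a stream of stimulus embeddings one timestep at a time.
controller = TurnTakingController()
hidden = torch.zeros(1, 128)
for t in range(10):
    stimulus = torch.randn(1, 64)             # stand-in for audio/text features
    logits, hidden = controller(stimulus, hidden)
    action = logits.argmax(dim=-1).item()
    if action == SPEAK:
        pass  # hand control to the language model's decoder here
```

In practice you'd train the head on turn-taking data rather than argmax raw logits, but the point stands: gating speech on stimuli plus internal state is a small module bolted onto the model, not a deep architectural change.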