It's seen as a big deal because the responses imply a level of reasoning far greater than what we expected. Operating in the language domain seems to let you tap into that reasoning in a way that currently can't be done with other methods. The older systems you describe can't do this and were therefore less interesting, although it's worth noting that plenty of current users still prefer to operate their devices through assistants like Siri, Alexa or Google Assistant.
Frankly, I feel the same way about diffusion models: they're nothing more than "whoa, cool!" to me. After doing NLP research in graduate school and then doing NLP in industry, I've felt an increasingly huge disconnect between academic research and the real world.
farmingvillein t1_j17n5n7 wrote
Was this written by an LLM?