I like this comment on Slashdot in the above link:
LLMs don't have an understanding of anything. They can only regurgitate derivations of what they've been trained on and can't apply that to something new in the same ways that humans or even other animals can. The models are just so large that the illusion is impressive.
So true.