The early Wittgenstein wrote in ‘Tractatus Logico-Philosophicus’, “Whereof one cannot speak, thereof one must be silent.” By the end of his career, Wittgenstein had come to realise that humans could, and did, play far more “language-games” than just communicating facts.

The remarkable thing is not that ChatGPT writes sentences that are false; it is that it writes sentences that are true. Of course, this is just a happy accident. LLMs have no idea whether a sentence they write is true or false. They have no judgement. They put words in a grammatically satisfying order based on a complex soup of linear algebra. Sometimes the state of affairs that corresponds to those words obtains and the sentence is true. Sometimes the state of affairs that corresponds to those words does not obtain and the sentence is false. A lot of the time a sentence does not represent a state of affairs at all and is neither true nor false.
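To see why truth never enters the picture, consider a toy sketch of the token-selection step. The two-word vocabulary and the scores below are invented for illustration, and no real model works from a hand-written table like this, but the shape of the step is the same: scores in, probabilities out, one word sampled. Notice that nothing in it consults the world.

```python
import math
import random

# Hypothetical scores ("logits") a model might assign to candidate next words
# after the prompt "The sentence 'Paris is the capital of France' is ..."
logits = {"true": 2.1, "false": 1.9, "banana": -3.0}

def softmax(scores):
    """Turn raw scores into a probability distribution."""
    exps = {word: math.exp(s) for word, s in scores.items()}
    total = sum(exps.values())
    return {word: e / total for word, e in exps.items()}

def sample_next(scores):
    """Pick one word, weighted by probability. No truth check anywhere."""
    probs = softmax(scores)
    words = list(probs)
    return random.choices(words, weights=[probs[w] for w in words])[0]

print("The sentence 'Paris is the capital of France' is", sample_next(logits))
```

Run it a few times and it will usually say "true", occasionally "false". Whether the output is true is, as above, a happy accident of the weights.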

LLMs are a great tool for producing grammatically correct word sequences very fast indeed. There are lots of use cases for this that will transform society. Most of these use cases will combine the power of LLMs with human judgement and checks.
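What that combination looks like in software is simple to sketch. The snippet below is a minimal, assumed shape of the pattern, not any real product's design; `generate_draft` is a hypothetical stand-in for whatever LLM call you prefer. The machine supplies the speed, the human supplies the judgement.

```python
def generate_draft(prompt: str) -> str:
    """Hypothetical stand-in for an LLM call; returns a fast, unverified draft."""
    return f"Draft answer to: {prompt}"

def reviewed_answer(prompt: str) -> str:
    draft = generate_draft(prompt)
    print("LLM draft:\n", draft)
    verdict = input("Accept draft? [y/n] ")   # the human supplies the judgement
    if verdict.strip().lower() == "y":
        return draft
    return input("Enter corrected answer: ")  # the human overrides the machine

if __name__ == "__main__":
    print("Final:", reviewed_answer("Summarise this contract clause."))
```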

LLMs are a terrible tool for consistently producing truth. Criticising them for this is like criticising a lawn mower for not being able to compose music.

The future of large language models is software that combines the speed and sheer productivity of the machine with the innovation and judgement of a human.