relinquishing llms
I am giving it up. I have run the experiment of using gen ai, and my conclusion is that it is an existential threat to creativity and critical thinking.
Feeding real-time information through RAG still doesn't fix the disinformation problem, so I am going back to using Google to get information about topics. The one real benefit, which still doesn't outweigh the cons, is that llms are good at matching "concepts" across topics.
llms can’t do novel design
The killing of creativity comes down to the fact that llms can't do new things that people haven't already done. It's an age-old critique, but it still holds.
Yes, they can "design" if you count selecting from previous patterns of design, whether in product, software, or systems design. The point is that llms can't imagine new designs; they can't do the work of researching, they can't do the work of theorizing, and they surely can't make the thing real.
These companies will soon run out of "good code" to train their models on. Even worse, it is a well-understood phenomenon of any statistical model that when you feed its own synthetic output back in as training data, the model will "necessarily" become degenerate1. In the case of image generation models the output degrades to noise, and in the case of llms the model might regurgitate an entire training document word for word.
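To make the degeneration claim concrete, here is a minimal toy sketch of my own (not taken from any cited paper) showing what recursive training on synthetic data does to even the simplest statistical model, a Gaussian fit with numpy standing in for a generative model:

```python
import numpy as np

rng = np.random.default_rng(0)

# "real" data: a small sample from a standard normal distribution
data = rng.normal(loc=0.0, scale=1.0, size=50)

for generation in range(20):
    # "train" a toy generative model: fit a Gaussian by estimating mean and std
    mu, sigma = data.mean(), data.std()
    print(f"generation {generation:2d}: mu = {mu:+.3f}, sigma = {sigma:.3f}")
    # replace the dataset entirely with the model's own synthetic samples
    data = rng.normal(loc=mu, scale=sigma, size=50)

# over repeated generations the estimated sigma tends to shrink: the fitted
# distribution loses the tails of the original data and eventually collapses,
# which is the toy analogue of "model collapse"
```

The same feedback loop, scaled up, is what the degeneration results describe: each generation only sees what the previous model could produce, so the tails of the real distribution disappear.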
llms have no understanding of grammar2
There's something more important than simply recognizing sentences or words: knowing how to interpret those sentences, how to form new ones, and ultimately understanding their "semantic content".
In computers there is the ISA rule set, in linguistics we have parts of speech, and in formal logic there are rules of inference. This accounts for the fundamental difference between humans and llms, even more so than the "stochastic parrot" analogy. llms aren't like proof assistants such as Coq.
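As a tiny illustration of what "rules of inference" means here (my own example, written in Lean rather than Coq), modus ponens is a rule a proof checker can verify mechanically:

```lean
-- modus ponens as a rule of inference:
-- given a proof of P → Q and a proof of P, we may conclude Q
example (P Q : Prop) (h₁ : P → Q) (h₂ : P) : Q := h₁ h₂
```

A proof assistant only accepts the step because the rule licenses it; an llm has no rule it is bound to follow, only statistics over text that happened to follow rules.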