I don't think YOU understand how LLMs work. They do not just regurgitate "the truth as society sees it". They can return sentences that are not in their training corpus, by putting tokens in an order that was never seen when the corpus was read in. I have seen an LLM assure me of something that no one would have written on any website or in any document it has read, and when I questioned it, it confirmed that what it had said was not correct.

The fact that you talk about searching their corpuses again shows that you do not understand how LLMs work. If an LLM has read in the whole of a web page, that page is not stored in the LLM, and the LLM does not search that text. The LLM is basically a complex function giving the probability of the next token given the input so far. It uses the website to calculate those probabilities during training, but it does not store the page and does not search it.
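To make the point concrete, here is a toy sketch (nothing like a real LLM, and the tiny "weights" table is entirely made up): the "model" is just a function from context to a probability distribution over the next token. The training text is used to set the weights and then discarded, so there is nothing left to search, and sampling can chain tokens into sequences that never appeared verbatim anywhere.

```python
import random

# Hypothetical tiny bigram "weights", as if learned from some training text.
# The original text itself is gone; only these numbers remain.
weights = {
    "the": {"cat": 0.5, "dog": 0.3, "moon": 0.2},
    "cat": {"sat": 0.6, "ran": 0.4},
    "dog": {"ran": 0.7, "sat": 0.3},
    "sat": {"quietly": 1.0},
    "ran": {"quickly": 1.0},
}

def next_token_distribution(context):
    """Return P(next token | context). The 'model' is only this function."""
    last = context[-1]
    return weights.get(last, {})

def generate(start, steps, rng):
    """Sample tokens one at a time from the distribution, no text lookup."""
    tokens = [start]
    for _ in range(steps):
        dist = next_token_distribution(tokens)
        if not dist:
            break
        choices, probs = zip(*dist.items())
        tokens.append(rng.choices(choices, weights=probs)[0])
    return tokens

print(" ".join(generate("the", 3, random.Random(0))))
```

Note that the generator can happily produce a fluent-looking token sequence that no training document ever contained, which is exactly why an LLM can state something with confidence that nobody ever wrote down.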