"Large language models do not see, hear, or experience anything. They do not check claims against an internal model of the world. They operate by estimating the most likely continuation of a sequence, given the information and constraints available at the time. When that information is incomplete, ambiguous, or unconstrained, the system fills the gap probabilistically. That process can produce statements that look like fabrication, but the mechanism is neither delusion nor deception."
While I've understood that the term "hallucination" was bogus when applied to AI from the moment ChatGPT itself told me it had no internal subjective experience, I never understood the mechanism behind the phenomenon the term describes... so thank you 😀
You brought up a lot of good points in this piece... I would say the tendency to anthropomorphize external objects on the one hand, and the need to view something external as a source of authority (whether it's a teacher, a scientist, a journalist, a sacred text, or an AI) on the other, are both deep-seated human instincts we act on without even consciously realising we are doing so.
So it's possible that someone who does (or should) know better can lapse into taking what an LLM says as gospel, experience it as being deceitful, and take it personally when drifts occur. 😅
"Large language models do not see, hear, or experience anything. They do not check claims against an internal model of the world. They operate by estimating the most likely continuation of a sequence, given the information and constraints available at the time. When that information is incomplete, ambiguous, or unconstrained, the system fills the gap probabilistically. That process can produce statements that look like fabrication, but the mechanism is neither delusion nor deception."
While I've understood from the moment that chatgpt itself told me it had no internal subjective experience that the term "hallucination" when used discussing AI was bogus, I never understood the mechanism behind the phenomena the term described....so thank you 😀
You brought up a lot of good points in this piece.....I would say the tendency to anthropomorphize external objects on the one hand and the need to view something external as a source of authority (regardless of its a teacher, scientist, journalist, a sacred text or an AI) on the other, are both deep seated human instincts we don't even consciously realising we are doing.
So it's possible that someone who does/should know better can lapse into taking things a LLM says as gospel and experience it as being deceitful and take it personally when drifts occur.😅
Brilliant piece, much to think about 😁👍
Thanks for that, Buck 👍