The ELIZA effect refers to our tendency to attribute human-like qualities, such as emotions, understanding, and consciousness, to computer programs or AI systems.
The term comes from ELIZA, a program written by Joseph Weizenbaum at MIT in the mid-1960s that simulated conversation using simple pattern-matching techniques. Users often felt that ELIZA truly understood them, even though it only echoed fragments of their own input back inside pre-programmed phrases.
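To make the mechanism concrete, here is a minimal sketch of ELIZA-style pattern matching in Python. The rules and the respond function are illustrative stand-ins, not Weizenbaum's original DOCTOR script (which was more elaborate, including pronoun reversal such as "my" → "your"):

```python
import re

# A few hypothetical (pattern, template) rules in the ELIZA style.
# Each pattern captures part of the user's input so it can be
# reflected back inside a canned phrase.
RULES = [
    (re.compile(r"i need (.+)", re.I), "Why do you need {0}?"),
    (re.compile(r"i am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.+)", re.I), "Tell me more about your {0}."),
]
DEFAULT = "Please go on."  # fallback when nothing matches

def respond(text: str) -> str:
    """Return the first matching canned response, echoing the captured text."""
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(match.group(1).rstrip("."))
    return DEFAULT

print(respond("I am feeling anxious about work"))
# -> How long have you been feeling anxious about work?
```

No rule here models meaning; the program only rearranges the user's words. Yet the echoed output reads as attentive, which is exactly what invites the over-attribution described below.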
This propensity to anthropomorphise machines, ascribing to them mental states and abilities they do not possess, can lead to over-attribution: people mistakenly assume that AI systems hold beliefs, form intentions, or deeply comprehend the topics they discuss. The risks include misinformation, unhealthy emotional attachment, and harmful applications such as catfishing and fraud. The persistence of the ELIZA effect in the era of AI chatbots such as GPT-4 and Bard underscores the importance of educating people about the limitations of AI, developing tools to detect AI-generated content, and crafting policy that regulates AI to prevent misuse and protect users.