It has already been twelve years since Manfred Spitzer’s book “Digitale Demenz” put forward the theory that increased use of digital media could lead to mental deficits.[1] The internet offers a wealth of information that we don’t necessarily need to memorize, since it is quickly and readily available online. More recent reporting disputes the notion that the internet causes our brains to deteriorate – quite the opposite. While using smartphones and other digital devices to store data doesn’t necessarily reduce our memory performance, the reliability of that external storage does influence how well we remember information.[2] Research even suggests that digital forgetting can free up capacity in the brain for new information.[3]
How one fills this freed-up space seems to be up to the individual. Nonetheless, I’ve observed that in private conversations, information shared without the aid of a smartphone is often met with suspicion: the listener quickly verifies the spoken words with a brief internet search. It feels a bit like being a contestant on a quiz show, waiting for the host to confirm that one’s answer is correct.
With the emergence of personal digital assistants such as the Rabbit R1 or the Humane Ai Pin[4] – which, unlike voice-controlled assistants like Siri or Alexa, are built on large language models – this habit of verifying information in personal conversations may well intensify. Evidently, however, these devices still struggle with hallucinations.[5]
AI hallucinations describe the phenomenon in which large language models or other generative AI tools perceive patterns or objects that are nonexistent or imperceptible to humans, producing outputs that are nonsensical or inaccurate. Such hallucinations can arise from various factors, including overfitting, bias in the training data, and high model complexity. They can have serious implications for real-world applications, delivering false results or contributing to the spread of misinformation, especially when they are rooted in biases.[6]
These hallucinations and the resulting spread of false information can have far more dangerous consequences than merely outsourcing knowledge from one’s own mind to digital storage. Perhaps, then, we should even consider it polite when a conversation partner checks statements against validated platforms to counter this digital hallucination.
1. Universitätsklinikum Ulm / Prof. Dr. med. Dr. phil. Manfred Spitzer
2. Spiegel / Der Mythos von der digitalen Demenz
3. SAGE Journals / Saving-Enhanced Memory: The Benefits of Saving on the Learning and Remembering of New Information
4. TechCrunch / Rabbit’s R1 vs Humane’s Ai Pin — which had the best launch?
5. YouTube / Marques Brownlee: Rabbit R1: Barely Reviewable
6. IBM / What are AI Hallucinations?