AI Hallucinations
Basics
- A Problem Sometimes Present in Generative Artificial Intelligence (AI / Machine Learning) Tools, where they fabricate results that are not based in reality
- An intuitive analogy is Signal Noise, such as when a camera's ISO is cranked way up in a dark room: the amplification turns faint sensor noise into semi-rainbow noise pixels that were never in the scene (see the NumPy sketch after this list)
- The next “step up” from this would be something akin to Google's DeepDream project, which ran an Object Recognition network “in reverse”, nudging the input image to amplify whatever the network “saw” in it, producing some very “trippy” images (a sketch of that technique also follows this list)
- With Large Language Models this comparison starts to break down, but essentially think of that in reverse: the model always commits to a plausible-sounding next word, whether or not anything grounds it in reality (see the toy sampling sketch after this list)
- Another POTENTIAL issue (one that doesn't show up in all LLMs) stems from the Profit Motive of Companies such as Meta or Google. As with Social Media (which in a perfect world would be optimized for Ease of Use and respect for Users' Time, Privacy, Mental Health, etc., BUT, being run in a For-Profit Manner, instead prioritizes Ad Revenue and Engagement, often stoking divisiveness and addiction while selling personal data), these tools are run in a For-Profit Manner and so prioritize Engagement, and potentially, in the Future, further monetization, be it of User Data (for ads or as AI Training Data) or via Branded Suggestions, etc.
- It isn't all “Doom and Gloom”, though. If a FOSS model were trained using ONLY approved Public Domain / Creative Commons training data, with its algorithm HEAVILY scrutinized by Third Parties, and with users HEAVILY taking its results with a grain of salt plus thorough Fact Checking / Media Literacy, the picture could be different… but that is not the case in the current growing Technofeudalist / Oligarchic / Kleptocratic Status Quo
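
To make the ISO analogy concrete, here is a minimal NumPy sketch. All the numbers (scene brightness, noise level, gain) are made up purely for illustration: amplifying a dim scene amplifies the sensor's noise right along with it, producing bright “pixels” that correspond to nothing real.

```python
import numpy as np

rng = np.random.default_rng(0)

# A dim "scene": a faint gradient, values well below full brightness.
# (All values here are invented for illustration.)
scene = np.linspace(0.01, 0.05, 64)

# Sensor read noise is roughly constant no matter how dark the scene is.
read_noise = rng.normal(0.0, 0.02, size=scene.shape)

# Raising ISO amplifies signal AND noise together; in a dark scene the
# noise dominates, producing "detail" that was never in the scene.
iso_gain = 16.0
output = np.clip((scene + read_noise) * iso_gain, 0.0, 1.0)

signal_power = np.mean((scene * iso_gain) ** 2)
noise_power = np.mean((read_noise * iso_gain) ** 2)
print(f"signal-to-noise ratio: {signal_power / noise_power:.2f}")
```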
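The “recognition in reverse” trick behind DeepDream-style images is activation maximization: instead of updating the network, you update the *input* to strengthen whatever a layer responds to. A minimal PyTorch sketch of that idea, assuming torchvision is available; the choice of VGG16, the layer slice, the learning rate, and the step count are arbitrary illustration choices, not anything canonical:

```python
import torch
import torchvision.models as models

# Any pretrained classifier works; VGG16 is an arbitrary choice here.
model = models.vgg16(weights=models.VGG16_Weights.DEFAULT).eval()
for p in model.parameters():
    p.requires_grad_(False)

# Start from random noise and ascend the gradient of a chosen layer's
# activation with respect to the INPUT, i.e. run recognition "in reverse".
img = torch.rand(1, 3, 224, 224, requires_grad=True)
layer = model.features[:20]        # an arbitrary mid-level feature block
optimizer = torch.optim.Adam([img], lr=0.05)

for step in range(100):
    optimizer.zero_grad()
    activation = layer(img)
    loss = -activation.norm()      # maximize activation = minimize its negative
    loss.backward()
    optimizer.step()
    img.data.clamp_(0.0, 1.0)      # keep pixel values in a displayable range
```

After enough steps, `img` fills with the swirling dog-face/eye textures the network is primed to detect, even though nothing of the sort was ever in the input.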
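For the LLM case, a toy sketch of why a language model emits something fluent even when nothing supports any particular answer. The prompt, token list, and logits below are entirely invented; the point is only that sampling from a nearly flat next-token distribution still commits, confidently, to one answer:

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented next-token scores for the prompt "The capital of Atlantis is".
# The model has no grounded answer, so probability mass is spread thinly,
# but sampling still commits to SOME token and the output reads as fluent.
tokens = ["Poseidonis", "Atlantica", "unknown", "Paris", "Mu"]
logits = np.array([1.2, 1.1, 1.0, 0.9, 0.8])   # made-up, nearly uniform

def sample(logits, temperature=1.0):
    # Softmax with temperature: higher temperature flattens the distribution,
    # making low-evidence tokens even more likely to be emitted.
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs), probs

idx, probs = sample(logits, temperature=1.0)
print(f"model says: {tokens[idx]!r}  (p = {probs[idx]:.2f})")
```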
Internal Links
- Face Hallucination
  - A somewhat related phenomenon which has utility in Facial Recognition Algorithms (reconstructing/upscaling low-resolution faces), but which can also manifest as an AI Algorithm “seeing” faces in Noise / otherwise Non-Face Situations