AI Hallucinations

Basics

  • A problem sometimes present in Generative AI (Machine Learning) tools, where they fabricate results that are not based in reality
  • An intuitive analogy is signal noise, such as when a camera's ISO is cranked way up in a dark room, producing semi-rainbow noise pixels (see the first sketch after this list)
    • The next “step up” from this would be something akin to Google's DeepDream, the early-modern AI project that ran an object-recognition network in reverse and produced some very “trippy” images (see the second sketch after this list)
    • With Large Language Models this comparison starts to break down, but essentially think of that process in reverse: plausible-sounding output gets generated without being checked against reality
  • Another potential issue, which doesn't show up in all LLMs, stems from the profit motive of companies such as Meta or Google. As with Social Media (which in a perfect world would be optimized for ease of use and for respecting users' time, privacy, and mental health, but, being run in a for-profit manner, instead prioritizes ad revenue and engagement, often stoking divisiveness and addiction while selling personal data), these tools may prioritize engagement, and potentially in the future profit, be it via user data (for ads or for AI training data) or via branded suggestions etc.
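A minimal sketch of the high-ISO analogy (assuming Python with NumPy; all numbers are made up for illustration): amplifying a weak signal amplifies the per-pixel noise along with it, so the output shows "detail" that was never in the scene.

 import numpy as np

 rng = np.random.default_rng(0)
 scene = np.full((4, 4), 0.02)             # a nearly dark, featureless scene
 read_noise = rng.normal(0, 0.01, (4, 4))  # per-pixel sensor noise
 gain = 50                                 # "cranking the ISO way up"

 image = gain * (scene + read_noise)       # amplification boosts the noise too
 print(image.round(2))                     # wild pixel-to-pixel variation, none of it real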
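And a minimal sketch of the "recognition in reverse" idea (assuming PyTorch and torchvision are installed; the layer index, step count, and step size are arbitrary illustrative choices, not DeepDream's actual settings): gradient ascent on the input pixels nudges an image toward whatever a chosen layer of a frozen classifier already "sees" in it, exaggerating patterns that were never really there.

 import torch
 import torchvision.models as models

 # Frozen pretrained classifier; we optimize the *input*, not the weights.
 model = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features.eval()
 for p in model.parameters():
     p.requires_grad_(False)

 img = torch.rand(1, 3, 224, 224, requires_grad=True)  # start from noise

 for _ in range(50):
     act = img
     for i, layer in enumerate(model):
         act = layer(act)
         if i == 20:        # stop at an arbitrary mid-level layer
             break
     act.norm().backward()  # amplify whatever this layer responds to
     with torch.no_grad():
         img += 0.05 * img.grad / (img.grad.abs().mean() + 1e-8)
         img.grad.zero_()
         img.clamp_(0, 1)
 # img now contains exaggerated, "trippy" patterns the network dreamed up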

Internal Links

External Links