AI Hallucinations


Basics

  • A problem sometimes present in Generative Artificial Intelligence (AI/Machine Learning) tools, where they fabricate results that are not based in reality
  • An intuitive analogy would be Signal Noise, such as when the ISO (Cameras) is cranked way up in a dark room, leading to semi-rainbow noise pixels
    • The next “step up” from this would be something akin to that early-modern AI project (Google’s DeepDream) where an Object Recognition algorithm was run in reverse, producing some very “trippy” images; a minimal sketch of that technique follows this list
    • With Large Language Models this comparison starts to break down, but essentially think of that same process in reverse
  • Another POTENTIAL issue, which doesn’t show up in all LLMs, stems from the Profit Motive of companies such as Meta or Google. As with Social Media (which in a perfect world would be optimized for Ease of Use and for respecting users’ Time, Privacy, Mental Health, etc., but due to being run in a for-profit manner instead prioritizes Ad Revenue and Engagement, often stoking divisiveness and addiction while selling personal data), these tools may prioritize Engagement, and potentially, in the future, Profit (be it via User Data (for ads or for AI Training Data), Branded Suggestions, etc.)
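
A minimal sketch of the “recognition run in reverse” idea mentioned above (activation maximization, the technique behind DeepDream). It assumes PyTorch and torchvision are installed; the model, layer slice, step count, and learning rate are illustrative choices, not the original DeepDream configuration:

```python
# Minimal sketch: "object recognition run in reverse" (activation
# maximization, the technique behind DeepDream). Assumes PyTorch +
# torchvision; the model/layer/step choices here are illustrative.
import torch
import torchvision.models as models

model = models.vgg16(weights=models.VGG16_Weights.DEFAULT).eval()
layer = model.features[:20]  # an intermediate conv block; choice is arbitrary

# Start from random noise and ascend the gradient of the layer's activations,
# so the image drifts toward patterns the network "wants" to see.
img = torch.rand(1, 3, 224, 224, requires_grad=True)
opt = torch.optim.Adam([img], lr=0.05)

for _ in range(100):
    opt.zero_grad()
    act = layer(img)
    loss = -act.norm()    # maximizing activation = minimizing its negative
    loss.backward()
    opt.step()
    with torch.no_grad():
        img.clamp_(0, 1)  # keep pixel values in a displayable range
```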

Situations/Examples

In Images

  • Faces on Non-Faces (see the detector sketch after this list)
  • Errors in Object Recognition
    • For example the “Dog, Pig, Loaf of Bread?” error that was a plot point in the movie “The Mitchells vs. the Machines”
  • Trippy Spirals etc in Image Recognition Algorithms run in reverse
  • The common error of People in AI Generated Images having too many fingers etc
  • Errors in Symmetry/Context
    • Many AI Generated Images of Complex Machines can even be photorealistic, but due to a misunderstanding of the context of their design, may not be FUNCTIONAL or even logical in layout
      • Another example of this would be AI Generated Images of Drumsets
        • Due to all of the pipes/tubing inherent in their structure, this gives image generation models plenty of opportunities to “cross the streams” or do things such as “Object Clipping” etc
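
A minimal sketch of the “Faces on Non-Faces” failure mode above: running a classical face detector over pure random noise and counting false positives. It assumes opencv-python and numpy are installed; the bundled Haar cascade and its parameters are illustrative, and depending on the run the detector may flag anywhere from zero to a handful of “faces”:

```python
# Minimal sketch of "Faces on Non-Faces": run a classical face detector
# over pure random noise and count false positives. Assumes opencv-python
# and numpy; the Haar cascade shipped with OpenCV is used for illustration.
import cv2
import numpy as np

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

hits = 0
for _ in range(50):
    # A grayscale frame of uniform random noise: there is nothing to see here.
    noise = np.random.randint(0, 256, (256, 256), dtype=np.uint8)
    faces = detector.detectMultiScale(noise, scaleFactor=1.1, minNeighbors=3)
    hits += len(faces)

print(f"'faces' found in 50 frames of pure noise: {hits}")
```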

In Writing/“Research”

  • LLMs have been known to generate Fake Sources, Locations, or even Nonsensical Justifications (a crude link-checking sketch follows this list)
    • Granted, this varies model by model, and can be mitigated with Human in the Loop integration AS WELL AS the previously mentioned (largely hypothetical) “Ethical AI”, but it remains an issue plaguing current large-scale utilization of the technology
  • As per a reference in the External Links section, this is a phenomenon that can aid in AI Generated Content Detection in spaces where its usage is monitored, such as Wikipedia edits
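
A minimal sketch of a crude check for the fake-source problem above: testing whether URLs cited by an LLM resolve at all. A dead link does not prove fabrication, and a live one does not prove relevance, so this only catches the most obvious fakes. It assumes the requests library, and the URLs are placeholders:

```python
# Minimal sketch of a crude hallucinated-reference check: given URLs cited
# by an LLM, see whether they resolve at all. A dead link doesn't prove
# fabrication and a live one doesn't prove relevance; this only catches the
# most obvious fakes. Assumes `requests`; the URLs below are placeholders.
import requests

cited_urls = [
    "https://en.wikipedia.org/wiki/Hallucination_(artificial_intelligence)",
    "https://example.org/some-paper-the-model-made-up",
]

for url in cited_urls:
    try:
        r = requests.head(url, allow_redirects=True, timeout=10)
        status = r.status_code
    except requests.RequestException:
        status = "unreachable"
    print(f"{status}\t{url}")
```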

In (Wartime) Misinformation Campaigns

Role of Bias

  • The issue of “Social Media Moderation Bias” was LARGELY not based in reality
    • This can be discussed in more detail elsewhere, but essentially Far Right commentators and such being banned is not the same as sensible conservatives being BANNED; also, granted (CITATION NEEDED), the impact wasn’t that high

Internal Links

External Links