Automation Bias


Basics

  • The bias people have toward trusting a machine or automated result over alternative opinions or contradictory human judgment (see the toy simulation after this list)
  • From Wikipedia:
    • “Automation bias is the propensity for humans to favor suggestions from automated decision-making systems and to ignore contradictory information made without automation, even if it is correct.”
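
A minimal, self-contained Python sketch of the effect (all accuracy and deference numbers below are invented assumptions, not measurements from any study): when a human habitually defers to a less-reliable automated aid whenever it disagrees with them, the combined accuracy falls toward the machine’s rather than the human’s.

```python
# Toy Monte Carlo sketch of automation bias. All numbers here are
# made-up assumptions for illustration, not data from any study.
import random

random.seed(42)

TRIALS = 100_000
HUMAN_ACC = 0.90    # hypothetical accuracy of the unaided human
MACHINE_ACC = 0.70  # hypothetical accuracy of the automated aid
DEFER_RATE = 0.95   # how often the human defers when the machine disagrees

correct_unaided = 0
correct_aided = 0

for _ in range(TRIALS):
    truth = random.random() < 0.5                                # ground truth
    human = truth if random.random() < HUMAN_ACC else not truth  # human's call
    machine = truth if random.random() < MACHINE_ACC else not truth

    # Unaided: the human's own judgment stands.
    correct_unaided += human == truth

    # Aided, with automation bias: on disagreement the human usually
    # abandons a possibly-correct judgment in favor of the machine.
    if human == machine:
        final = human
    else:
        final = machine if random.random() < DEFER_RATE else human
    correct_aided += final == truth

# With these numbers, blind deference drags team accuracy down toward
# the machine's (~0.71) even though the human alone scores ~0.90.
print(f"unaided accuracy: {correct_unaided / TRIALS:.3f}")
print(f"aided accuracy:   {correct_aided / TRIALS:.3f}")
```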

Relevance in System Design

Relevance in Protocols/Safety

Relevance in Regard to Large Language Models / AI

  • Will need to write this up properly later, but much of the inspiration came from the video The First Chat GPT Poisoning by the YouTube channel “Medlife Crisis” (a cardiologist who does some content on current health events etc.)
    • Made interesting points on the culture of ordering tests before interviewing the patient etc. (a minor detail, though)
    • Also mentioned Vibe Coding (although this deserves its own page)
    • It wasn’t mentioned in the video, but as with Universal Paperclips or the Parable of the AI Roomba etc., the danger may not be some hyperintelligent AI villain, but rather a system that knows just enough to be dangerous: one that fails to provide context or to understand the full question
      • Need to look into it further, but some LLMs reportedly have a tendency (sometimes called sycophancy) to agree with the user and go along with their ideas (partly tied to profit motives etc.), so that is an aspect as well
      • Also, Bias in LLM Training Datasets is a whole other can of worms, but it boils down to Trash In, Trash Out (see the toy sketch after this list)
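
A toy sketch of that last point (the “dataset”, its words, and its labels are all fabricated and deliberately skewed for illustration): a trivial model trained on biased data learns, and faithfully repeats, the bias.

```python
# Toy "Trash In, Trash Out" sketch. The dataset is fabricated and
# deliberately skewed; no real training data is involved.
from collections import Counter

# A biased "training set": the label correlates with an irrelevant
# feature (the color word), not with anything about the item itself.
training_data = [
    ("blue widget", "good"), ("blue gadget", "good"), ("blue gizmo", "good"),
    ("red widget", "bad"),   ("red gadget", "bad"),   ("red gizmo", "bad"),
]

# A minimal "model": per-word label counts (a crude bag-of-words vote).
word_votes = {}
for text, label in training_data:
    for word in text.split():
        word_votes.setdefault(word, Counter())[label] += 1

def predict(text):
    votes = Counter()
    for word in text.split():
        votes.update(word_votes.get(word, Counter()))
    return votes.most_common(1)[0][0] if votes else "unknown"

# The model faithfully reproduces the spurious correlation it was fed:
print(predict("blue lemon"))   # -> good (solely because it is "blue")
print(predict("red diamond"))  # -> bad  (solely because it is "red")
```

The same dynamic, at scale, is what makes bias in LLM training corpora hard to see from the outside: the model’s confident output gives no hint that the pattern it learned was an artifact of the data.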

Examples

Internal Links

External Links