Automation Bias
Jump to navigation
Jump to search
Basics
- The biases people have towards trusting a machine or automated result more than alternate opinions
- From Wikipedia:
- ”Automation bias is the propensity for humans to favor suggestions from automated decision-making systems and to ignore contradictory information made without automation, even if it is correct.”
Relevance In System Design
Relevance in Protocols/Safety
- Electronic Meters such as 5 Gas Meters are prone to error, especially if not taken care of + frequently Bump Tested etc
- While if it is giving a False Positive one should treat it as valid etc ( Overabundance of Caution Principle etc), believing the meter is infallible should be avoided as in some situations, a False Negative could occur with deadly consequences
- It’s a bit more mundane for this example, but non-expired PH Strips can be more accurate than Electronic PH Testers
Relevance in Regards to Large Language Models / AI
- Will need to properly write this up latet, but much of the inspiration was from the video The First Chat GPT Poisoning by the YouTube Channel “Medlife Crisis” (A Cardiologist who does some content on current health events etc)
- Had interesting points on culture of seeking tests before seeking interviews of the patient etc (minor detail though)
- Also mentioned Vibe Coding (although this deserves it’s own page)
- It wasn’t mentioned in the video, but as with Universal Paperclips or the Parable of the AI Roomba etc, the danger may not be some hyperintelligent AI villian, but rather one that knows enough to be dangerous. Not providing context/understanding the full question etc
- Need to look into it further, but some LLMs supposedly have a bias to agree with you/go along with ideas etc (partially tied to profit motive etc) so that is an aspect as well
- Also Bias in LLM Training Datasets is a whole other can of worms, but boils down to Trash in, Trash Out
Examples
- False Positives with testing equipment