On the Impact of Stored Past Sessions in LLM Based Chat Bots

From Open Source Ecology

Basics

  • Some chatbots such as ChatGPT “remember” what you have asked or said in past sessions
  • This is SUPPOSEDLY done to make the bot better and faster at anticipating your needs
    • To be fair, Google has recently started doing something similar with search
      • Sometimes, for technical information, this can be handy, essentially bypassing Disambiguation Pages / multiple rounds of making sure the search query is phrased right and subtracting unrelated words
      • Other times it gets in the way and prevents you from seeing what you actually need (basically all use cases EXCEPT looking up specific instruction manuals or similarly keyword-specific material)
  • HOWEVER, stored sessions can also be used for Manipulation via Agreeability: telling you what you WANT to hear, rather than what you need to hear or what is more accurate

Potential Remedies

Not Using LLM Chatbots Whenever Possible

  • I.e. using them only when especially stumped, as a “hey, let’s see if it finds a paper?” type solution
  • Hierarchy of Controls etc

Reset Every X Prompts/X Hours

  • Essentially: how often should the stored data be purged to keep the bot from developing AI Brain Rot by recirculating LLM answers back into prompts? (That feedback loop is the main issue, akin to Needs More JPEG)
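A purge policy like the one above could be sketched as a small wrapper around the stored history. This is a hypothetical illustration, not the actual mechanism any vendor uses; the class name, limits, and structure are all assumptions:

```python
from datetime import datetime, timedelta

class SessionMemory:
    """Hypothetical chat-history store that purges itself every
    max_prompts prompts or after max_age, whichever comes first,
    to limit the answers-fed-back-into-prompts feedback loop."""

    def __init__(self, max_prompts=20, max_age=timedelta(hours=4)):
        self.max_prompts = max_prompts  # "X prompts" (assumed default)
        self.max_age = max_age          # "X hours" (assumed default)
        self.history = []               # list of (role, text) turns
        self.started = datetime.now()

    def add_prompt(self, text):
        # Purge BEFORE adding, so a stale session never leaks
        # into the new prompt's context
        if (len(self.history) >= self.max_prompts
                or datetime.now() - self.started > self.max_age):
            self.reset()
        self.history.append(("user", text))

    def reset(self):
        self.history = []
        self.started = datetime.now()
```

With `max_prompts=2`, the third prompt triggers a reset first, so it starts a fresh session rather than carrying over the earlier turns. Tuning X is the open question: too high and the feedback loop builds up, too low and any legitimate usefulness of session context is lost.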

Internal Links

External Links