Human-AI Collaboration
- Principles: clarity is key
- If generated software works, save it. It provides a clear reference for downstream work if the AI loses context. Working code is the best reminder.
- Use graphs to clarify use cases, such as [1]
- AI can parse JPGs for text and diagrams, which is like having a smart expert derive semantics from what you show them
- At all times, a clear algorithm is key.
- Ports are key in schemas; use them instead of trying to describe verbally how things attach. Ports define the connection method, regardless of (or prior to) any geometric transformation
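The port principle above can be sketched in code. This is a minimal illustration with hypothetical component and port names (`pump`, `tank`, `outlet`, `inlet` are assumptions, not from the source): connections are declared between named ports, so geometry and placement never enter the description.

```python
from dataclasses import dataclass, field


@dataclass(frozen=True)
class Port:
    """A named attachment point on a component."""
    component: str  # owning component, e.g. "pump"
    name: str       # port name, e.g. "outlet"


@dataclass
class Schema:
    """A schema is just a set of port-to-port connections."""
    connections: list = field(default_factory=list)

    def connect(self, a: Port, b: Port) -> None:
        # A connection is a pair of ports; no coordinates,
        # rotations, or verbal descriptions are needed.
        self.connections.append((a, b))


# Usage: attach a pump's outlet to a tank's inlet by port identity,
# not by where either part happens to sit in space.
schema = Schema()
schema.connect(Port("pump", "outlet"), Port("tank", "inlet"))
```

Because the connection is defined by port identity alone, the same schema stays valid no matter how the parts are later moved, scaled, or rotated.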
Alexa's Thoughts
Limitations of LLM "AI" models
- LLMs are trained on a limited dataset. The model cannot reliably answer questions about information outside that dataset.
- LLMs are trained on proprietary datasets. Unless you are running a local model that you trained yourself and whose training data you know, you cannot know the model's limitations.
- LLMs are prediction machines that operate on statistical principles alone. They do NOT reason, they cannot engineer, they cannot think, they cannot deliberate.
- LLMs (from the major AI companies) are trained on scraped data from the internet. They will come with every bias and prejudice that can be found publicly on the internet.
- LLMs often hallucinate or confabulate (fill in gaps in 'memory' with whatever sounds good).
- LLMs, having no reasoning ability, must be checked by a human for anything critical to health, safety, success, or accuracy.
My Thoughts
Large Language Models are one tool in the computer user's toolbox. Information provided by an LLM must be verified, just like information from a Wikipedia page, a random Reddit post, or any other place where untrusted actors can contribute opinion in place of fact.
Where LLMs Excel, and how they can be put to use
Since AI models can be fed nearly limitless information, they can generate responses incredibly fast. An LLM can parse 10,000+ software repositories far faster than a human can, which is why they can be genuinely useful for basic coding tasks or for producing large amounts of human-sounding writing almost instantly. LLMs are also passable at presenting creative or non-traditional answers to questions, which can be a useful aid in certain situations. Additionally, it can be difficult to find things that "you don't know you don't know," and LLMs may mention interesting topics in their output that warrant further reading. The area where LLMs most excel is conversational, natural-language input into a computer; they are steadily improving how productive voice commands are for a computer user.
Things to avoid
“A computer can never be held accountable, therefore a computer must never make a management decision.” – IBM Training Manual, 1979
Due to their lack of reasoning skills, the need to verify critical information, and a general lack of understanding of the nuances of living in our world, LLMs should NOT perform certain tasks. Per the classic IBM quote above, LLMs are still computers. They are unfeeling machines, unable to make moral decisions, and should not be trusted to be truthful. They shouldn't be used to engineer things (for fear of "engilucinations"), as they can make critical missteps in the process, and then who is to blame?
I personally take pride in my work, and even when the work is not as good as other people's work, at least I can sign my name to it and be proud of what I achieved in my endeavor of creation. If I give all of that away to a robot, what of me is left?