Doomer Optimism: Difference between revisions

From Open Source Ecology
Video:
<html> <iframe width="560" height="315" src="https://www.youtube.com/embed/0y3jt2-IWgY?si=VxkPkBOBm8KV6hFu" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe> </html>      
 

Latest revision as of 00:13, 7 March 2026

Next event - https://luma.com/jkrbsh4r

Bottom line, can the Doomer Optimism movement be assessed as civilization-grade development work related to Open Source Ecology?

  • Doomer Optimism sits between collapse acceptance and small-scale reconstruction; its strategy appears to be withdrawal rather than redesign.
  • OSE, in turn, holds that civilization-scale productivity should be achievable with open, modular, distributable industrial systems.


Pertinent comment:

There is a new twist: evil AI - the real risk of AGI or ASI killing off humans or destroying the planet. This does not invalidate the core principle that technology should be open. In the open source view, technology is neutral, though it carries affordances for good or evil: the real differentiator is how it is used. So OSE's current stand on AI x-risk is that AI should be open. Note that the evil bastard will typically develop the frontier: right now there is more funding for evil than for good. So the most lethal tech will be developed - and it will be proprietary. But I venture to say that the underlying tech should still be open, because that same tech can be used to develop a remedy. For example, if the US government is now developing automated killing drones with AI, it does not follow that this AI should be restricted, because the same AI can be developed into a remedy, such as 'AI automated killing drone defense drones' or whatever technology can counteract the evil. It seems to be a mistake to limit a technology: evil bastards will continue their dominance if that technology is not accessible or hackable by agents of good. AI could lead to Humanity+ (human flourishing) or Humanity- (armageddon). AI source code should remain open, so that more good agents have a chance to develop positive solutions. Trying to bottle up the AI genie just doesn't make sense to me. I am just learning about all of this, so my viewpoint could change tomorrow - but right now I maintain that all technology should be open source, because the opposite of open source (collaborative, open, abundance, stewardship-based behavior) is oppression, oligarchy, and limited human development. I stand to be educated if I am missing some critical points here.
I don't discount the notion that 'we must develop proof that we will not die' as a result of AI development - but such a policy is unenforceable or futile, because Trump-Putin-Xi-Kim will continue regardless of any treaty. So instead of the naive realism of bottling the AI genie, we must embrace this critter and work with it - nurturing it towards good through the universal empowerment of people, letting love rule the universe. That is a rigorous proposition, not a hippie ideal.