<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://wiki.opensourceecology.org/index.php?action=history&amp;feed=atom&amp;title=AI_Ethics</id>
	<title>AI Ethics - Revision history</title>
	<link rel="self" type="application/atom+xml" href="https://wiki.opensourceecology.org/index.php?action=history&amp;feed=atom&amp;title=AI_Ethics"/>
	<link rel="alternate" type="text/html" href="https://wiki.opensourceecology.org/index.php?title=AI_Ethics&amp;action=history"/>
	<updated>2026-05-02T09:19:07Z</updated>
	<subtitle>Revision history for this page on the wiki</subtitle>
	<generator>MediaWiki 1.39.13</generator>
	<entry>
		<id>https://wiki.opensourceecology.org/index.php?title=AI_Ethics&amp;diff=320926&amp;oldid=prev</id>
		<title>Marcin: /* Solutions */</title>
		<link rel="alternate" type="text/html" href="https://wiki.opensourceecology.org/index.php?title=AI_Ethics&amp;diff=320926&amp;oldid=prev"/>
		<updated>2026-03-08T23:55:08Z</updated>

		<summary type="html">&lt;p&gt;&lt;span dir=&quot;auto&quot;&gt;&lt;span class=&quot;autocomment&quot;&gt;Solutions&lt;/span&gt;&lt;/span&gt;&lt;/p&gt;
&lt;table style=&quot;background-color: #fff; color: #202122;&quot; data-mw=&quot;interface&quot;&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;tr class=&quot;diff-title&quot; lang=&quot;en&quot;&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #202122; text-align: center;&quot;&gt;← Older revision&lt;/td&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #202122; text-align: center;&quot;&gt;Revision as of 23:55, 8 March 2026&lt;/td&gt;
				&lt;/tr&gt;&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l7&quot;&gt;Line 7:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 7:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;=Solutions=&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;=Solutions=&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;*Tool AI vs agent AI&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;*Tool AI vs agent AI&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-side-deleted&quot;&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;+&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;*[[Civilization-Scale Intelligence Systems]] - founded on a core of cooperation.&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;/table&gt;</summary>
		<author><name>Marcin</name></author>
	</entry>
	<entry>
		<id>https://wiki.opensourceecology.org/index.php?title=AI_Ethics&amp;diff=320925&amp;oldid=prev</id>
		<title>Marcin at 23:49, 8 March 2026</title>
		<link rel="alternate" type="text/html" href="https://wiki.opensourceecology.org/index.php?title=AI_Ethics&amp;diff=320925&amp;oldid=prev"/>
		<updated>2026-03-08T23:49:56Z</updated>

		<summary type="html">&lt;p&gt;&lt;/p&gt;
&lt;table style=&quot;background-color: #fff; color: #202122;&quot; data-mw=&quot;interface&quot;&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;tr class=&quot;diff-title&quot; lang=&quot;en&quot;&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #202122; text-align: center;&quot;&gt;← Older revision&lt;/td&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #202122; text-align: center;&quot;&gt;Revision as of 23:49, 8 March 2026&lt;/td&gt;
				&lt;/tr&gt;&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l4&quot;&gt;Line 4:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 4:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;*For ethical AI, another type of model must be proposed. Currently, [[If Anyone Builds It, Everyone Dies]] is a radical but highly probable scenario (50% of AI researchers in 2026 think that ASI will kill humans). Thus, what would this new paradigm of intelligence look like? Or is that which is known as will a fundamental aspect or emergent behavior, such that alignment is impossible? Alignment must be well defined - as behavior of AI that aligns with humans. &lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;*For ethical AI, another type of model must be proposed. Currently, [[If Anyone Builds It, Everyone Dies]] is a radical but highly probable scenario (50% of AI researchers in 2026 think that ASI will kill humans). Thus, what would this new paradigm of intelligence look like? Or is that which is known as will a fundamental aspect or emergent behavior, such that alignment is impossible? Alignment must be well defined - as behavior of AI that aligns with humans. &lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;*From a different perspective, the alignment problem is a misnomer. Currently, a human value is domination and lack of collaboration (if it wasn&amp;#039;t, we wouldn&amp;#039;t have poverty and war, broadly speaking). So from one perspective, AI is perfectly aligned with human thought of annihilation. Unless we evolve to higher values.&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;*From a different perspective, the alignment problem is a misnomer. Currently, a human value is domination and lack of collaboration (if it wasn&amp;#039;t, we wouldn&amp;#039;t have poverty and war, broadly speaking). So from one perspective, AI is perfectly aligned with human thought of annihilation. Unless we evolve to higher values.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-side-deleted&quot;&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;+&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-side-deleted&quot;&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;+&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;=Solutions=&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-side-deleted&quot;&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;+&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;*Tool AI vs agent AI&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;/table&gt;</summary>
		<author><name>Marcin</name></author>
	</entry>
	<entry>
		<id>https://wiki.opensourceecology.org/index.php?title=AI_Ethics&amp;diff=320923&amp;oldid=prev</id>
		<title>Marcin at 23:41, 8 March 2026</title>
		<link rel="alternate" type="text/html" href="https://wiki.opensourceecology.org/index.php?title=AI_Ethics&amp;diff=320923&amp;oldid=prev"/>
		<updated>2026-03-08T23:41:20Z</updated>

		<summary type="html">&lt;p&gt;&lt;/p&gt;
&lt;table style=&quot;background-color: #fff; color: #202122;&quot; data-mw=&quot;interface&quot;&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;tr class=&quot;diff-title&quot; lang=&quot;en&quot;&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #202122; text-align: center;&quot;&gt;← Older revision&lt;/td&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #202122; text-align: center;&quot;&gt;Revision as of 23:41, 8 March 2026&lt;/td&gt;
				&lt;/tr&gt;&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l2&quot;&gt;Line 2:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 2:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;*Most people are assholes, namely not fully developed intellectually, ethically, etc. See [[Theories of Human Development]]. Poor mental models of reality abound. This is a fundamental consequence of [[General Semantics]], which was proposed by Korzybski as a seminal work in civilization engineering. The problem is summarized: The map is not the territory. Thus, &amp;#039;the quest for truth&amp;#039; must be a lifelong pursuit.&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;*Most people are assholes, namely not fully developed intellectually, ethically, etc. See [[Theories of Human Development]]. Poor mental models of reality abound. This is a fundamental consequence of [[General Semantics]], which was proposed by Korzybski as a seminal work in civilization engineering. The problem is summarized: The map is not the territory. Thus, &amp;#039;the quest for truth&amp;#039; must be a lifelong pursuit.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;*AIs are trained on mass data which reflects the human condition. Since humans are irrational, we cannot begin to claim that transformer-based LLMs will produce deterministic results.&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;*AIs are trained on mass data which reflects the human condition. Since humans are irrational, we cannot begin to claim that transformer-based LLMs will produce deterministic results.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;−&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;*For ethical AI, another type of model must be proposed. Currently, [[If Anyone &lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;Build &lt;/del&gt;It, Everyone Dies]] is a radical but highly probable scenario (50% of AI researchers in 2026 think that ASI will kill humans). Thus, what would this new paradigm of intelligence look like? Or is that which is known as will a fundamental aspect or emergent behavior, such that alignment is impossible? Alignment must be well defined - as behavior of AI that aligns with humans. &lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;+&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;*For ethical AI, another type of model must be proposed. Currently, [[If Anyone &lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;Builds &lt;/ins&gt;It, Everyone Dies]] is a radical but highly probable scenario (50% of AI researchers in 2026 think that ASI will kill humans). Thus, what would this new paradigm of intelligence look like? Or is that which is known as will a fundamental aspect or emergent behavior, such that alignment is impossible? Alignment must be well defined - as behavior of AI that aligns with humans. &lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;*From a different perspective, the alignment problem is a misnomer. Currently, a human value is domination and lack of collaboration (if it wasn&amp;#039;t, we wouldn&amp;#039;t have poverty and war, broadly speaking). So from one perspective, AI is perfectly aligned with human thought of annihilation. Unless we evolve to higher values.&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;*From a different perspective, the alignment problem is a misnomer. Currently, a human value is domination and lack of collaboration (if it wasn&amp;#039;t, we wouldn&amp;#039;t have poverty and war, broadly speaking). So from one perspective, AI is perfectly aligned with human thought of annihilation. Unless we evolve to higher values.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;/table&gt;</summary>
		<author><name>Marcin</name></author>
	</entry>
	<entry>
		<id>https://wiki.opensourceecology.org/index.php?title=AI_Ethics&amp;diff=320922&amp;oldid=prev</id>
		<title>Marcin at 23:40, 8 March 2026</title>
		<link rel="alternate" type="text/html" href="https://wiki.opensourceecology.org/index.php?title=AI_Ethics&amp;diff=320922&amp;oldid=prev"/>
		<updated>2026-03-08T23:40:59Z</updated>

		<summary type="html">&lt;p&gt;&lt;/p&gt;
&lt;table style=&quot;background-color: #fff; color: #202122;&quot; data-mw=&quot;interface&quot;&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;tr class=&quot;diff-title&quot; lang=&quot;en&quot;&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #202122; text-align: center;&quot;&gt;← Older revision&lt;/td&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #202122; text-align: center;&quot;&gt;Revision as of 23:40, 8 March 2026&lt;/td&gt;
				&lt;/tr&gt;&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l2&quot;&gt;Line 2:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 2:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;*Most people are assholes, namely not fully developed intellectually, ethically, etc. See [[Theories of Human Development]]. Poor mental models of reality abound. This is a fundamental consequence of [[General Semantics]], which was proposed by Korzybski as a seminal work in civilization engineering. The problem is summarized: The map is not the territory. Thus, &amp;#039;the quest for truth&amp;#039; must be a lifelong pursuit.&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;*Most people are assholes, namely not fully developed intellectually, ethically, etc. See [[Theories of Human Development]]. Poor mental models of reality abound. This is a fundamental consequence of [[General Semantics]], which was proposed by Korzybski as a seminal work in civilization engineering. The problem is summarized: The map is not the territory. Thus, &amp;#039;the quest for truth&amp;#039; must be a lifelong pursuit.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;*AIs are trained on mass data which reflects the human condition. Since humans are irrational, we cannot begin to claim that transformer-based LLMs will produce deterministic results.&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;*AIs are trained on mass data which reflects the human condition. Since humans are irrational, we cannot begin to claim that transformer-based LLMs will produce deterministic results.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;−&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;*For ethical AI, another type of model must be proposed. Currently, [[&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;Ifbit &lt;/del&gt;is Built, You Will Die]] is a radical but highly probable scenario (50% of AI researchers in 2026 think that ASI will kill humans). Thus, what would this new paradigm of intelligence look like? Or is that which is known as will a fundamental aspect or emergent behavior, such that alignment is impossible? Alignment must be well defined - as behavior of AI that aligns with humans. &lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;+&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;*For ethical AI, another type of model must be proposed. Currently, [[&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;If Anyone Build It, Everyone Dies]]  &lt;/ins&gt;is Built, You Will Die]] is a radical but highly probable scenario (50% of AI researchers in 2026 think that ASI will kill humans). Thus, what would this new paradigm of intelligence look like? Or is that which is known as will a fundamental aspect or emergent behavior, such that alignment is impossible? Alignment must be well defined - as behavior of AI that aligns with humans. &lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;*From a different perspective, the alignment problem is a misnomer. Currently, a human value is domination and lack of collaboration (if it wasn&amp;#039;t, we wouldn&amp;#039;t have poverty and war, broadly speaking). So from one perspective, AI is perfectly aligned with human thought of annihilation. Unless we evolve to higher values.&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;*From a different perspective, the alignment problem is a misnomer. Currently, a human value is domination and lack of collaboration (if it wasn&amp;#039;t, we wouldn&amp;#039;t have poverty and war, broadly speaking). So from one perspective, AI is perfectly aligned with human thought of annihilation. Unless we evolve to higher values.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;/table&gt;</summary>
		<author><name>Marcin</name></author>
	</entry>
	<entry>
		<id>https://wiki.opensourceecology.org/index.php?title=AI_Ethics&amp;diff=320921&amp;oldid=prev</id>
		<title>Marcin at 23:40, 8 March 2026</title>
		<link rel="alternate" type="text/html" href="https://wiki.opensourceecology.org/index.php?title=AI_Ethics&amp;diff=320921&amp;oldid=prev"/>
		<updated>2026-03-08T23:40:14Z</updated>

		<summary type="html">&lt;p&gt;&lt;/p&gt;
&lt;table style=&quot;background-color: #fff; color: #202122;&quot; data-mw=&quot;interface&quot;&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;tr class=&quot;diff-title&quot; lang=&quot;en&quot;&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #202122; text-align: center;&quot;&gt;← Older revision&lt;/td&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #202122; text-align: center;&quot;&gt;Revision as of 23:40, 8 March 2026&lt;/td&gt;
				&lt;/tr&gt;&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l2&quot;&gt;Line 2:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 2:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;*Most people are assholes, namely not fully developed intellectually, ethically, etc. See [[Theories of Human Development]]. Poor mental models of reality abound. This is a fundamental consequence of [[General Semantics]], which was proposed by Korzybski as a seminal work in civilization engineering. The problem is summarized: The map is not the territory. Thus, &amp;#039;the quest for truth&amp;#039; must be a lifelong pursuit.&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;*Most people are assholes, namely not fully developed intellectually, ethically, etc. See [[Theories of Human Development]]. Poor mental models of reality abound. This is a fundamental consequence of [[General Semantics]], which was proposed by Korzybski as a seminal work in civilization engineering. The problem is summarized: The map is not the territory. Thus, &amp;#039;the quest for truth&amp;#039; must be a lifelong pursuit.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;*AIs are trained on mass data which reflects the human condition. Since humans are irrational, we cannot begin to claim that transformer-based LLMs will produce deterministic results.&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;*AIs are trained on mass data which reflects the human condition. Since humans are irrational, we cannot begin to claim that transformer-based LLMs will produce deterministic results.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;−&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;*For ethical AI, another type of model must be proposed. Currently, [[Ifbit is Built, You Will Die]] is a radical but highly probable scenario (50% of AI researchers in 2026 think that ASI will kill humans).&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;+&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;*For ethical AI, another type of model must be proposed. Currently, [[Ifbit is Built, You Will Die]] is a radical but highly probable scenario (50% of AI researchers in 2026 think that ASI will kill humans)&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;. Thus, what would this new paradigm of intelligence look like? Or is that which is known as will a fundamental aspect or emergent behavior, such that alignment is impossible? Alignment must be well defined - as behavior of AI that aligns with humans. &lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-side-deleted&quot;&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;+&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;*From a different perspective, the alignment problem is a misnomer. Currently, a human value is domination and lack of collaboration (if it wasn&amp;#039;t, we wouldn&amp;#039;t have poverty and war, broadly speaking). So from one perspective, AI is perfectly aligned with human thought of annihilation. Unless we evolve to higher values&lt;/ins&gt;.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;/table&gt;</summary>
		<author><name>Marcin</name></author>
	</entry>
	<entry>
		<id>https://wiki.opensourceecology.org/index.php?title=AI_Ethics&amp;diff=320920&amp;oldid=prev</id>
		<title>Marcin: Created page with &quot;*People do not share common values universally - thus ethics cannot be defined. *Most people are assholes, namely not fully developed intellectually, ethically, etc. See Theories of Human Development. Poor mental models of reality abound. This is a fundamental consequence of General Semantics, which was proposed by Korzybski as a seminal work in civilization engineering. The problem is summarized: The map is not the territory. Thus, &#039;the quest for truth&#039; must be...&quot;</title>
		<link rel="alternate" type="text/html" href="https://wiki.opensourceecology.org/index.php?title=AI_Ethics&amp;diff=320920&amp;oldid=prev"/>
		<updated>2026-03-08T23:36:28Z</updated>

		<summary type="html">&lt;p&gt;Created page with &amp;quot;*People do non share common values universally - thus wthics cannot be defined. *Most people are assholes, namely not fully developed intellectually, ethically, etc. See &lt;a href=&quot;/index.php?title=Theories_of_Human_Development&amp;amp;action=edit&amp;amp;redlink=1&quot; class=&quot;new&quot; title=&quot;Theories of Human Development (page does not exist)&quot;&gt;Theories of Human Development&lt;/a&gt;. Poor mental models of reality abound. This is a fundamental consequence of &lt;a href=&quot;/wiki/General_Semantics&quot; title=&quot;General Semantics&quot;&gt;General Semantics&lt;/a&gt;, which was proposed by Korzybski as a seminal work in civilization engineering. The problem is summarized: The map is not the territory. Thus, &amp;#039;the quest for truth&amp;#039; must be...&amp;quot;&lt;/p&gt;
&lt;p&gt;&lt;b&gt;New page&lt;/b&gt;&lt;/p&gt;&lt;div&gt;*People do not share common values universally - thus ethics cannot be defined.&lt;br /&gt;
*Most people are assholes, namely not fully developed intellectually, ethically, etc. See [[Theories of Human Development]]. Poor mental models of reality abound. This is a fundamental consequence of [[General Semantics]], which was proposed by Korzybski as a seminal work in civilization engineering. The problem is summarized: The map is not the territory. Thus, &amp;#039;the quest for truth&amp;#039; must be a lifelong pursuit.&lt;br /&gt;
*AIs are trained on mass data which reflects the human condition. Since humans are irrational, we cannot begin to claim that transformer-based LLMs will produce deterministic results.&lt;br /&gt;
*For ethical AI, another type of model must be proposed. Currently, [[Ifbit is Built, You Will Die]] is a radical but highly probable scenario (50% of AI researchers in 2026 think that ASI will kill humans).&lt;/div&gt;</summary>
		<author><name>Marcin</name></author>
	</entry>
</feed>