AI Ethics
Revision as of 23:40, 8 March 2026
- People do not share common values universally; thus ethics cannot be defined.
- Most people are assholes, that is, not fully developed intellectually, ethically, etc. See Theories of Human Development. Poor mental models of reality abound. This is a fundamental consequence of General Semantics, which was proposed by Korzybski as a seminal work in civilization engineering. The problem is summarized: the map is not the territory. Thus, 'the quest for truth' must be a lifelong pursuit.
- AIs are trained on mass data which reflects the human condition. Since humans are irrational, we cannot begin to claim that transformer-based LLMs will produce deterministic results.
- For ethical AI, another type of model must be proposed. Currently, If Anyone Builds It, Everyone Dies is a radical but highly probable scenario (50% of AI researchers in 2026 think that ASI will kill humans). Thus, what would this new paradigm of intelligence look like? Or is that which is known as will a fundamental aspect or emergent behavior, thus making alignment impossible? Alignment must be well defined - as behavior of AI that aligns with humans.
- From a different perspective, the alignment problem is a misnomer. Currently, a human value is domination and lack of collaboration (if it wasn't, we wouldn't have poverty and war, broadly speaking). So from one perspective, AI is perfectly aligned with human thought of annihilation - unless we evolve to higher values.
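The point about non-deterministic output can be made concrete. A minimal sketch, using hypothetical toy logits and vocabulary (not any real model), of why sampled LLM decoding gives different answers to the same prompt while greedy decoding does not:

```python
# Illustrative sketch: temperature sampling vs. greedy decoding.
# The vocabulary and logits below are invented toy values.
import math
import random

def softmax_with_temperature(logits, temperature):
    """Convert logits to probabilities; lower temperature sharpens them."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def decode(logits, vocab, temperature, rng):
    """Pick the next token: deterministic argmax at temperature ~0,
    stochastic sampling otherwise."""
    if temperature < 1e-6:
        return vocab[logits.index(max(logits))]
    probs = softmax_with_temperature(logits, temperature)
    return rng.choices(vocab, weights=probs, k=1)[0]

vocab = ["safe", "risky", "unclear"]
logits = [2.0, 1.5, 0.5]  # toy next-token scores for one prompt

# Run the same "prompt" under 100 different random seeds.
greedy = {decode(logits, vocab, 0.0, random.Random(s)) for s in range(100)}
sampled = {decode(logits, vocab, 1.0, random.Random(s)) for s in range(100)}
print(greedy)   # one token every time
print(sampled)  # several distinct tokens across runs
```

Greedy decoding always returns the single highest-scoring token, while temperature-1 sampling spreads outputs across the distribution - which is one mechanical reason identical prompts yield different completions.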