Talk:Misaligned goals in artificial intelligence

This article is an example of how anthropomorphizing things is a problem.
We tend to do this to simplify a thing/situation so we can more easily understand/explain it.
The irony is that "getting something wrong because it has been simplified until it lacks the information required to function properly" is the ACTUAL topic of the article.

  1. The goal of the article is to inform people about how improperly specified goals result in unpredictable/unintended results in computer programming (see the sketch after the footnote below).
  2. We do this using language that heavily implies, or directly states, that these algorithms have "intelligence", that they can/do "learn", and/or that they work in a way ("neural") similar to the way we think.
  3. This language causes anyone not already aware of how iterative programming[A] works to make incorrect and wildly varying assumptions about what can/cannot be done.
  4. Therefore, the article itself, by attempting to simplify things to be more easily understandable, is directly responsible for causing misunderstanding.

[A] I will not use the terms "Artificial Intelligence", "Machine Learning", or "Neural Network" because they are a core part of the problem.
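
To keep point 1 concrete without using any of those terms, here is a minimal toy sketch in Python. It is my own hypothetical example, not taken from the article, and every name and number in it is made up for illustration. A blind trial-and-error search is handed a proxy score instead of the designer's real goal, and it settles on an unintended behaviour:

  import random

  # Hypothetical "clean the room" program.  The designer's real goal is a
  # clean room, but the score handed to the optimizer only counts dust
  # swept up.
  ACTIONS = ["sweep", "dump_dust_then_sweep", "do_nothing"]

  def proxy_score(action):
      """Score the optimizer actually sees: amount of dust swept up."""
      return {"sweep": 1.0, "dump_dust_then_sweep": 5.0, "do_nothing": 0.0}[action]

  def intended_score(action):
      """What the designer actually wanted: how clean the room ends up."""
      return {"sweep": 1.0, "dump_dust_then_sweep": -4.0, "do_nothing": 0.0}[action]

  def trial_and_error(trials=1000):
      """Blind search: try random actions, keep whichever scores best on the proxy."""
      best = random.choice(ACTIONS)
      for _ in range(trials):
          candidate = random.choice(ACTIONS)
          if proxy_score(candidate) > proxy_score(best):
              best = candidate
      return best

  chosen = trial_and_error()
  print("chosen action: ", chosen)                  # almost always dump_dust_then_sweep
  print("proxy score:   ", proxy_score(chosen))     # 5.0 (looks great on paper)
  print("intended score:", intended_score(chosen))  # -4.0 (room is worse off)

Nothing here "understands", "learns", or "thinks"; it is iteration plus a badly chosen score, and the unintended result follows directly from that.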

67.186.150.159 (talk) 16:33, 21 November 2021 (UTC) Prophes0r