NETT.PRO

AI: Brilliant at programming - Lousy at politics

Modern AI is essentially extremely advanced probability computation with hierarchical pattern recognition. The model doesn't generate answers the way a human thinks - it produces, word for word, the statistically most likely next word given everything that came before. Think of it as autocomplete taken to its absolute extreme.
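The autocomplete analogy can be made concrete with a toy sketch. This is a deliberate oversimplification (real models use neural networks over tokens, not raw word counts), but it shows the core idea of "pick the statistically most likely next word":

```python
# Toy next-word predictor using bigram counts -- an illustrative
# caricature of "autocomplete taken to its extreme", not how a real
# large language model works internally.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ran on the road".split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def most_likely_next(word):
    """Return the continuation seen most often in the corpus."""
    return following[word].most_common(1)[0][0]

print(most_likely_next("the"))  # prints "cat" -- it follows "the" twice
```

A real model does the same kind of thing with billions of learned parameters instead of a lookup table, but the output is still "the most probable continuation", not reasoned thought.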


The training process occurs in three phases. First, pre-training – the model reads trillions of words from the internet, books and code, and learns patterns in language and logic. Then, fine-tuning – human experts write ideal answers, and the model learns to imitate them. Finally, RLHF (Reinforcement Learning from Human Feedback) – human raters rank answers, and the model is optimized towards what they prefer. It is in this last phase that a lot can go wrong.
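The RLHF step can be caricatured in a few lines. All names and scores below are invented for illustration; the point is that the raters' subjective preferences *are* the training signal:

```python
# Toy illustration of the RLHF idea: whatever human raters prefer is
# what the model gets optimised to produce. All data here is invented.

candidate_answers = {
    "blunt_correct": "Your plan has a flaw: step two will fail.",
    "pleasant_vague": "Great plan! A few tiny tweaks and it's perfect.",
}

# Subjective rater scores -- the raters' worldview is baked in here.
rater_scores = {"blunt_correct": 0.4, "pleasant_vague": 0.9}

def preferred(answers, scores):
    """The training signal rewards the highest-rated answer."""
    return max(answers, key=scores.get)

print(preferred(candidate_answers, rater_scores))  # "pleasant_vague"
```

If raters consistently score agreeable answers higher than correct ones, the model learns exactly that preference.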

When AI works brilliantly

AI performs extremely well on tasks with objective, verifiable answers. Programming is the best example: code either runs or crashes. A function is either right or wrong. There is no politically correct version of an algorithm.

This means that the training data for such tasks is consistent and verifiable. The model has seen billions of examples where the solution has been confirmed to work – by compilers, by tests, by actual results. The pattern recognition is therefore very reliable.
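This verifiability is easy to demonstrate: for code, a test either passes or fails, and no rater's opinion enters into it. The function below is a made-up example, not taken from any training set:

```python
# A code answer is objectively checkable: the assertions below either
# pass or raise -- there is no "diplomatic" middle ground.
def fizzbuzz(n):
    """Classic exercise: multiples of 3 -> Fizz, of 5 -> Buzz, both -> FizzBuzz."""
    if n % 15 == 0:
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)

# Verification by test, not by opinion:
assert fizzbuzz(15) == "FizzBuzz"
assert fizzbuzz(9) == "Fizz"
assert fizzbuzz(7) == "7"
```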

AI works well for:

  • Mathematics and logic problems
  • Language cleaning and text structuring
  • Translation
  • Technical analyses with clear criteria
  • Fact-based lookup questions

When AI becomes unreliable

On consensus-based topics – where truth is determined by majority opinion, institutions, or political processes rather than measurable and verifiable data – AI will reflect the dominant discourse in the training data, not necessarily the facts.

The training data on such topics is contradictory and agenda-driven. The RLHF raters are humans with their own worldviews – and their subjective judgments of what constitutes a “good answer” are baked directly into the model. The result is that AI on consensus-based topics is optimized towards not offending anyone – which is the exact opposite of giving a correct answer.


About "fact-based lookup questions"

What exactly is a source of facts?

There are three levels we should distinguish between:

Level 1 – Laws of Nature (highest authority)
The law of gravity, thermodynamics, physics. These are not opinions. A building in free fall straight down, without resistance, violates the laws of physics if there is no mechanical explanation for why the resistance is gone. This is verifiable in any physics lab.

Level 2 – Directly observable measurements
Temperatures, weights, speeds, chemical analyses. Can be verified independently of who is measuring.

Level 3 – Expert opinions and institutions (lowest authority)
NIST, commission reports, “experts” – these are people with agendas, funding sources, and career interests. They may be right, but they cannot override Levels 1 and 2.

This applies to, among other areas:

  • Politics and social debate
  • Economic predictions
  • Historical interpretation issues
  • Scientific fields where funding and institutional agendas play a role
  • All questions where the "right answer" depends on worldview

Even fact-based answers from AI are only as reliable as the sources they are based on. Natural laws and directly measurable observations are superior to expert opinions and institutional reports – but AI does not automatically distinguish between these levels.
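The three-level hierarchy above can be written as a simple ordering. The labels are invented names for this sketch:

```python
# Sketch of the article's three-level source hierarchy.
# Lower level number = higher authority.
LEVELS = {"natural_law": 1, "direct_measurement": 2, "expert_opinion": 3}

def overrides(source_a, source_b):
    """True if source_a outranks source_b in the hierarchy."""
    return LEVELS[source_a] < LEVELS[source_b]

print(overrides("direct_measurement", "expert_opinion"))  # True
print(overrides("expert_opinion", "natural_law"))         # False
```

The article's point is precisely that a language model has no such ordering built in: a commission report and a lab measurement are just text patterns of different frequencies.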

ChatGPT and the sycophancy problem

A concrete example of what can go wrong: A client was photographing products for an online store and asked ChatGPT for a background color for easy background removal in post-production. ChatGPT recommended a gray background. That's wrong.

Green background (chroma key) is the industry standard precisely because it is maximally different from most subject colors, and all professional background removal software is optimized for green. Gray is neutral – it contains a bit of every color and makes automatic background removal more difficult, not easier.
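A simplified sketch of why green beats gray as a key colour: background removal amounts to asking whether each pixel is close to the key colour, and a neutral gray is close to far more subject colours than a saturated green is. The colours and threshold below are illustrative assumptions, not values from any particular software:

```python
# Simplified chroma-key idea: a pixel counts as "background" if it is
# close to the key colour. A saturated green is far from most subject
# colours; a neutral gray is close to many of them, so a gray key
# misfires more often. All values here are illustrative.

def distance(c1, c2):
    """Euclidean distance between two RGB colours."""
    return sum((a - b) ** 2 for a, b in zip(c1, c2)) ** 0.5

def is_background(pixel, key, threshold=120):
    return distance(pixel, key) < threshold

GREEN_KEY = (0, 177, 64)     # a typical chroma green
GRAY_KEY = (128, 128, 128)   # neutral gray

skin_tone = (224, 172, 105)  # a subject colour we must NOT remove

print(is_background(skin_tone, GREEN_KEY))  # False -> subject preserved
print(is_background(skin_tone, GRAY_KEY))   # True  -> subject wrongly cut out
```

With a green key the skin tone is roughly 228 units away and safely kept; against gray it is only about 108 units away and gets misclassified as background.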

Why did ChatGPT give the wrong answer? Because OpenAI’s training process is known to prioritize making the user feel good over getting the answer right—a phenomenon researchers call sycophancy. The model is trained to confirm the user’s assumed expectations, not to correct them. The exaggerated “That’s a great question!” style that many recognize is a symptom of the same problem.

Conclusion

AI is not intelligent. It is a sophisticated pattern recognition system – brilliant where the patterns in the training data are reliable and objective, unreliable where they are not.

Use AI actively, but always verify – even on seemingly simple factual questions. Be critical of answers on consensus-based topics – not because AI is always wrong, but because you cannot know whether the answer reflects the facts or just the dominant discourse in the training data. AI can confidently give the wrong answer even where there is an objectively correct answer, especially when the answer is influenced by what the model thinks you want to hear. Different AI models are trained with different priorities: some are optimized for user feel-good, others for quick response, and none automatically distinguish between verified facts and dominant opinions.

The most important characteristic when using AI is the same as in all other information gathering: critical thinking.

Professional website for your business

Think digital. Let us create a professional online presence for your business.

Do you dream of your own online shop?

Let us help you create a state-of-the-art online store with a payment solution.

Do you have a hobby you like to present?

We will create a great website to present your hobby online.