AI-coding lexicon

Published on: 09/01/2026. Filed under: ai

Every discipline creates its own language by necessity, a lingua franca for describing new concepts, methodologies and ways of working. To learn a subject, it helps if you understand its language.

So with this in mind, I present a brief AI-coding lexicon, which I will add to over time. It’s not exhaustive, and it’s intentionally not in alphabetical order.

Contents

Determinism

Hallucinations

Evals (to follow)


Determinism

Computers are essentially sophisticated calculators: you feed in the sum and you get the result. And just like 1+1 always equals 2, if you feed in the same instructions you will always get the same result. If you’ve grown up with classic software, this is likely something you take for granted.

Large language models (LLMs) work differently: they are non-deterministic. They don’t compute answers; they predict the next most likely option.

Example: 1+1

A standard (deterministic) computer executes specific rules, in this case arithmetic:

The symbols ‘1’, ‘+’, ‘1’ are parsed according to grammar

The operation ‘+’ maps to the instruction ‘addition’

The instruction looks up or computes the result in a number system

The output ‘2’ is returned

There is exactly one output (2). Given the same input, on any machine, at any time, the output will be exactly the same; anything else is a bug.
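
To make this concrete, here is a minimal Python sketch of deterministic behaviour. It’s a toy illustration (the `add` function is just for show, not anything from a real system): the same instructions, run any number of times, produce exactly one result.

```python
# Deterministic arithmetic: the same input always yields the same output.
def add(a: int, b: int) -> int:
    return a + b

# Run the same instructions many times and collect every distinct result.
results = {add(1, 1) for _ in range(1_000)}

print(results)  # {2} - one result, every run, on any machine
```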

LLMs do not execute arithmetic; they pattern-match to predict the most likely output:

The LLM looks for patterns that, based on its training, match ‘1+1 = ‘

It most likely recognises ‘1 + 1 = 2’ as a familiar pattern, and estimates this as the most plausible continuation.

It most likely returns ‘2’ as the continuation.

The answer is not a calculated result, but a predicted one. The outcome is probabilistic, and repeating the same prompt can yield a different result.
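
By contrast, here is a toy sketch of probabilistic ‘next-token’ prediction. It is not a real LLM; the candidate continuations and their probabilities are invented purely to illustrate that the output is sampled rather than computed, so repeating the same prompt can produce different results.

```python
import random

# Invented next-token probabilities for the prompt '1+1 = '.
# A real LLM derives these from its training data; these numbers are made up.
NEXT_TOKEN_PROBS = {"2": 0.95, "3": 0.03, "11": 0.02}

def complete(prompt: str) -> str:
    # Sample a continuation in proportion to its probability,
    # rather than computing the answer.
    tokens = list(NEXT_TOKEN_PROBS)
    weights = list(NEXT_TOKEN_PROBS.values())
    return prompt + random.choices(tokens, weights=weights)[0]

# The same prompt, repeated: usually '2', but occasionally something else.
for _ in range(5):
    print(complete("1+1 = "))
```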

So what?

Determinism runs deep: all of us learnt maths by counting out 1 + 1 on our fingers. We take it for granted that solutions are calculated, and that software is predictable and consistent.

Most of the time, the solution produced by a deterministic system looks exactly the same as one produced by a non-deterministic system. Except sometimes it doesn’t, and this has huge implications.


Hallucinations

An AI hallucination is when an AI model produces a confident-sounding, but factually incorrect, outcome.

We like to anthropomorphise things, so we often describe the AI as ‘making things up’, ‘imagining things’, or even ‘lying’. But this doesn’t really represent what’s going on.

An LLM has no concept of the truth, and no innate reasoning capability (intelligence?). It predicts the most likely outcome based on its training data, but has no understanding of the outcome itself.

It presents mistakes with confidence because it does not know it has made a mistake; it does not know what a mistake is. In fact, it doesn’t know anything; it only predicts.
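
To see why confidence and correctness come apart, here is a toy predictor in Python. It is not a real model; the prompt and the frequencies are invented. It simply returns the continuation it has ‘seen’ most often, and reports that frequency as confidence, with no notion of whether the answer is true.

```python
# Invented continuation frequencies: 'Sydney' appears more often than 'Canberra'
# in this toy training data, even though Canberra is the correct answer.
SEEN_CONTINUATIONS = {
    "The capital of Australia is": {"Sydney": 0.7, "Canberra": 0.3},
}

def predict(prompt: str) -> tuple[str, float]:
    # Pick the most frequent continuation; truth never enters into it.
    options = SEEN_CONTINUATIONS[prompt]
    best = max(options, key=options.get)
    return best, options[best]

answer, confidence = predict("The capital of Australia is")
print(answer, confidence)  # Sydney 0.7 - stated confidently, factually wrong
```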

Feature not a bug

Hallucinations are inherent to how LLMs work: a feature, not a bug. And in spite of this glaringly obvious design flaw, LLMs are so game-changing, so incredibly important and powerful, that we need to learn to live with it.

And this is tricky. We’ve all had the experience of correcting an LLM on something we know is wrong, but what happens when you’re working with an LLM in a new domain? How do you tell what is true and what isn’t?

Image adapted from the ‘This is fine’ meme. Original credit: KC Green.

Evals

To be continued …