
My Two Cents on AI


So I’ve been in this career for around two years now, and if you asked me which word I’ve heard the most, it’s definitely AI. If there’s a topic that has intrigued me the most, it’s this one: it fills me with questions, lots of whats, ifs, what-ifs, and maybes. And I think I’m not the only one here. A lot of questions, a lot of fears, a lot of uncertainty. As a believer in the value of writing, I decided to write, for myself even before others.

Don’t get me wrong, this is not a technical article. It’s just my train of thought.

Okay, that’ll do as an introduction. Now, I want you to go back in time with me a few thousand years, to when humans invented (discovered?) mathematics. I asked myself: why? Why did we need maths? What was so pressing, so essential, that we had to come up with this strange language of numbers and rules? Keep this question in mind.

Fast-forward a few thousand years, and we had to invent programming languages. Again, why? For me, it’s the same answer both times: determinism. We humans seem to have this deep desire to control things, to define them, to make sure that if we do X, we always get Y. No surprises. Predictability. Logic. If I drop a rock, it falls. If I run this code, it gives me that output. We want to live in a world where inputs lead to expected outputs. And both maths and programming were tools we created to bring that kind of structure into our chaotic reality.

But now, enter LLMs.

And here’s the twist that kind of blows my mind: Large Language Models (LLMs) are, in a way, non-deterministic compilers from natural language to code. I type something that looks like a task, a question, or a vague idea… and it gives me code. Or an answer. Or sometimes, a hallucination.

It’s helpful, sure. Magical, even. But let’s be honest, it’s also unpredictable… non-deterministic. I can prompt it the same way twice and get slightly different results. Or sometimes very different ones. The idea of one correct answer doesn’t always hold.
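This unpredictability isn’t mysterious, by the way: most LLMs pick each next word by sampling from a probability distribution, not by always taking the most likely option. Here’s a minimal toy sketch of that idea, with a made-up three-word distribution (the words, probabilities, and the `temperature` knob are illustrative assumptions, not how any particular model is actually configured):

```python
import random

def sample_next_token(probs, temperature, rng):
    """Pick one token from a next-token distribution.

    temperature == 0 means greedy decoding (always the top token,
    deterministic); temperature > 0 means random sampling.
    """
    if temperature == 0:
        return max(probs, key=probs.get)
    # Sharpen or flatten the distribution by temperature, then renormalize.
    scaled = {tok: p ** (1.0 / temperature) for tok, p in probs.items()}
    total = sum(scaled.values())
    tokens = list(scaled)
    weights = [scaled[t] / total for t in tokens]
    return rng.choices(tokens, weights=weights, k=1)[0]

# A made-up next-token distribution after the prompt "If I drop a rock, it ..."
probs = {"falls": 0.6, "drops": 0.3, "flies": 0.1}

greedy = [sample_next_token(probs, 0, random.Random(i)) for i in range(5)]
sampled = [sample_next_token(probs, 1.0, random.Random(i)) for i in range(5)]

print(greedy)   # the same token every single run
print(sampled)  # can differ from run to run and seed to seed
```

Same prompt, same model, different answers: that’s the whole non-determinism problem in five lines of sampling.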

And that defies the very reason we invented maths and programming in the first place.

We wanted to understand. LLMs? They often give us results without understanding. We wanted repeatability. LLMs? Sometimes a hit, sometimes a miss.

This brings me to something I once read by Wittgenstein (my favorite philosopher, btw): his picture theory of language. He suggested that language is like a map, a picture of the world. That when we say something meaningful, it’s because there’s a structure, a logic, between the words that mirrors the structure of reality.

The thing is, Wittgenstein was wrong, and he himself changed his mind. Our language is not a picture of the world; it’s part of a game. It’s all about the use, the context. Human language is by necessity non-deterministic, ruled by context, time, knowledge, everything. Again, that’s why we did maths, that’s why we did programming. It’s rigid, rule-bound, and exact. That’s why we trust it to control planes, rockets, banks.

And that’s where it gets tricky with LLMs. We’re using a language (natural language) that was never designed to be deterministic, and feeding it to a machine to generate outputs we expect to behave deterministically.

That’s my two cents. Or maybe just my first draft of thinking out loud.