Programming isn't just randomly putting instructions into a CPU. You "code" in a programming language, using certain constructs. You learn what these constructs are, and you believe that the resulting compiled/interpreted code will do what those constructs say.

But how can you be sure? After all, there are hundreds of programming languages that claim to do certain things, but you never actually inspect the bytecode they generate by hand to verify that they do what they say.

You could just write unit tests and be done with it, but remember Dijkstra's warning: *testing can only show the presence of bugs, never their absence*.

Math, however, can give you that certainty. Programming languages are, in fact, **languages**: they have a syntax and a semantics. There is a whole body of theory behind the semantics of programming languages, and it is grounded in math.

For example, take this code from a generic programming language:

```
int a = 0;
int b = a + 2;
return b + 5;
```

Tell me, what does this piece of code return? You almost immediately said 7, right? But how did you know? After all, those are just squiggles that get compiled into words of 0s and 1s that get fed to a CPU; nobody said those are real mathematical functions. For instance, a compiler could define "+" as subtraction, so the result would actually be -7. Or maybe "int" is short for "Interesting String" and "+" is string concatenation, so the result is actually the string "025".
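To make that concrete, here is a toy sketch (all names invented, not any real compiler) that evaluates the same three-line program under different hypothetical meanings for "int" and "+":

```python
# Evaluate "a = 0; b = a + 2; return b + 5" under a chosen semantics:
# 'plus' gives the meaning of "+", 'lit' gives the meaning of a literal.
def run(plus, lit):
    a = lit(0)
    b = plus(a, lit(2))
    return plus(b, lit(5))

# Semantics 1: "+" is mathematical addition over integers.
print(run(lambda x, y: x + y, int))   # 7

# Semantics 2: "+" is subtraction.
print(run(lambda x, y: x - y, int))   # -7

# Semantics 3: "int" means string, "+" means concatenation.
print(run(lambda x, y: x + y, str))   # '025'
```

Same squiggles, three different answers: only the semantics decides which one is right.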

Here is where math comes in. The code above has both syntax and semantics. Syntax determines what the squiggles look like: you could write "int a = 0;" or "Integer a := 0;" or "var a = 0". Those are all syntactic differences. The semantics is what matters. "int a = 0;" tells you that you have a **variable** named "a", of **type** "int", to which you **assign** the value 0. In programming language theory, all of these concepts are precisely defined, and their semantics tells you what you can do with them, and how. In this case, the type "int" tells you that any value a variable of that type holds can be treated as a mathematical integer when you reason about the code. The language also gives a semantics for "+": it is a **function** that represents mathematical addition.
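One way to picture "assigning meaning to syntax" is a tiny interpreter (a rough sketch, with invented names): the program is just data (a little syntax tree), and an evaluator function is the semantics that gives it meaning.

```python
# The snippet above, written as plain data: a list of statements,
# where expressions are nested tuples (an abstract syntax tree).
Program = [
    ("assign", "a", ("lit", 0)),                          # int a = 0;
    ("assign", "b", ("add", ("var", "a"), ("lit", 2))),   # int b = a + 2;
    ("return", ("add", ("var", "b"), ("lit", 5))),        # return b + 5;
]

def eval_expr(expr, env):
    tag = expr[0]
    if tag == "lit":
        return expr[1]            # a literal denotes itself
    if tag == "var":
        return env[expr[1]]       # a variable denotes its stored value
    if tag == "add":              # "+" denotes mathematical addition
        return eval_expr(expr[1], env) + eval_expr(expr[2], env)

def execute(program):
    env = {}                      # the program state: names -> values
    for stmt in program:
        if stmt[0] == "assign":
            env[stmt[1]] = eval_expr(stmt[2], env)
        elif stmt[0] == "return":
            return eval_expr(stmt[1], env)

print(execute(Program))  # 7
```

The `eval_expr` function *is* the semantics here: change the "add" case and you change what the language means, without touching the syntax at all.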

With this knowledge, you can safely **reason** about your program, and be 100% sure that the result will be the integer 7.

All of this was only possible because of the abstract math behind the design of the programming language, which allowed you to reason about the code above. In fact, this also lets you apply "regular" math (e.g., algebra), like we did above, by using mathematical integers and addition to figure out the result.
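Spelled out, that algebraic reasoning is just substitution:

```
b + 5 = (a + 2) + 5     substitute the definition of b
      = (0 + 2) + 5     substitute the definition of a
      = 7               integer addition
```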

This is what I find fascinating about programming. It is basically the nexus between the beautiful and elegant world of mathematics, and the ugly and mechanical world of electronics. You need the former to be able to reason about things and make meaningful decisions, but you need the latter to actually **do** things in the real world.

It's like magic.