The day I stopped coding rules and started teaching machines

April 22, 2026 · 7 min read · Gerardo Perrucci

For the longest time, I thought being a good engineer meant I had to be the smartest person in the room. I thought my job was to anticipate every single edge case, write the perfect "if-else" block, and basically act as the brain for the software I was building. But lately, I have been looking at how we approach problems like predicting heart failure or identifying network intrusions, and I realized that my old way of thinking is actually a bottleneck.

I had this specific epiphany while looking at a dataset for cardiac health. If I were to write a traditional program to flag potential heart failure, I would have to sit down and decide exactly what the cutoff is for body mass index (BMI) or beats per minute (BPM). I would probably write something like "if BPM > 100 and BMI > 30, then flag as high risk." But who am I to decide that those specific numbers are the absolute threshold? Health is messier than a few hardcoded lines of logic.
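That hardcoded approach looks something like this minimal sketch. The thresholds (100 BPM, BMI of 30) are exactly the kind of arbitrary numbers I mean, not clinical guidance:

```python
# A hypothetical hardcoded rule: I picked the thresholds, and every
# patient gets judged against my assumptions, not the data's patterns.
def flag_high_risk(bpm: float, bmi: float) -> bool:
    return bpm > 100 and bmi > 30

print(flag_high_risk(110, 32))  # True
print(flag_high_risk(95, 32))   # False
```

The problem is visible immediately: a patient at 99 BPM is treated as categorically different from one at 101 BPM, which is not how health actually works.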

That is where machine learning finally clicked for me. It is a complete inversion of how I usually work. Instead of me giving the computer a set of rules and some data to get an answer, I give it the data and the answers and let the computer figure out what the rules should be. It sounds simple when you say it like that, but it changes everything about how we build things.

Flipping the script on logic

In a traditional setup, we are the architects of the logic. We take our inputs and our rules, and we hope the output is correct. If the output is wrong, we go back and tweak the rules. We are constantly chasing our tails trying to model the complexity of the real world into a static algorithm.

Machine learning flips this on its head. We provide the historical data (beats per minute, age, sex) along with the actual known outcomes: whether each patient experienced heart failure. The "model" then looks at that pile of information and derives the parameters itself. It determines the weights and the connections that I, as a human programmer, might never see. It creates its own internal "if-then-else" logic based on patterns in the data rather than my own assumptions.
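To make that concrete, here is a toy sketch of the inversion: a single-neuron logistic model that learns its own weights for (BPM, BMI) from labeled examples by gradient descent. The data points and all the numbers are made up for illustration; a real model would use far more features and samples.

```python
import math

# Made-up training data: (bpm, bmi, had_heart_failure)
data = [
    (120, 34, 1), (115, 31, 1), (108, 29, 1),
    (70, 22, 0), (82, 25, 0), (65, 21, 0),
]

# The model's parameters start at zero: I am NOT choosing thresholds.
w_bpm, w_bmi, bias = 0.0, 0.0, 0.0
lr = 0.01  # learning rate (an assumption, like all hyperparameters)

def predict(bpm, bmi):
    # Features loosely scaled so both sit near 1.0
    z = w_bpm * (bpm / 100) + w_bmi * (bmi / 30) + bias
    return 1 / (1 + math.exp(-z))  # probability of heart failure

for _ in range(2000):  # plain gradient descent on log loss
    for bpm, bmi, y in data:
        err = predict(bpm, bmi) - y
        w_bpm -= lr * err * (bpm / 100)
        w_bmi -= lr * err * (bmi / 30)
        bias -= lr * err

print(round(predict(118, 33), 2))  # a high-risk-looking input
print(round(predict(68, 22), 2))   # a low-risk-looking input
```

The interesting part is that `w_bpm`, `w_bmi`, and `bias` end up wherever the data pushes them. The "rule" exists, but the machine wrote it.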

What I find most interesting is that these models are not just one-and-done scripts. They can be continuously trained. As more data comes in, the model gets better at spotting the subtle patterns that lead to a prediction. It moves from being a static tool to a living piece of software that evolves.

When we have the answers: Supervised learning

I started experimenting with how this actually looks in practice using simple classification tasks. A classic example is distinguishing between a bird and a cat. In a supervised learning environment, we act like a teacher. I would show the program a thousand pictures of birds and tag them as "bird." Then I would show it a thousand pictures of cats and tag them as "cat."

The model isn't looking for "pointy ears" because I told it to. It is looking at pixel patterns and mathematical gradients. When I show it a new image, it calculates a confidence score. It might say, "I am 95% sure this is a cat." With more training samples, those predictions become more accurate. This is the bedrock of most of the AI tools we use today. We provide the labels, and the machine learns to map new inputs to those labels. It is incredibly effective for things like medical diagnosis where we have years of documented cases to learn from.
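Here is a minimal sketch of that teacher-and-labels loop. Instead of real images I use made-up 2D feature vectors, and a nearest-centroid classifier stands in for a real model; the point is the shape of the workflow, not the algorithm:

```python
import math

# Labeled training data: the labels are what make this "supervised".
# The 2D points are hypothetical stand-ins for image features.
training = {
    "bird": [(1.0, 5.0), (1.2, 4.8), (0.9, 5.2)],
    "cat":  [(4.0, 1.0), (4.2, 1.1), (3.8, 0.9)],
}

# "Training" here is just averaging each class into a centroid.
centroids = {
    label: (sum(x for x, _ in pts) / len(pts),
            sum(y for _, y in pts) / len(pts))
    for label, pts in training.items()
}

def classify(point):
    """Return (label, confidence) based on inverse distance to centroids."""
    dists = {label: math.dist(point, c) for label, c in centroids.items()}
    total = sum(1 / (d + 1e-9) for d in dists.values())
    best = min(dists, key=dists.get)
    confidence = (1 / (dists[best] + 1e-9)) / total
    return best, confidence

label, conf = classify((3.9, 1.2))
print(label, round(conf, 2))  # a high-confidence "cat"
```

A real image classifier replaces the centroids with millions of learned weights, but the interface is the same: new input in, label and confidence score out.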

Finding the hidden clusters: Unsupervised learning

But what if we don't have labels? This is the part that used to confuse me. How can a machine learn if I don't tell it what it is looking at? This is called unsupervised learning, and it is honestly a bit more "magical" to watch.

Think about network traffic. If you are managing a massive corporate network, you have a constant stream of data moving back and forth. You might not know what a "malicious" packet looks like until it is too late. With unsupervised learning, you just feed the raw, unlabeled stream into the algorithm.

The machine starts grouping the data based on similarity. It notices that 99% of the traffic follows a specific cadence and volume. It creates a "cluster" for what it considers normal activity. Then, suddenly, it sees a data point that is way outside that cluster, an outlier. It doesn't need to know that the outlier is a "SQL injection attack" to tell you that something weird is happening. It just knows that this specific data point is fundamentally different from its neighbors. I’ve started using this for data exploration in my own projects. It is a great way to find patterns you didn't even know existed.
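The outlier idea can be sketched in a few lines. The packet sizes below are made-up numbers, and nothing is labeled "malicious"; the program learns what "normal" looks like from the data itself (here via the robust median and median absolute deviation) and flags whatever sits far outside it:

```python
import statistics

# Unlabeled stream of packet sizes in bytes (hypothetical values).
packet_sizes = [512, 498, 505, 520, 490, 515, 508, 9500, 502, 511]

# Describe the "normal" cluster with robust statistics, so one huge
# outlier can't drag the baseline toward itself.
median = statistics.median(packet_sizes)
mad = statistics.median(abs(p - median) for p in packet_sizes)

# Flag anything more than 5 MADs from the median. The factor of 5
# is my assumption, not a universal constant.
outliers = [p for p in packet_sizes if abs(p - median) > 5 * mad]
print(outliers)  # the 9500-byte packet stands out
```

Notice that the code never says what an attack looks like. It only says "this point is fundamentally unlike its neighbors," which is exactly the unsupervised promise.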

Learning through trial and error: Reinforcement learning

The third type of learning feels the most like how we actually learn as humans. It is called reinforcement learning. Instead of giving the machine labels or letting it find clusters, we give it a goal and a set of constraints.

I think about this like teaching a computer to play chess or navigate an obstacle course. We don't tell the computer "move the knight here." Instead, we say "the goal is to win the game, here are the legal moves, and you get a reward for capturing pieces or winning."

The algorithm starts making random moves. At first, it is terrible. It gets "punished" (mathematically) when it loses and "rewarded" when it succeeds. Over millions of iterations, it figures out which combinations of actions lead to the highest reward. It optimizes its behavior within the boundaries we set. It is a fascinating way to solve complex problems where the "right" answer isn't a single label, but a sequence of decisions.
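A chess engine is too big to sketch, but the reward loop itself fits in a tiny Q-learning example. Here the "game" is a five-cell corridor with a reward at the far end; the learning rate, discount, exploration rate, and episode count are all assumptions I picked for the toy:

```python
import random

random.seed(0)
n_states, actions = 5, [-1, +1]  # cells 0..4; move left or right
q = {(s, a): 0.0 for s in range(n_states) for a in actions}
alpha, gamma, epsilon = 0.5, 0.9, 0.2  # learn rate, discount, exploration

for _ in range(500):  # episodes of trial and error
    s = 0
    while s != n_states - 1:
        # Mostly exploit what we know; sometimes explore randomly.
        a = random.choice(actions) if random.random() < epsilon \
            else max(actions, key=lambda a: q[(s, a)])
        s2 = min(max(s + a, 0), n_states - 1)
        reward = 1.0 if s2 == n_states - 1 else 0.0  # "win" at cell 4
        # Standard Q-learning update: nudge toward reward + future value.
        q[(s, a)] += alpha * (reward
                              + gamma * max(q[(s2, b)] for b in actions)
                              - q[(s, a)])
        s = s2

# The greedy policy the agent discovered for each non-terminal state.
policy = [max(actions, key=lambda a: q[(s, a)]) for s in range(n_states - 1)]
print(policy)
```

Early episodes are mostly random flailing; the reward then propagates backward through the Q-table until "move right" wins in every state. Nobody ever told the agent that rule.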

Why this matters for us as engineers

Moving into machine learning doesn't mean we stop being engineers. It means we start acting more like curators and coaches. Instead of spending my time writing 5,000 lines of nested conditional logic, I spend my time ensuring the data we are using is clean, diverse, and representative of the real world.

The shift is from "How do I code this rule?" to "How do I help the machine discover this rule?" It requires a level of humility because we have to admit that the machine might find a better way to solve the problem than we could ever manually program.

I am currently looking into specific libraries like PyTorch to see how these theories actually translate into code. There is something incredibly rewarding about seeing a model I built accurately predict a result that I never explicitly told it how to find.

What I am diving into next

I am still just scratching the surface of this. Now that I understand the difference between supervised, unsupervised, and reinforcement learning, I want to see how they intersect. My next step is to actually build a small classification model for some local data I have been collecting.

I want to see the "confidence levels" for myself. I want to see where the model fails and try to understand why. If you are also transitioning from traditional programming to ML, I think the biggest hurdle is just letting go of that need to control every single "if" statement. Once you do that, the possibilities for what you can build become a lot bigger.
