From logic gates to language models
Recently I was watching my toddler son play with a copy of “Computer Engineering for Babies”. This interactive book has two buttons and a light bulb. Each page features a different fundamental logic gate, and you can press the buttons to explore how that logic gate behaves.
Just as my toddler doesn’t need to understand logic gates to play with the buttons and light, we all navigate a world full of abstractions we don’t fully comprehend. When you catch a ball, your brain performs complex calculations involving parabolic arcs and gravitational forces - yet you don’t consciously solve differential equations. You simply reach out and catch.
Under the hood, all of our modern circuits are built from vast collections of these primitive logic gates. We consider them so basic that they make entertaining toys for children.
In the 1930s, Claude Shannon first showed how Boolean algebra could be implemented with electrical circuits, giving birth to the logic gates we still use today. In the late 1950s, the invention of the integrated circuit let engineers pack multiple gates onto a single chip, creating the first reusable building blocks of computation. The 1960s and 70s brought high-level programming languages that let programmers think in terms of human-readable instructions rather than binary code. Each step raised the level of abstraction, hiding more complexity beneath increasingly sophisticated interfaces.
Today, I’m a software engineer who stands on the shoulders of giants. Even though I work with computers every day, I cannot build anything complex out of these primitive logic gates. However, I can look up diagrams of complex circuits that were designed by other people. I benefit from decades of accumulated abstractions. For example, here’s how you can combine two primitive logic gates to make a “half adder” that adds two bits together, carrying over values just like we learned in elementary school arithmetic.
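Here's a sketch of that half adder in JavaScript, using bitwise operators to stand in for the physical gates: XOR produces the sum bit, and AND produces the carry bit.

```javascript
// Half adder: XOR gives the sum bit, AND gives the carry bit.
function halfAdder(a, b) {
  return {
    sum: a ^ b,   // XOR: 1 when exactly one input is 1
    carry: a & b, // AND: 1 only when both inputs are 1
  };
}

console.log(halfAdder(1, 1)); // { sum: 0, carry: 1 } -- 1 + 1 = 10 in binary
```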
And this video shows how to build a more complete end-to-end circuit capable of adding two 8-bit numbers:
The truth is, as cool as these circuits are, and as important as it is to learn about them, it's not practical to think in terms of basic logic gates when you are trying to build something nontrivial. As a software engineer, you must start thinking in terms of abstractions.
It starts with drawing a box around a couple logic gates and calling it a “half adder”. Then you draw a box around a larger circuit and call it an “8-bit adder”. This mirrors how our own minds develop abstractions. When my toddler was first learning to walk he had to consciously think about each muscle movement. Now ‘walking’ is a single mental command, with all the complexity hidden away in neural black boxes he trusts implicitly.
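That boxing-up can be sketched in code (a simplified model, not the exact circuit from the video): wrap two half adders and an OR gate into a "full adder" box, then chain eight full adders into an "8-bit adder" box, and you never have to think about individual gates again.

```javascript
// Box 1: the half adder from before (XOR for sum, AND for carry).
function halfAdder(a, b) {
  return { sum: a ^ b, carry: a & b };
}

// Box 2: a full adder -- two half adders plus an OR gate, treated as one unit.
function fullAdder(a, b, carryIn) {
  const first = halfAdder(a, b);
  const second = halfAdder(first.sum, carryIn);
  return { sum: second.sum, carry: first.carry | second.carry };
}

// Box 3: an 8-bit adder -- eight full adders chained together,
// with the carry rippling from bit 0 up to bit 7.
function add8(x, y) {
  let carry = 0;
  let result = 0;
  for (let i = 0; i < 8; i++) {
    const { sum, carry: c } = fullAdder((x >> i) & 1, (y >> i) & 1, carry);
    result |= sum << i;
    carry = c;
  }
  return result; // wraps around at 256, just like the hardware would
}

console.log(add8(3, 5)); // 8
```

Note how each function only has to trust the box below it: `add8` never mentions XOR or AND.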
As we build magic boxes, we start out verifying that the box works, but then we trust the magic within it.
Today we have the most magical box yet. We started with a GPU chip onto which ultra-precise machinery inscribed over 80 billion transistors, forming those primitive logic gates like magic runes. We gathered the totality of human knowledge, as expressed in language systems that are the product of billions of human minds over thousands of years. We cascaded that collected knowledge as tiny pulses of electricity across countless GPU chips, for a combined duration of thousands of years of compute. Within this raging tempest of tamed lightning we constructed a seemingly magical oracle that can answer our questions.
Now I can open my IDE and ask Amazon Q Developer:
And here is the result:
See the code
<style>
  .adder-container {
    font-family: monospace;
    margin: 20px 0;
  }
  .adder-input {
    display: flex;
    gap: 10px;
    margin-bottom: 20px;
  }
  .adder-input input {
    width: 60px;
  }
  .binary-display {
    margin: 5px 0;
  }
  .carry {
    color: #ff4444;
  }
  .result {
    color: #4444ff;
  }
  .step {
    margin: 2px 0;
  }
</style>
<div class="adder-container">
  <div class="adder-input">
    <div>
      <label for="num1">Number 1:</label>
      <input type="number" id="num1" min="0" max="255" value="0">
    </div>
    <div>
      <label for="num2">Number 2:</label>
      <input type="number" id="num2" min="0" max="255" value="0">
    </div>
  </div>
  <div id="visualization"></div>
</div>
<script>
  // Pad a number to an 8-character binary string.
  function padBinary(num) {
    return num.toString(2).padStart(8, '0');
  }

  function visualizeAddition(num1, num2) {
    const binary1 = padBinary(num1);
    const binary2 = padBinary(num2);
    let carry = 0;
    let steps = [];

    // Walk the bits from least significant (string index 7) to most significant.
    for (let i = 7; i >= 0; i--) {
      const bit1 = parseInt(binary1[i]);
      const bit2 = parseInt(binary2[i]);
      const carryIn = carry; // capture the incoming carry before overwriting it
      const sum = bit1 + bit2 + carryIn;
      const newBit = sum % 2;
      carry = Math.floor(sum / 2);
      steps.push({
        position: 7 - i, // bit 0 is the least significant bit
        step: `${bit1} + ${bit2}${carryIn ? ' + ' + carryIn : ''} = ${newBit}${carry ? ' (carry 1)' : ''}`,
        carry: carry
      });
    }

    const finalSum = num1 + num2;
    let html = `
      <div class="binary-display">Number 1: ${binary1} (${num1})</div>
      <div class="binary-display">Number 2: ${binary2} (${num2})</div>
      <div class="steps">
        ${steps.map(s => `
          <div class="step">
            Bit ${s.position}: ${s.step}
          </div>
        `).join('')}
      </div>
      <div class="binary-display result">Result: ${padBinary(finalSum)} (${finalSum})</div>
    `;
    document.getElementById('visualization').innerHTML = html;
  }

  // Clamp each input to the 0-255 range and re-render on every change.
  document.getElementById('num1').addEventListener('input', function() {
    const val = Math.min(255, Math.max(0, parseInt(this.value) || 0));
    this.value = val;
    visualizeAddition(val, parseInt(document.getElementById('num2').value) || 0);
  });
  document.getElementById('num2').addEventListener('input', function() {
    const val = Math.min(255, Math.max(0, parseInt(this.value) || 0));
    this.value = val;
    visualizeAddition(parseInt(document.getElementById('num1').value) || 0, val);
  });

  // Initial visualization
  visualizeAddition(0, 0);
</script>
Amazon Q Developer coded this little JavaScript widget and gave it back to me in about three seconds. I dropped the code into the page and played with it a bit to verify that it seemed correct. It would have taken me at least 30 minutes to find all the information and come up with an implementation for this. Instead I was able to treat my prompt as a “black box”. I can view the JavaScript and HTML code for this widget, but as far as I am concerned the source code for this widget is the prompt itself:
I need a JavaScript visualization of an 8 bit adder.
User should be able to enter two values that default to zero.
On update of the values, the adder should instantly calculate
the sum, no need to click a button to calculate.
It should show the binary of the two input numbers,
then a visualization of each iteration of the adder on a separate line
then the binary of the final value,
with a final decimal representation of the result.
Output result as styles and HTML that I can embed
directly in a markdown blogpost, no wrapping HTML page
Some people find this level of abstraction to be frightening.
But we deal with higher levels of abstraction all the time. In fact, this very article demonstrates extreme abstraction. The diagrams you see above are SVG files - collections of mathematical instructions that tell your browser how to draw shapes. I didn’t need to understand how the browser’s rendering engine works, or how it converts these abstract mathematical descriptions into pixels on your screen. I simply wrote (or in this case, asked an AI to write) XML-like tags that describe circles, lines, and text, trusting that the browser would in turn handle the complex work of turning those descriptions into visible images.
The same is true for the interactive 8-bit adder visualization. I don’t need to understand how JavaScript’s event loop works, or how the browser manages memory, or how it repaints the DOM when values change. I can work at a higher level of abstraction, thinking in terms of “when this input changes, update that output” and trust that the layers below will handle the details.
Even the way I got the code itself - through Amazon Q Developer - is a perfect example of working with abstractions. I didn’t need to understand the intricacies of natural language processing or how the AI model works. I just needed to write a clear prompt, and trust that the AI would handle the complexity of turning my high-level description into working code.
When I look at modern diagrams of AI workflows, it becomes clear that large language models are becoming just another magic box in the circuit diagram. There is a ghostly oracle in the circuits. We trust it even though we may not fully understand how it works, and it has become just as foundational as the well-known logic gates of the past.
I can’t wait to continue this journey of upwards abstraction, from logic gates, to language models, and beyond, building new layers of software on foundations we trust but may never fully understand.