Elephants, Systems Thinking, and AI

How do you eat an elephant?


One bite at a time!


Most people know this old adage and its standard interpretation. I learned a somewhat different reading years ago from an engineering coworker. His take was this: if you ever find yourself working on a problem and can't immediately see a path to a solution, break the problem into smaller problems. It's a little like the contrapositive of the original: if you're failing to chew and swallow, you took too big a bite.


This is a great lesson for all math and physics problems, and perhaps for all lessons in life. If a problem is big, break it into pieces. Continue until every piece can be easily solved, chewed, and swallowed, as in the little sketch below.
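As a concrete illustration (my own sketch, not part of the original adage), here is the classic merge-sort algorithm in Python. Sorting a million items sounds daunting, but the code never does anything harder than splitting a list in half and merging two already-sorted halves:

```python
def merge_sort(items):
    """Sort a list by repeatedly breaking it into smaller, easier pieces."""
    # A list of zero or one items is already sorted: a bite small enough to swallow.
    if len(items) <= 1:
        return items

    # Break the big problem into two smaller ones and solve each the same way.
    mid = len(items) // 2
    left = merge_sort(items[:mid])
    right = merge_sort(items[mid:])

    # Merging two sorted halves is a simple, well-defined task.
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged


print(merge_sort([5, 2, 9, 1, 7]))  # prints [1, 2, 5, 7, 9]
```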


Large engineering projects have used this logic since the beginning of time, whether it was building the pyramids or putting a man on the moon. In modern times we are surrounded by problems solved in this fashion, the best examples being our phones and laptops. A laptop runs software built from millions of lines of code, but no one person wrote all of it; the code lives in blocks of a hundred or so lines, each of which is straightforward on its own. Each software developer depends on many blocks of code written by others, which they incorporate into their portion of the whole, and no developer understands the entire software stack from top to bottom. Similarly, the main chips that execute the software are made from billions of transistors, all divided into blocks that perform simpler, easier-to-define operations.
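A toy Python sketch of that layering (the names and structure are hypothetical, chosen only to make the point): each function is a small, easily understood block, and the top-level routine is assembled from those blocks without its author needing to know how each one works inside.

```python
# Each block is short and simple; the whole is built from the blocks.

def checksum(data: bytes) -> int:
    """A tiny, self-contained piece: sum the bytes modulo 256."""
    return sum(data) % 256


def frame(data: bytes) -> bytes:
    """Another small piece: wrap the data with its length and checksum."""
    return len(data).to_bytes(4, "big") + data + bytes([checksum(data)])


def send(data: bytes, transport) -> None:
    """The 'top' of this little stack just composes the blocks below it.

    Whoever writes send() only needs frame()'s interface, not its internals,
    just as frame()'s author never looks inside checksum().
    """
    transport(frame(data))


# Usage: here the transport is just print, standing in for a real network layer.
send(b"hello", lambda packet: print(packet.hex()))
```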


People in most jobs have no need to understand this complexity, and in fact no life experience that would help them understand it. Even the engineers involved in a project can lose track of how their own small piece contributes to the final, complete creation.


The difficulty of seeing the full, final system is particularly apparent in modern ‘AI’ projects. It is often reported, usually quite breathlessly by tech pundits and journalists, that “even the engineers I interviewed didn’t understand what the AI does!” Well of course they didn’t! Everyone involved had a very limited view of the whole. Even the CEOs and technical architects cannot possibly comprehend all the details of implementation. When the project is initially divided up, only the problem to be solved is defined, not a method of implementation. Contributors at all levels can make decisions that ripple through other parts of the system and have unintended consequences. This is why so many test flights are required before a new airplane goes into service, or a new rocket flies.


When an ‘AI’ program provides surprising answers, it is not evidence of “emergent” behavior or superintelligence. It is more likely just the unintended consequence of a decision made deep in the design and implementation process. At the bottom level, the computer, the software, the AI, is just following very precise directions. Did the AI come across a false fact somewhere on the internet during its training scan of available information? Most likely, yes. Does it have any human-like ability to understand, interpret, or eliminate nonsense? No.