Morning, CEO!
Okay, so we’re all mildly terrified that AI is about to take over the world.
I’ve personally been practicing my “please spare me, robot overlord” speech for three years. I’ve also been analyzing my job to see if an AI could do it, and the results... were not comforting.
But this book, Artificial Unintelligence by Meredith Broussard, just walked in and told me to calm down.
Her big idea? We are wildly overestimating AI.
The stuff in Westworld is... not the stuff tech companies are actually building. What they’re building is, in her words, “artificial unintelligence.”
1. Why AI Can’t Solve Society
We all suffer from what she calls “techno-chauvinism.”
This is the very modern belief that for any messy, complicated human problem, “there’s an app for that.”
I am so guilty of this. My personal finances are a mess? I download a budgeting app. (They are still a mess, but now it’s a categorized mess).
The book gives a killer example: getting textbooks to students.
Sounds simple, right? Just, like, build a database? Track the inventory?
Wrong.
In the United States, even in relatively affluent states like New York, Washington and Pennsylvania, kids are still showing up to class without textbooks.
Why? Because it’s not a tech problem.
It’s a human problem.
It’s a tangled mess of:
Private companies charging $115 for one math book.
Government budgets that only cover $30.
A distribution chain that relies on a school principal remembering to fill out a spreadsheet... which they don’t.
You can build the most beautiful, elegant database in the world.
But if the principal won’t enter the data, or the budget doesn’t exist, the AI is just... sitting there. Looking expensive.
Bill Gates tried to fix education by bankrolling the standardized “Common Core” push. It was... not a wild success.
Here’s the takeaway: AI is great at “well-defined” engineering problems. Society is a “poorly-defined,” chaotic dumpster fire of human weirdness.
(This explains why I can’t get AI to organize my life. My life is not a well-defined problem.)
2. Why AI Can’t Even Solve (Some) Engineering
“Fine!” I yelled at the book. “We won’t solve society. But what about a pure engineering problem? Like self-driving cars!”
Yeah... about that.
You know the levels of self-driving? They run from 0 to 5.
Level 0 is you doing everything.
Level 5 is you napping in the back seat.
Right now, we are stuck. Hard stuck. At Level 2.
(Level 2 is “car drives, but you have to stare at it, ready to grab the wheel.”)
Why is Level 5 so impossible? Because AI can’t handle the unexpected.
AI learns from past data. It has no “common sense” for stuff it hasn’t seen a million times.
You, a human, know a plastic bag blowing across the road is fine, but a rock is bad.
The AI just sees “unidentified object.” PANIC. BRAKE.
The book lists real things Google’s cars found:
A person in an electric wheelchair... chasing a duck... in circles.
Try writing an algorithm for that.
This creates three huge problems.
2. Safety: A few stickers on a stop sign can make the car fail to recognize it as a stop sign at all. Snow? Rain? Forget it. A Tesla once famously mistook the side of a white truck for a “bright cloud.” That did not end well.
2. Ethics: The “Trolley Problem” is no longer a philosophy quiz. It’s code.
A car has to choose: swerve and hit a wall (bad for you, the driver) or continue and hit a group of schoolchildren (bad for... everyone else).
Mercedes already announced their cars will be programmed to save the driver first.
So... yeah. Their cars will choose to mow down the kids. Great. I feel super relaxed about this.
3. Economics: This is wild. You can use just 2% of your data to train a car to handle 80% of driving situations.
But that last 20% of situations, the weird stuff, the duck-chasing, eats up the other 98% of the data.
Who has that much data? Google.
This isn’t creating innovation. It’s creating a data monopoly.
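If you want to feel that long tail, here’s a toy simulation in Python. The scenario count and the Zipf-shaped frequencies are completely made up (nothing here comes from the book); the shape of the result is the point: the common situations show up in your logs almost immediately, and the rare ones eat orders of magnitude more data.

```python
import numpy as np

# Toy model: 10,000 possible driving scenarios with a long-tail (Zipf-like)
# frequency distribution. Scenario 1 is "empty highway"; scenario 10,000 is
# "wheelchair user chasing a duck in circles". All numbers are invented.
rng = np.random.default_rng(seed=42)
n_scenarios = 10_000
freq = 1.0 / np.arange(1, n_scenarios + 1)   # rare scenarios get tiny weights
freq /= freq.sum()                           # normalize to a probability dist.

# Log ever-larger piles of "driving data" and check how many distinct
# scenarios the dataset has actually witnessed at least once.
for n_samples in (10_000, 100_000, 1_000_000, 10_000_000):
    observed = rng.choice(n_scenarios, size=n_samples, p=freq)
    coverage = np.unique(observed).size / n_scenarios
    print(f"{n_samples:>12,} samples -> {coverage:6.1%} of scenarios seen")

# The common scenarios appear almost immediately; sweeping up the last few
# percent of rare ones takes orders of magnitude more data, which is why
# only companies with gigantic fleets can afford to chase the tail.
```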
3. The Part That Melted My Brain
This last part is about “fairness.”
In the US, an algorithm called COMPAS helps judges make sentencing decisions. It predicts the likelihood that a defendant will re-offend.
To be “fair,” the algorithm is not allowed to look at a person’s race.
And it seems to work! If a White person and a Black person both get a “7” (high-risk), their actual rate of re-offending is almost identical.
Fair, right?
...No.
The investigative newsroom ProPublica dug deeper. They looked at all the people the algorithm was wrong about.
(The people who got a “high-risk” score but never actually re-offended.)
Among this group of “wrongly accused” people, Black individuals were twice as likely as White individuals to be falsely flagged.
Wait... what? How? The algorithm didn’t even see race!
This is where I had to lie down for a minute.
It’s a mathematical paradox.
Because the Black defendants in the data have a higher recorded re-offense rate (for a million systemic reasons the AI knows nothing about), the algorithm’s errors will mathematically fall harder on that group.
This is the core, unsolvable problem.
You can choose to have an algorithm that is “accurate” (a “7” always means a 60% risk).
OR you can choose to have an algorithm that is “fair” (errors are spread evenly across all races).
You literally cannot, mathematically, have both, not as long as the two groups’ underlying re-offense rates differ.
The AI just forces us to make a brutal choice about what “fairness” even means.
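To see the trap with actual numbers, here’s a back-of-the-envelope sketch in Python. Every figure in it is invented for illustration (these are not the book’s or ProPublica’s numbers, and the 60% hit rate just echoes the “a 7 means 60% risk” idea above): if two groups have different underlying re-offense rates, a score that is equally “accurate” for both must hand out unequal false alarms.

```python
# Invented numbers to illustrate the calibration-vs-fairness trade-off.
# Assumption: the "high-risk" flag is equally accurate for both groups,
# i.e. 60% of flagged people really do re-offend, in Group A and B alike.

def false_positive_rate(base_rate, flag_rate, precision=0.60):
    """Share of people who never re-offend but still get flagged high-risk.

    base_rate : fraction of the group that actually re-offends
    flag_rate : fraction of the group the algorithm flags as high-risk
    precision : fraction of flagged people who really re-offend
                (the same for both groups, i.e. the score is "calibrated")
    """
    flagged = flag_rate                       # per person in the group
    true_positives = flagged * precision      # flagged AND re-offends
    false_positives = flagged - true_positives
    never_reoffend = 1.0 - base_rate
    return false_positives / never_reoffend

# Group A has a higher underlying re-offense rate, so the algorithm ends up
# flagging more of Group A just to keep its 60% hit rate.
fpr_a = false_positive_rate(base_rate=0.50, flag_rate=0.60)   # ~48%
fpr_b = false_positive_rate(base_rate=0.30, flag_rate=0.25)   # ~14%

print(f"Group A: {fpr_a:.0%} of non-re-offenders falsely flagged")
print(f"Group B: {fpr_b:.0%} of non-re-offenders falsely flagged")
```

Same accuracy for both groups, wildly different shares of people wrongly branded high-risk. The gap comes out of the arithmetic, not out of any single line of racist code.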
So, What’s the Point?
The point is that AI is not magic. It’s not a brilliant, all-knowing mind.
It’s a tool. A very powerful tool that just repeats the patterns it’s shown.
If the society that feeds it data is messy, biased, and full of human problems...
...the AI will just be a faster, more efficient, and scarier version of that same mess.
Technology can’t solve a human problem.
Only humans can.
Links:
https://meredithbroussard.com
https://www.amazon.com/Artificial-Unintelligence-Computers-Misunderstand-World/dp/0262038005