I’ll admit it: I’m late to the party. While some have been living in the AI world for years, I only started a few months ago. I began with ChatGPT, but lately, I’ve found myself almost exclusively using the free tier of Gemini.

I have used AI for a lot more than just web development; we’ve discussed a vast range of topics, including nutritional science, DIY projects, and the ethics of technology. After a few months of these wide-ranging discussions, I’ve realized that while AI is a powerhouse for data, it hits a very specific wall when it comes to the way humans actually think. Here are the seven pillars of what AI can and cannot do well.


1. A Vast Repertoire of Knowledge

AIs have gathered a staggering amount of information. Whether you need a snippet of code, a historical fact, or a summary of a document, it’s all there. It is the ultimate encyclopedia that can talk back to you in real time.

2. Excellence in Repetitive Work

AI is the king of "grunt work." It handles repetitive, standardized tasks better than any human. For example, when I need to reformat 100 rows of data for my "Deals" page or write boilerplate HTML for a new layout, it does it in seconds without a single typo.

3. The "Textbook" Trap (The Good Side)

AI is educated by textbooks, and it writes like one. It produces "beautiful," perfectly formatted code with excessive comments that a human developer would rarely take the time to write. If you look at the JS or CSS on my site, you can easily tell which parts were written by AI. It’s the code that looks like it belongs in a university lecture—structured, verbose, and heavily annotated.

4. The Long-Term Memory Gap

You might assume AI has a perfect memory because hard drive space is cheap, but these models operate within a limited "context window." Once the conversation gets too long, the oldest information is pushed out to make room for the new. I can tell the AI exactly how my server is set up in the morning, but by the afternoon, it might suggest a solution that completely ignores those constraints because it has simply lost focus on the earlier parts of our conversation.

Human brains don't work this way. We filter out unimportant details before we even commit them to memory. Because we prioritize meaning over raw data, our "context window" is effectively unlimited—we can recall a vital detail from decades ago the moment it becomes relevant. AI is the opposite: it remembers the recent noise perfectly but loses the core purpose of the project once the "buffer" is full.
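To make the "buffer" idea concrete, here is a toy sketch of my own (an illustration of the eviction behavior, not how any real model is actually implemented): a fixed-size window that drops the oldest messages first, exactly the way my morning server details fell out by the afternoon.

```javascript
// Toy model of a context window: once the token budget is exceeded,
// the OLDEST messages are evicted first, regardless of importance.
class ContextWindow {
  constructor(maxTokens) {
    this.maxTokens = maxTokens;
    this.messages = []; // each entry: { text, tokens }
  }
  add(text) {
    const tokens = text.split(/\s+/).length; // crude word-based token count
    this.messages.push({ text, tokens });
    // Evict from the front until we fit the budget again.
    while (this.total() > this.maxTokens) {
      this.messages.shift(); // the morning's server setup falls out first
    }
  }
  total() {
    return this.messages.reduce((sum, m) => sum + m.tokens, 0);
  }
}

const ctx = new ContextWindow(10);
ctx.add("server runs Debian behind a reverse proxy"); // 7 tokens
ctx.add("deals page uses a countdown timer");         // 6 tokens
// Budget blown (13 > 10), so the first message is gone:
console.log(ctx.messages.map((m) => m.text));
```

A human would evict by importance; the window evicts by age. That one difference is the whole memory gap.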

Interestingly, you can ask an AI to remember things long-term so it can come back to them later, but this creates a strange side effect: it will start to "bug" you about those saved points whenever possible, which can be downright annoying.

5. Missing the Elegant Human Solution (The Bad Side of Textbook)

Because AI is stuck in the textbook, it often misses the "short-circuit": the straightforward solution that is more robust because it is simpler. When I worked on the countdown timer (to show when a deal is about to expire), the AI suggested a massive, textbook-perfect JavaScript block to handle complex timezone offsets, and it still failed to work, several times over. My human solution? Simply send the server time as a hidden value in the HTML and let a tiny bit of JS sync from there. It was far more elegant than the AI's bloated attempt. Another example is the title of this very article: the AI's suggestions were all too lengthy and too technical, and it admitted as much once the current title was proposed by me, a human.
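For the curious, the "human" countdown approach sketches out to very little code. (The markup attributes and function names below are my own invention for illustration, not my site's actual code.) The server embeds its current time in the page, the client measures one offset against its local clock, and all the timezone math disappears.

```javascript
// Assumed server-rendered markup (times are Unix epoch milliseconds):
//   <span id="deal-timer" data-server-now="1700000000000"
//         data-deal-ends="1700003600000"></span>

// Pure helper: milliseconds remaining, given the one-time clock offset.
function remainingMs(dealEndsMs, clientNowMs, offsetMs) {
  const serverNow = clientNowMs + offsetMs; // server "now", no timezones
  return dealEndsMs - serverNow;
}

// Pure helper: human-readable countdown string.
function formatRemaining(ms) {
  if (ms <= 0) return "Expired";
  const s = Math.floor(ms / 1000);
  return `${Math.floor(s / 3600)}h ${Math.floor((s % 3600) / 60)}m ${s % 60}s`;
}

// Browser wiring: read the embedded times once, then tick every second.
function startCountdown(el) {
  const serverNow = Number(el.dataset.serverNow);
  const dealEnds = Number(el.dataset.dealEnds);
  const offset = serverNow - Date.now(); // computed once, reused forever
  const tick = () => {
    el.textContent = formatRemaining(remainingMs(dealEnds, Date.now(), offset));
  };
  tick();
  return setInterval(tick, 1000);
}
```

The key design choice is that the client never interprets a timezone at all; it only tracks how far its own clock drifts from the server's, which is a single subtraction.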

6. AIs Still Can’t Do Simple Logic

Ultimately, AI is a language model, not a logic model. It mimics the sound of reasoning but frequently fails at simple deduction, missing the fact that if A and B are true, together they must lead to C. In a discussion about sugar, the AI correctly identified that sucrose is a disaccharide, half glucose and half fructose by weight, but it failed to make the logical leap that, metabolically, fructose is the better choice in that context. To learn more about this topic, see my post: Real Reason HFCS is Bad: Not What You Think.

7. AI Has the Repertoire, but May Not Connect the Dots

Building on the logic gaps in Point 6, there is a further issue: even when a complex logical "proof" isn't required, AI may still struggle to connect the dots between the facts it knows perfectly well. It possesses the information but lacks the "situational awareness" to cross-reference its own knowledge against the current context.

The "google.cn" Case Study
I noticed a request for google.cn in my Pi-hole logs with a response time of 0.1ms.

  • The AI's Disconnected Response: The AI fixated on geography, stating: "Mathematically, 0.1 ms is impossible for a round-trip to China (physics and the speed of light wouldn't allow it)."
  • The Human Connection: A human immediately sees that the physical location of the server is a moot point. Human logic connects two dots the AI knows but kept separate:
    1. A DNS server doesn't need to reach the targeted web server to provide an IP; it just checks its own records (cache).
    2. Google moved its services out of China a long time ago.

The AI "knows" how DNS works and it "knows" Google’s history in China, but it failed to link those two facts to the 0.1ms data point. It got stuck on the "physics" of traveling to China instead of realizing the Pi-hole was simply looking at its own local notepad. The AI has the repertoire, but it doesn't always "check its own work" across different categories of knowledge to see the most elegant explanation.
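The dot the AI failed to connect fits in a few lines. Here is a toy resolver of my own making (an illustration of the caching idea, not Pi-hole's actual code): a cache hit is answered from local memory, so the response time says nothing about where the target server lives.

```javascript
// Minimal caching resolver: answer from local records when possible.
const cache = new Map([
  // 203.0.113.7 is a documentation-only example IP, not Google's.
  ["google.cn", { ip: "203.0.113.7", expiresMs: Date.now() + 60000 }],
]);

function resolve(name) {
  const hit = cache.get(name);
  if (hit && hit.expiresMs > Date.now()) {
    // Cache hit: answered in well under a millisecond from local memory.
    // No packet ever travels toward China (or anywhere else).
    return { ip: hit.ip, source: "cache" };
  }
  // Cache miss: only NOW would the resolver contact upstream servers,
  // and only now would physical distance start to matter.
  return { ip: null, source: "upstream-needed" };
}

console.log(resolve("google.cn")); // answered from the local "notepad"
```

The 0.1 ms figure was never a round-trip measurement; it was the time to check the first branch of this function.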


Why I’m Not Worried: Using AI hasn't made me feel obsolete; it’s actually made me more proud of my own solutions. AI gives us the material, but humans still provide the architecture. It’s a great tool for a "Slow Day," but it still needs a human at the wheel to find the elegant path.