How intelligent is artificial intelligence?
"AI will write 90% of code" – really?
Anthropic's CEO Dario Amodei predicted it, the benchmarks promise it, the tech industry believes it. The only problem: reality looks different. Chris Wolf explains why AI isn't intelligent, where it actually helps, and why junior developers are threatened – while seniors stay. An honest look behind the hype: what AI can do, what it can't, and why we need to stop confusing probability calculations with thinking.
The image was created with ChatGPT.
Etheldreda: Chris, we're currently witnessing an unprecedented wave of layoffs in software engineering. Many professionals are facing an uncertain future. You're both a software engineer and CEO. How do you assess the current threat posed by AI to human developers? Do you believe AI will replace software engineers in the long run?
Chris: No, I don't think so. A widely cited statement came from Anthropic's CEO at a Council on Foreign Relations event in March 2025. He said that within three to six months, AI would write 90% of code, and within a year, practically all of it. Nine months later, we're not seeing that happen.
It's similar with many benchmarks claiming these models are getting smarter and smarter. OpenAI said GPT-4 reached the 90th percentile on the bar exam. MIT researchers challenged this: the comparison group was skewed because it consisted largely of repeat test-takers who had previously failed, which inflated the percentile.
So you have to ask yourself: Why can AI solve these exam questions at all? Simply because the questions and answers were in the training material. And that's exactly my core argument: We must not forget how AI models are trained and how this so-called "intelligence" emerges.
To put this in context, it's worth looking at the human infrastructure behind AI: thousands of so-called click workers, often dubbed "Mechanical Turks" after Amazon's crowdsourcing platform of the same name. They categorize images, texts, and videos for these models. Part of their work involves tagging images: cat, dog, house, house with burning roof.
They're paid per click, often just a few cents, under conditions that wouldn't be acceptable in countries with strong labor protections like Germany. In countries like Indonesia or Madagascar, these people work eight to ten hours daily and sometimes view disturbing content for hours to train content filtering systems for platforms like Facebook.
This raises a fundamental question: Is AI truly intelligent, or is it just repackaging and reselling human labor? Why does AI seem to "know everything"? Because it's consumed massive amounts of copyrighted content – essentially every digitized book. The authors who invested time in research and writing receive no compensation while others profit from this intellectual property.
Back to software engineering. AI has clear advantages. It brings genuine productivity gains. Tasks that used to require an hour or two of coding with trial and error can now be done in 15 minutes of problem analysis and prompt writing. An AI agent like Claude can generate ten to fifteen code files in a minute. That's impressive.
My role shifts in the process. I check whether the results meet specifications instead of writing every line myself. Overall, this process is significantly faster than doing everything manually.
But there's a crucial difference. As an experienced developer, I consciously choose speed over practice. In our company, especially in senior positions, a high level of mastery and continuous improvement are expected to accelerate daily workflows. As a senior software engineer, I've learned my craft and constantly refine it.
For junior developers, relying exclusively on AI is toxic. They need those 10,000 hours to achieve true mastery. Only at the senior level does using AI become strategic. Then I ask myself: Do I need to solve this problem manually again, or does speed help me focus on more complex and meaningful challenges? Only experienced developers can make this decision sensibly.
AI excels primarily at low-priority tasks where typing speed is the bottleneck. What costs me one to two hours of brainstorming, design, and typing, an AI agent does instantly, creating ten files with 1,000 lines of code each. For this "grunt work" – typing, executing, testing – AI is extremely valuable. It takes away monotonous but necessary work and creates space for higher-value tasks.
This allows me to focus on critical architectural decisions. Database design, for example, I rarely delegate to AI. When we design the data structure for a new project, we senior developers sit together and develop it ourselves. Once the table model is documented, we hand it to AI to create an Entity-Relationship Diagram. This is exactly where AI is strong: it transforms our specifications into visual documentation that's versionable, shareable, and permanently usable. We used to sketch this on paper, which quickly became outdated and ended up in the trash.
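To make that hand-off concrete, here is a minimal sketch; the table and field names are invented for illustration. The point is the division of labor: the seniors document the table model first, and the AI is only asked to render it.

```typescript
// Hypothetical table model, documented by the senior developers
// before any AI is involved. All names here are invented.

interface Customer {
  id: number;       // primary key
  email: string;
  createdAt: Date;
}

interface Order {
  id: number;         // primary key
  customerId: number; // foreign key -> Customer.id (one customer, many orders)
  totalCents: number; // money stored as integer cents
}

// The prompt handed to the model alongside these definitions:
// "Render this documented table model as an ER diagram.
//  Do not add, rename, or remove any fields."
```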
Interestingly, AI promotes better documentation. To work effectively with AI, you must think through problems, verbalize them, and write them down. This gives AI the structured input it needs, but simultaneously creates valuable documentation for the team.
Etheldreda: You emphasize the importance of documentation. Isn't there a fundamental limitation here? LLMs are entirely dependent on the quality of input data. If they can't think independently or question information, aren't they less intelligent than often claimed?
Chris: Yes, absolutely. Can AI think? No. It has no real reasoning and no intrinsic motivation. The term "Artificial Intelligence" is the core problem in public debate. Anyone in the software industry who seriously engages with the topic quickly understands what AI actually is and how it generates results.
Among software engineers, there's broad consensus that this technology is mislabeled. It's not intelligent and doesn't think. Terms like "Thinking Models" or "Reasoning Models" are marketing. Nevertheless, these systems deliver interesting results based on their training.
Recently there was a demo where ChatGPT had to solve a very complex logic puzzle. Not a simple sudoku, but a puzzle with nested data patterns, false clues, and targeted distractions. Such puzzles often keep people busy for hours. The puzzle was scanned and entered as a prompt. ChatGPT solved it in 45 minutes. That's impressive, no question. But it's not human-like intelligence and not AGI. It's a showcase of capabilities, nothing more.
Etheldreda: So can AI really work autonomously without human control, or does it always need human oversight?
Chris: No. AI cannot work autonomously without human oversight. As I said, it lacks any intrinsic motivation. Of course I think about how I can get more out of my AI coding subscriptions. But I can't let AI work unsupervised. Everything it produces must be reviewed.
It doesn't take over 100% of my work. In some cases, it generates faulty or bad code that I have to completely rewrite. Most of the time it saves time, but occasionally it produces nonsense that requires complete rework.
Etheldreda: I recently read a comment on Reddit: "My entire workflow has changed from 'fixing junior code' to 'fixing AI code.'" Does that match your experience?
Chris: Yes, that's widespread. Overall, I still get more done more easily. And as I said: basic tasks, or "grunt work" as the creator of Tailwind CSS calls it, I gladly hand over to AI.
Routine tasks could always be automated or templated; AI just makes this more flexible. If the project is cleanly designed, with component libraries and clear structures, you can formulate precise prompts. The LLM then acts like a junior engineer and delivers usable results.
The further you get into senior level, the more the models hit their limits. There are complex, architecture-relevant decisions and logic where they fail. One of our senior developers works on a very complex backend in an object-oriented structure. When he uses Claude, he's almost always disappointed. The generated code gets thrown away because it would degrade backend quality.
It also depends heavily on the use case and programming language. LLMs have a clear bias toward JavaScript because it's the most widespread language with the most training data. Additionally, the training data is usually a mix of very recent information and information that is several years old. That's why you have to explicitly provide current code examples in the prompt to get modern implementations.
Etheldreda: How do you specifically ensure that AI generates code for current frameworks and versions?
Chris: The key is in the context. You have to provide current examples. When I work with React, I have to explicitly state that we're using version 19. Otherwise the model falls back on old patterns.
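A small sketch of what "old patterns" means here, with invented component names: React 19 accepts a ref as a regular prop, but a model trained mostly on older code will typically reach for the pre-19 forwardRef wrapper unless the version is stated in the prompt.

```tsx
import { forwardRef, type Ref } from "react";

// Pre-React-19 pattern that models trained on older code tend to emit:
const LegacyInput = forwardRef<HTMLInputElement, { label: string }>(
  ({ label }, ref) => <input ref={ref} aria-label={label} />
);

// React 19 pattern: ref is an ordinary prop, no forwardRef wrapper needed.
// You rarely get this version unless the prompt says "we use React 19".
function ModernInput({ label, ref }: { label: string; ref?: Ref<HTMLInputElement> }) {
  return <input ref={ref} aria-label={label} />;
}

export { LegacyInput, ModernInput };
```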
This gets to the core: LLMs work with probabilities. To "Hello, how are you?" the most probable response is "I'm fine." Nothing more happens.
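A toy sketch makes this concrete. The probabilities below are invented, and a real model scores individual tokens and samples from a distribution rather than ranking whole sentences, but the core mechanic is the same:

```typescript
// Invented probabilities for continuations of "Hello, how are you?".
// A real LLM scores the next token and samples; this toy just picks
// the most probable whole reply to show the mechanic.
const continuations: Record<string, number> = {
  "I'm fine.": 0.62,
  "Fine, thanks, and you?": 0.23,
  "Why do you ask?": 0.09,
  "Quantum flux nominal.": 0.0001,
};

function mostProbable(scores: Record<string, number>): string {
  let best = "";
  let bestScore = -Infinity;
  for (const [reply, score] of Object.entries(scores)) {
    if (score > bestScore) {
      best = reply;
      bestScore = score;
    }
  }
  return best;
}

console.log(mostProbable(continuations)); // "I'm fine." – and nothing more happened
```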
If you ask about the President of the USA, you might get an outdated answer because certain names appeared more frequently in training. The model has no sense of time. To correct this, providers override such facts via system prompts. That's not reasoning, it's hardcoding by engineers.
There's no real thinking, just probability calculations optimized for user satisfaction and revenue. Language follows patterns that can be calculated. That's why AI "understands" the languages it was trained on. Ask it about the very local language of a small ethnic group, though, and it hallucinates or declares the language to be invented.
Etheldreda: Finally, two points. First: What advice do you give junior developers who are very worried about their careers? Second: Neil Patel says AI platforms like Google Gemini are becoming the new marketplace for products and services. Doesn't this lead to massive distortions in favor of strong brands with good SEO?
Chris: In e-commerce, we're seeing AI shorten the buyer journey. The consulting portion of sales is increasingly taken over by AI. Previously, buyers researched on many websites, watched videos, and compared providers. That was very individual.
Today, AI delivers tailored answers to very specific questions. Whether these are always correct is questionable. But they feel personal. This compresses the buying process significantly. AI recommends products, providers, and even integrates purchase buttons. Platforms like Etsy are already implementing this. ChatGPT is planning its own marketplace.
We're seeing this ourselves: people discuss business ideas, costs, and implementation with Gemini or ChatGPT and then contact us because we were mentioned in those conversations. This is already happening. It's called Generative Engine Optimization (GEO), the successor to traditional SEO. For marketers this matters, because we still need to reach clients, advise them, and convince them.
Etheldreda: And who actually provides the information that AI reuses?
Chris: That's exactly the core issue. We invest heavily in content marketing and produce well-founded, honest content. AI uses this content and sometimes recommends competitors. That's a real dilemma. Either you opt out and become invisible, or you continue to demonstrate expertise even as AI exploits it. This can't be reversed.
Etheldreda: Back to the juniors. What do you specifically advise them?
Chris: Junior developers are definitely threatened; they're the easiest to replace. We see this very clearly in our work with mid-level freelancers: we train them for a specific project, they build up expertise, and then they move on. Why hire juniors when you invest in them and then lose them? If we have the capacity, it's better for us to do the task ourselves with AI support.
Low-level tasks are already being replaced by AI. That's why there are fewer junior positions. Companies believe they can save costs. Short-term that might work, but long-term it's not sustainable.
AI is based on yesterday's data. To have seniors tomorrow, we need to train juniors. Every senior was once a junior. This pattern isn't new. With every technological upheaval, simple tasks disappear first. Those who master their craft and deliver real value remain relevant. Those who can't will be replaced by machines. That's always been the case.