
Overcoming the Challenges of AI-Generated Code


By Andrew Park | 2025-05-29



AI can rapidly generate tons of code—but is that code technical debt or technical wealth?


Technical debt from poor decisions increases the time needed to understand, debug, rework, and enhance code. That debt liability can come back to bite you with increased maintenance costs. In contrast, technical wealth sets you up with an asset of code you can carry forward into future development efforts.


While AI can accelerate product development, the code it generates is often riddled with technical debt, making it poorly suited to sustained maintenance. Code maintenance is still a human-dominated endeavor requiring judgment, foresight, business understanding, and the ability to comprehend the interconnections of a large codebase.


What kinds of technical debt commonly exist in AI-generated code, and how can you transform that technical debt into technical wealth?


There are numerous problems with AI-generated technical debt that are important to understand and overcome:


1. Lack of Human Codebase Understanding

AI can’t magically transfer knowledge of a codebase into someone’s head.


When engineers read, write, or debug code, they’re building a mental model of how it works. This extensive codebase knowledge becomes an important asset over time—enabling safe and rapid product enhancements.


When an AI-generated codebase is dumped on an engineer, they have none of that understanding. It’s like coming into a fresh codebase they’ve never seen before—they need to spend time ramping up, learning both high-level architecture and low-level implementation details, to take the codebase forward.


While this ramp-up time cannot be eliminated, AI can shorten the process, both when initially prompting for AI-generated code and when learning an AI-generated codebase after the fact. Ask AI to generate well-documented code with good coding style up front, and use AI to ask questions that give insights into unfamiliar code.


2. Poor Architecture & Design

AI lacks the craftsmanship and fine-tuned judgment that senior engineers bring to the table. This results in code architecture that is overly simplistic, narrowly focused on delivering functionality without considering bigger-picture issues. That in turn leads to falling short on the quality attributes that make a product really stand out.


Software architecture must also be simple enough for human minds to understand. With its myopic, narrow bias, AI cannot take in the full scope of a large software product, which is needed to design an architecture that is simple while meeting all product needs. The “simplistic” approach of AI fails to conquer the complexity of the problem and deliver a suitably “simple” architecture for future human maintenance. With limited context windows, this problem only grows as the codebase does.


AI can suggest improvements to code for certain quality attributes like performance, but only at localized scales. AI-generated code, particularly over multiple prompt iterations, can suffer from code bloat, where redundant calls cause performance problems. A skilled engineer can understand complex data flows and improve performance over AI-generated code, turning something that takes seconds into something near instantaneous.
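As a minimal sketch of the kind of redundant-call bloat described above (names and shapes are hypothetical), consider an aggregate recomputed on every iteration versus hoisted out once:

```javascript
// Bloated pattern AI iterations can accumulate: the total is recomputed
// once per item, making the whole function O(n^2).
function scoreItemsBloated(items) {
  return items.map(item => item.value / items.reduce((s, i) => s + i.value, 0));
}

// Hoisting the invariant computation does the same work in O(n).
function scoreItems(items) {
  const total = items.reduce((s, i) => s + i.value, 0);
  return items.map(item => item.value / total);
}
```

Both return the same result; a skilled engineer recognizes the invariant and lifts it out, which AI often misses across prompt iterations.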


Code bloat can affect more than quality attributes like performance. Given AI’s bias toward generating code rather than fully understanding a codebase, it will often generate duplicate code close to where it’s needed without considering the larger context of a project: is there existing code to re-use, and where is the best place to put new code? Discernment from a skilled engineer is important for these decisions.


For example, in the following AI-generated React code, there’s a slider coupled to a histogram:
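A minimal sketch of this kind of code (names are hypothetical; `h` stands in for `React.createElement` so the sketch is self-contained):

```javascript
// Stand-in for React.createElement; in real code these would be JSX components.
const h = (type, props, ...children) => ({ type, props, children });

// AI-generated pattern: a range slider wired inline to a histogram,
// written specifically for one data set (prices).
function PriceFilter({ prices, min, max, value, onChange }) {
  return h('div', { className: 'price-filter' },
    h('input', { type: 'range', min, max, value, onChange }),
    h('histogram', { data: prices, min, max })
  );
}
```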



Such couplings of sliders with histograms might occur many times in a UI, but AI will fail to recognize the conceptual coupling involved and keep generating repeated instances of code following the pattern above for different data sets. Human insight is needed to recognize that code like the above can be factored into a single UI component resulting in usage like the following:
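A sketch of the factored form (again hypothetical names, with `h` standing in for `React.createElement`):

```javascript
const h = (type, props, ...children) => ({ type, props, children });

// The slider-histogram coupling now lives in one reusable component,
// parameterized by data set instead of duplicated per use.
function RangeHistogram({ data, min, max, value, onChange }) {
  return h('div', { className: 'range-histogram' },
    h('input', { type: 'range', min, max, value, onChange }),
    h('histogram', { data, min, max })
  );
}

// Different data sets reuse the same component:
const priceFilter = h(RangeHistogram, { data: [10, 20], min: 0, max: 100 });
const ageFilter = h(RangeHistogram, { data: [25, 40], min: 0, max: 120 });
```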



Prompting AI early on with architectural guidance can minimize the amount of architectural technical debt that needs to be reworked later. For what can’t be tackled early on, AI can assist with larger-scale refactorings to help bring a codebase into a desired shape.


3. Lack of Product Knowledge

Product knowledge across an entire team is important for maintenance and evolution. Product managers emphasize understanding the “what” and “why” of the broader context:


  1. “What”: The product’s goals, features, and what the customer or user truly needs. It’s about clarity on the problem being solved and the desired outcome.

  2. “Why”: The reasoning behind product decisions—why a feature or approach is necessary (from a business and user perspective). This alignment helps ensure engineering work is targeted and strategic.


Many engineers only focus on the “how”—the technical execution of a solution—which can lead to inefficiency, wasted effort, or missed opportunities for impact. AI exacerbates this problem—focusing even more on the “how” and missing much of the broader context human engineers might have.


By inviting engineers into the full product lifecycle, from vision to release, they become empowered with more product knowledge to embed into the code itself. AI can be prompted to incorporate some product knowledge from documents like PRDs or chats into the code it generates, but there is a lot it will miss, particularly the “why” behind product decisions.


Infusing product knowledge into the codebase itself not only helps existing team members work more efficiently but also speeds up onboarding for new engineers.


4. Lack of Good Documentation

AI can be prompted to add documentation during initial code generation or after-the-fact. It can do particularly well at generating function header and API documentation that can be processed by common documentation generators. Using AI in this way can be a good accelerant for getting a minimum standard of documentation in place for an otherwise undocumented codebase.


However, as good as it might be, AI-generated documentation falls short of the craftsmanship needed for an easily maintainable codebase. Comments generated by AI tend to focus on the “how” of the technical solution, or at best the “what” the code is doing. They end up missing all the good product-level knowledge.


Even at a technical level, AI will miss much of the “intent” or “why” some code is written a certain way that can be crucially illuminating for maintenance programmers.
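A small contrast of the two comment styles (the constants, numbers, and business reasoning here are entirely hypothetical, for illustration):

```javascript
// AI-style comment: restates what the code does.
// Retry up to three times with a 250 ms base delay.
const MAX_RETRIES = 3;
const RETRY_DELAY_MS = 250;

// Intent-level comment a maintainer actually needs (hypothetical reasoning):
// The payment gateway's rate limiter resets roughly every 200 ms, so retrying
// sooner guarantees a rejection, and more than 3 attempts would exceed the
// checkout latency budget agreed with the product team.
function backoffSchedule() {
  return Array.from({ length: MAX_RETRIES }, (_, i) => RETRY_DELAY_MS * (i + 1));
}
```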


AI-generated documentation isn’t enough, and learning how to transform that into high-quality documentation is key for a successful codebase. Whether you’re writing high-level product or lower-level technical documentation, mastering techniques for crafting great documentation can help make comments more efficient while incorporating important details AI will miss.


5. Bias Toward Code Previously Trained-On

AI is biased toward generating code similar to what it was trained on. The valuable part of that is you get to benefit from accumulated knowledge of how code has been written to solve problems in the past.


The downside is that much of that code is poorly written, riddled with technical debt, and difficult to understand. And if your coding standards mandate styles that differ from that large body of code, you can end up with a lot of AI-generated code that is hard to read because it conflicts with internal standards.


Another side effect of relying on previously trained code is that a lot of code out there includes security vulnerabilities. By leaning on previous code without proper security knowledge, AI can easily copy those vulnerabilities into new software. Human oversight and code review are essential, which means the knowledge to assess and review AI-generated code remains a critical skill.
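As one common illustration of a vulnerability AI can reproduce from training data (the query-object shape is a hypothetical driver convention), compare string-concatenated SQL with a parameterized query:

```javascript
// Vulnerable pattern abundant in training data: user input is spliced
// directly into the SQL string, enabling injection.
function findUserUnsafe(name) {
  return `SELECT * FROM users WHERE name = '${name}'`;
}

// Safer shape: placeholder plus a separate values array, as most SQL
// drivers expect (hypothetical driver convention shown here).
function findUserSafe(name) {
  return { text: 'SELECT * FROM users WHERE name = $1', values: [name] };
}
```

With the parameterized form, hostile input stays in the values array and never alters the query text, which is exactly the property a human reviewer should be checking for.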


AI tends to do better at generating good variable names compared to the worst codebases, but it still has limits, particularly in more nuanced cases with many similar variables, and it often leaves “magic numbers” throughout codebases. Learning to master good variable naming techniques can improve maintainability of code beyond what AI can do.
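A small sketch of the magic-number problem and its fix (thresholds and their rationale are hypothetical):

```javascript
// Magic numbers AI often leaves behind: what do 18 and 0.75 mean?
function isEligibleRaw(user) {
  return user.age >= 18 && user.score > 0.75;
}

// Named constants carry the intent and make changes safer.
const MINIMUM_AGE = 18;                 // legal adult threshold for this market
const APPROVAL_SCORE_THRESHOLD = 0.75;  // cutoff tuned by the risk team
function isEligible(user) {
  return user.age >= MINIMUM_AGE && user.score > APPROVAL_SCORE_THRESHOLD;
}
```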


While training data and context window limitations cannot be fully overcome, when using AI coding tools you want to set up the code for success as early as possible. That means capturing your most important coding standards in prompts. Seed examples of “good code” in your codebase early on for AI to examine and leverage; this is especially valuable for AI tab completion.


6. Poor Understanding of Debugging

AI lacks a good understanding of efficient debugging principles. Given an error message, AI can be fairly good at determining solutions, at least for common problems.


However, AI-generated code is often written in a way that doesn’t support efficient debugging, with complex expressions combined in a single line that hinder debugger usage.
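For instance (a hypothetical pipeline), compare a dense one-liner with a stepwise version where each intermediate can be inspected or breakpointed in a debugger:

```javascript
// Dense one-liner: no place to set a breakpoint or inspect intermediates.
const topNamesDense = users =>
  users.filter(u => u.active).map(u => u.name.trim().toLowerCase()).sort().slice(0, 3);

// Stepwise version: each intermediate is a named value a debugger can show.
function topNames(users) {
  const active = users.filter(u => u.active);
  const names = active.map(u => u.name.trim().toLowerCase());
  names.sort();
  return names.slice(0, 3);
}
```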


AI struggles at debugging complex problems that require “seeing” the real data and understanding how it flows through a system. AI will often “guess” at solutions before confirming the root cause, adding extra entropy to a codebase and wasting time. Even when AI eventually reasons to add more debug logging, it struggles to place that logging at effective points with the right visibility, often missing important logging points or adding extraneous logging that just creates clutter.


Understanding a system’s architecture, how data flows through it, and effective debugging principles are essential for software maintenance and can make an engineer a more effective contributor.


7. Inability to Handle Innovative Data Transformations

AI can handle data transformations…at least if they’re simple or something it has seen before. It can sometimes handle more complex transformations too, provided it is given enough sample data and the transformation steps are spelled out in sufficient detail beforehand.


However, for truly complex, innovative transformations, AI can struggle to handle all the nuances in real-world data. Where complex conditional logic exists or important considerations aren’t in sample data, AI can fall short of properly handling data as needed for robust software.
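As a hedged sketch of the kind of real-world messiness meant here (the input formats are hypothetical), a quantity field might arrive as a number, a thousands-separated string, “N/A”, or a suffixed value like “12.5k”, and robust handling needs conditional logic that rarely appears in small prompt samples:

```javascript
// Normalizes heterogeneous quantity values to a number, or null when absent.
// The set of formats handled here is a hypothetical example.
function parseQuantity(raw) {
  if (raw == null) return null;
  const s = String(raw).trim();
  if (s === '' || s.toUpperCase() === 'N/A') return null;
  const suffixed = s.match(/^(\d+(?:\.\d+)?)k$/i); // "12.5k" -> 12500
  if (suffixed) return Number(suffixed[1]) * 1000;
  const n = Number(s.replace(/,/g, ''));           // "1,234" -> 1234
  return Number.isNaN(n) ? null : n;
}
```

Each branch encodes a judgment about the data that an engineer who knows the domain can make and document, and that AI given only clean samples will miss.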


This is where deep human understanding of the data and its problem domain are important. When human engineers have a deep understanding of the data, they can document data formats and considerations more effectively for use in AI prompts. And when AI falls short, they are equipped to fill in the gaps to implement innovative software features.


Conclusion

AI can set you up with code but not the technical wealth needed for long-term success. To eliminate technical debt like that identified above, engineers need to grow their software craftsmanship talent and master applying those skills in an AI-accelerated development environment.
