When code is considered technical debt

I have spent a while considering what technical debt is and concluded there is no definition precise enough to estimate the size of the problem. It is more practical, instead, to understand when code should be considered technical debt. Many people don’t know when to tackle technical debt and often choose to tackle it at the wrong time.

It’s common to hear people talking about technical debt as either code which is hard to maintain or code which does not conform to their idea of good code. Fashionable best practices often influence the latter. I will use these two ideas to loosely define technical debt. I believe there isn’t a definition that can quantify debt absolutely, so I’ll use a reasonably simple definition that mirrors how developers think about it in the real world.

At True Clarity we have several separate teams all working on enterprise web applications using fashionable best practices. Each team has roughly the same distribution of skills and experience and equally challenging work. I have reviewed each team’s code many times, and each team does have markedly different levels of technical debt.

The interesting thing is that all teams have a reasonably close level of productivity.

I have made similar observations throughout my career. The complexity of the code is often not a contributing factor to productivity. My hypothesis is that familiarity with the code you are working on stops technical debt being a problem. Put another way: the impact of technical debt is inversely related to your familiarity with it.

This leads to some interesting conclusions:

  • If you have never seen a particular code base before you are more likely to be unproductive and you are more likely to feel like there is technical debt.
  • If a code base does not use the coding practices you believe are “best”, it is less likely to seem familiar and therefore you will think it is technical debt.

This highlights the high degree of coupling between productivity and our natural familiarity with something. This statement is obvious in itself, but with code, people seem to forget it.

Another observation at True Clarity is that a team tends to be at its least productive when we put a brand new person on that team. How quickly that person gets up to speed depends on how easy the code is to read and make changes to (factors like how much time colleagues spend helping the person tend to be equal among the teams).

This means we pay the greatest interest on technical debt when people are new to the code they are working on. This is a pretty consistent observation, and I haven’t seen contrary behaviour (taking motivation out of the equation).

It turns out that you do not always pay interest on technical debt when working on complicated code. It can be quicker to make changes to complicated code if you know what it is doing, because it may support what you want to do quite easily once you have the know-how.

Ultimately, you have to decide how often people will be working on unfamiliar code to get a good measure of the true size of technical debt.

The tipping point for tackling technical debt

This actually means there is a tipping point for deciding when, if at all, to tackle technical debt. Let’s say you seldom work on an area of code, but when you do it is very time consuming. Is the cost of rewriting that code greater than the time spent updating it when you need to? That is a conversation an architect should be having with the business.
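The rewrite-versus-leave-alone question can be framed as a simple break-even calculation. The sketch below uses entirely invented figures (the costs and frequencies are assumptions for illustration, not measurements from any real team):

```python
# Hypothetical break-even comparison for rewriting a rarely-touched module.
# All figures are illustrative assumptions, not measurements.

rewrite_cost_days = 15      # estimated effort to rewrite the module
touches_per_year = 4        # how often someone must change it
cost_per_touch_now = 3.0    # days per change in its current state
cost_per_touch_after = 0.5  # days per change after a rewrite

# Days saved each year by the rewrite, and how long until it pays for itself.
saving_per_year = touches_per_year * (cost_per_touch_now - cost_per_touch_after)
break_even_years = rewrite_cost_days / saving_per_year

print(f"Saving per year: {saving_per_year} days")
print(f"Break-even after {break_even_years:.1f} years")
```

With these numbers the rewrite pays for itself in one and a half years; halve the touch frequency and it takes three. The point is not the arithmetic but that the inputs (how often unfamiliar people touch the code, and how much slower they are there) are exactly the familiarity factors discussed above.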

So far so bleak, but we can also draw some very positive observations.

If a team spends extra effort making highly readable and maintainable code it often gets a productivity boost that offsets this extra effort. At True Clarity we have never seen a team go much beyond break-even in terms of payback on productivity; however, new people get up to speed very rapidly on such code, so we have seen good payback in this respect.

To start to tackle technical debt, evaluating the tipping point (when it becomes valuable to the business to rework “bad” code) is the key. By conceptually dividing your code base into small, modular areas you can determine the tipping point of each area. If you are a developer or architect who genuinely understands what is important to the business, you may have autonomy to choose when to rewrite code. Where you have that autonomy you can make those decisions at a very granular level (on a file or method/function level, for instance).

As an aside, there is an alternative solution to technical debt: refine your project induction process. By this I mean the human element of induction, like coaching and mentoring (i.e. get better at explaining your own software by explaining it often and refining your own understanding). Something documentation can’t replace is learning to communicate your domain expertise to different audiences; after all, not all developers need the same information or process information in the same way. I will discuss documentation in another post; the short story is that there is a tipping point where documentation becomes debt, so it needs care. Once a person is familiar with code, the interest on technical debt diminishes. This may be a solution if your team is volatile and you don’t know where to start on your debt problem.

Take care rewriting technical debt

I will leave you with one caveat to rewriting code, another consequence of the fact that familiarity reduces debt. Often a developer will rewrite something so that it makes more sense to them, but another developer will look at the old version and the new version and see no improvement. The process of rewriting gives you instant familiarity, luring you into a false sense of having “solved” the debt. Unfortunately you have only solved it for yourself; you have not helped the business one bit.

Another instance of this is rewriting something using a fashionable new technology, where people will immediately feel more at home with the code. This is legitimate, as it helps people gain familiarity more quickly. The true goal, however, is to make the code genuinely readable without relying on fashionable coding tools and techniques. This requires a real understanding of the cognitive friction involved in a person reading code and extrapolating the business requirements from it. Ultimately, you should be concerned with how easily someone can relate your abstract model to the real world. I have found this last question leads to a satisfactory definition of debt as a cognitive relationship rather than a technical standpoint.


A Tale of Two Architects

Ever got caught up in a debate about up front architecture versus evolutionary architecture? I know I have, but which is right? I believe it all depends on the business context and the information you have available.

In my mind both approaches are valid, just generally not on the same set of problems. Deciding which architectural approach to take should be considered seriously; if you’re not having a long hard think about which approach to take, you probably should be.

This post hopes to equip you with some basic ideas to have a meaningful conversation about how you should approach your project from an architectural point of view: whether it’s the design-it-before-you-code-it approach, or quickly starting on the implementation and using what you learn to feed back into the direction you should be heading.

I’m going to split the approaches to architecture into the following two broad types:

Architecture Approach A

The first approach to architecture involves a lot of planning and discussion and a general effort to deeply understand the problem and map out the system up front. Typically there will be a person or delegation of people responsible for the architecture as a primary role. A separate team responsible for software implementation will build against the architectural vision. It will generally involve a lot of meetings and documentation.

Architecture Approach B

The second approach is to transfer the responsibility of architecture to the implementation team and allow them to grow the architecture organically based on the continued learning and insight gained during the build and feedback from stakeholders as they see the software evolve.

The Winning Architectural Approach

I believe both approaches will get there in the end with a determined team (so simply delivering a project using an approach doesn’t prove the choice was right, only that you have a good team); however, I think there are clear cases when one approach is better than the other.

The Architect Who Faces Certainty

An architect that faces certainty usually works in a business whose landscape is unlikely to change and with users and customers that have clear expectations and have requirements that are highly predictable. This is a clear choice for Architectural Approach A – the Certain Approach.

The Architect Who Faces Uncertainty

An architect that faces uncertainty will work in a business environment with a volatile market and whimsical users/customers whose needs are unpredictable. This is a clear choice for Architectural Approach B – the Uncertain Approach.

The Future of Architecture

Choosing the right architectural approach really depends on how much information you have about the future. If you have certainty, making all your decisions up front will get your project where it needs to be efficiently as everything can be mapped out. If you do not have certainty, you need an evolutionary approach to architecture based on gathering the information you need for the next set of decisions. Typically information is gathered by implementing part of your architecture and extracting insight from real results.

If the time taken to plan is spiralling out of control, conversations involve a lot of “what if this happened in the future”, and you are hedging your design against uncertainty, you are probably trying to plan up front against uncertainty – a sure sign you need to rethink your approach.

Another way of looking at the two approaches is from a scientific point of view. When trying to model something we understand, it’s easy to come up with a sound model. When modelling something we don’t understand, we must experiment in order to piece the model together.

As computing power continues to increase, and software tools become more powerful, it’s becoming harder and harder to model the systems that our tools are capable of building and predict how users/customers will use them. Companies should be comfortable with both architectural approaches to ensure the best chance of success developing software.

Designing software at the start of a project in a certain world is well understood, and I don’t think there is much more to teach anyone about how to do it well. Evolutionary architecture, by contrast, is a seldom discussed topic in software literature, and there are similar-sounding labels for other ideas. Feeling your way through a project, implementing the right parts at the right time to get the right feedback, is tough: ultimately an art form with much scope for new thought, but already used with great success. I believe it is vital to invest time in exploring the benefits evolutionary architecture can bring to delivering large scale customer facing web platforms, which inherently have many unknowns, without being paralysed by the fear of implementation without certainty.

Whatever you decide, uncertainty will be your guide 🙂

Pairing: how to increase quality and how to halve learning

I have mixed views on pair programming. While I think it’s absolutely great for increasing code quality (used with TDD even more so), I also think it’s a lazy way to share knowledge and can ultimately halve your organisation’s ability to improve its knowledge.

Halving Learning

Pairing is not a good way to transfer knowledge and will not increase the speed at which you learn. Put more bluntly: pair programming is a terrible way to transfer knowledge and a guaranteed way to halve your organisation’s capacity to learn.

I’m huge on learning. I believe that at least 80% of software development is about learning. Great programmers are people that can figure stuff out. Average programmers can’t. Figuring stuff out requires practice. You have to exercise discipline, tenacity, time management, a good balance of creativity and logic, a meticulous eye for detail, the ability for lateral thinking, a thirst for knowledge, and a ruthless drive to improve.

The first time I tackle a problem, most of the time is spent learning. For a typical day of programming on a non-routine problem (and most of ours are non-routine), the solution can usually be rewritten in an hour or less once you know it. Have you ever seen a developer that types constantly? No. Developers are mostly thinking or researching in order to work through their current problem. At most 20% of the time is probably spent coding up the solution. If this isn’t the case, it probably means you’re doing something pretty routine that isn’t problem solving, or you’re trying to program by coincidence.

When I’m in problem solving mode, I would probably learn more slowly if I had to pair program. I learn fastest without distraction, where I can work at my own pace. I need focus to think. I don’t get distracted by having to explain the things I already know won’t work, and I can try out things without debate. This way I learn first hand what works and what doesn’t.

Once I’ve figured out what works, it takes a fraction of the time to summarise to a colleague what worked, what didn’t, what I left untried, and what knowledge I was building upon. Developers can exchange this information quickly. The closest I would suggest people get to pair programming when in problem solving mode is for each person to try to solve the problem independently and compare findings at the end. This has the benefit of avoiding either person being polarized by the other’s idea during the creative problem solving process.

You should recognize when you’re in quality mode and when you’re in learning mode and switch up your style of programming appropriately. This may happen many times a day.

Learning time will usually outweigh your coding time by a factor of five. In order to facilitate the knowledge hand-over you can capture some types of learning as tests, other types as notes or diagrams, and others as clear code that is self explanatory.
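Capturing learning as tests can look something like the sketch below: after a spike into how Python’s standard library parses dates, the findings are pinned down as small characterisation tests so the knowledge survives the hand-over. The scenario is invented for illustration; the behaviours asserted are real stdlib behaviour:

```python
# Characterisation tests capturing what a spike into datetime.strptime taught us.
from datetime import datetime

def test_strptime_accepts_single_digit_day():
    # Learnt: %m and %d tolerate "1" and "7" as well as "01" and "07".
    assert datetime.strptime("2024-1-7", "%Y-%m-%d") == datetime(2024, 1, 7)

def test_strptime_rejects_trailing_text():
    # Learnt: unconverted trailing characters raise ValueError
    # rather than being silently ignored.
    try:
        datetime.strptime("2024-01-07x", "%Y-%m-%d")
        assert False, "expected ValueError"
    except ValueError:
        pass
```

A colleague reading these tests gets the distilled findings in seconds, without sitting through the hours of exploration that produced them.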

First hand learning is the quickest way to deeper understanding, increased self confidence to tackle more complex problems independently, and increased mastery of software development in general.

Work, Rework and Maintenance in Software

I thought it would be worth clarifying the terminology used to express the different types of work, rework and maintenance in software development.


Refactoring

Refactoring is the art of making your code clean and refining the design in small incremental steps towards the SOLID principles of software development. Refactoring should be performed many times a day to ensure code is clean.
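One small incremental step might look like the following invented example (not from any real code base): extracting a well-named function so that a buried business rule becomes explicit and the calling code reads as intent.

```python
# Before: the discount rule is buried inline behind a magic number.
def total_before(order):
    total = sum(line["price"] * line["qty"] for line in order["lines"])
    if total > 100:  # magic number, unexplained
        total *= 0.9
    return total

# After: the discount policy has a name and a home of its own.
BULK_DISCOUNT_THRESHOLD = 100
BULK_DISCOUNT_RATE = 0.9

def apply_bulk_discount(total):
    """Orders over the threshold earn a 10% discount."""
    return total * BULK_DISCOUNT_RATE if total > BULK_DISCOUNT_THRESHOLD else total

def total_after(order):
    subtotal = sum(line["price"] * line["qty"] for line in order["lines"])
    return apply_bulk_discount(subtotal)
```

Behaviour is unchanged; only the design has moved a small step towards single responsibility. Dozens of steps this size, taken daily, are what keep code clean.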


Prototype

In order to learn enough about a problem, a prototype is often built that will be thrown away. If the prototype is not thrown away, it is not a prototype. Prototyping should be used to gain knowledge and understanding of a problem. A prototype will generally have a substandard design because the problem is not well understood until the prototype is finished. That’s why you throw a prototype away and redesign what you have just done with increased knowledge. Don’t continue working on a prototype when there is no more learning taking place. Do not mix user interface prototyping with software design prototyping. Always keep the two separate.

Tracer Bullet

A tracer bullet is an implementation of part of a piece of software. A tracer bullet is used to learn enough about the larger piece of software, usually in order to inform estimates and tackle technical risks early. Tracer bullets are risk management for developers. A tracer bullet is only finished if the code is production ready, tested, and has been kept clean by constant refactoring during development; otherwise it is a prototype and should be thrown away.


Redesign

Redesign is usually undertaken when existing code needs to be drastically remodeled. Redesign is an indication of one or more of the following problems:

  • the code is not clean and does not follow the SOLID principles
  • there was inadequate knowledge during development
  • the project was rushed
  • technical risks were not properly flushed out

Redesign is caused by a lack of refactoring, prototyping and tracer bullets.


Every project should have a healthy mix of prototyping to inform design, tracer bullets to inform estimates and reduce risks and refactoring that should be done regularly throughout the day. Redesign implies that you’ve got an unhealthy mix or are ignoring important steps in the software development cycle.

Also, changes to requirements may precipitate one or more of refactoring, prototyping, tracer bullets, or even redesign – the latter suggesting quite a major change.

Catch 22: Building Great Software

Building great software is hard. It starts with the choices you make and making great choices is really hard. After all, given the choice, we’d all choose to make great software right? So what’s the catch?

Picture a young guy out to buy a car. He really wants a convertible, it really fits with what he wants out of life right now. So he buys a convertible. He meets a girl, settles down, and before you know it, a baby is on the way. Now the convertible isn’t so great. What helped him when he was dating isn’t much help to him now. Now he wants a saloon!

Our current choices often conflict with future circumstances, however, if we focus too heavily on the future, we may actually make decisions which conflict with the present. A decision which is great for the future might be terrible news right now and vice versa. So how do we deal with this conflict? As it turns out, we tend to deal with it rather badly.

Decisions go stale. As business needs change, what’s relevant today is no longer useful in the future. In software development this means each feature will at some point fail to be relevant. Developers can see this as a failure to make the right choice rather than accept that ultimately our decisions will at some point be irrelevant. It’s likely that given such failure there will be the belief that enough learning took place to make a better decision in the future. We completely ignore the fact that at some point those choices will also become stale. Rather than turn our attention to being good at handling change, we repeat the cycle of trying to improve our decision making process on the basis that we’ve learnt enough that it won’t happen again. We naively think that next time our decisions will stand the test of time!

In extreme cases decisions can go stale before we’ve even acted on them. Businesses don’t wait for you to finish, they continue to change and learn. Being mechanically minded, let’s say we decide to build a saloon. This will take some time. While we’re doing this our child could grow up, leave home and we’re left wishing we’d built a convertible.

The conflict between present and future choices creates a problem in defining a successful software project. Success or failure at this level is outside our control: we can’t predict the future so we don’t know how relevant our choices are going to be over time.

In order to handle this conflict, we have to make it a non-issue. We have to be successful at adapting to change. What really matters to our success or failure is our ability or inability to respond quickly to change. When we can’t adapt quickly to changes, such as new or revised requirements, we have ultimately failed to manage the conflict. The good news is, if we understand that we have failed to respond quickly, we can invest time in understanding why that is and learn from this failure in order to improve our software development process to better handle change.

For a software team to be successful at adapting to change, you need to understand that your biggest asset or liability is your code. No decision you make can stop your code from eventually outliving its usefulness; all you can do is ensure you can change it quickly, easily and without risk. To do this you will need clean, loosely coupled, well structured, unit tested code.

Similarly, business stakeholders need to help break the mind-set that features are perpetual. Software is evolutionary and features carry an amount of speculation about current and future value. Rather than telling developers what the business needs (or developers interpreting needs as perpetual), a “let’s try this next” approach to implementing features helps to reinforce the understanding that change comes with the territory.

Focus on handling change well. Your process should be about improving the ability to handle change by understanding and identifying your failures to respond quickly to it. Lose the ego, stop being frightened to admit failure, and maximise the opportunity to learn as much as you can for the next change – it’s definitely coming!

Context Switching

The dreaded context switch. I hear about it all the time. I have to drop what I’m doing several times a day to listen to people discuss it (chuckle). Jokes aside, I’m genuinely interested in the reasons why people want to avoid switching context.

Up until very recently I was convinced context switching was a bad idea. I would also have assumed any books that deal with context switching would justify my belief Q.E.D. style, case closed. I’ve been wondering lately if my thinking is broken and that I’ve been fighting for the wrong cause.

So what is context switching? In my world we are talking about changing from one deep train of thought to another. As a developer it takes me a while to get my head round the code I’m looking at. If I have to break my concentration, clear my registers and push what I was thinking about onto my mental stack for later, it takes time. A familiar example would be working on the code for a project when an urgent problem comes in that requires my attention to be diverted to the codebase for another project. Everything I was holding in my head about the code I’m currently working on gets lost forever as I bring my thoughts to bear on a different problem.

Now I’m sure you might be thinking there are all sorts of time management practices you could apply to handle this. However, I would urge you to think twice about how you tackle context switching. Is it really a good idea to rush into this problem with your productivity hat on and try to optimize the process to avoid context switching? If you do, much simpler and more important questions may go unanswered: are people finding context switching hard? If they are: why is that?

Recently I read two very different books. Although they turned out not to be so different after I’d read them. One was on psychology, the other development.

The book on psychology, “Thinking, Fast and Slow”, discusses how we handle the process of decision making and problem solving. It will come as no surprise that we need to engage a large percentage of our brain to solve problems. The harder the problem, the more time and thought you will need to understand and resolve it. My takeaway is that there are types of work that we struggle with. We’ve all put a book down because it was just too damn complicated. We’ve all read a book where each page was jammed so full of information it took ages to read. I’ve also read similar sized books in a few hours. Those books I’ve read quickly were just… well, less complicated!

The book on software development was “The Clean Coder”. It challenged some of my long held beliefs about good software development. It especially challenged how I think about being in the zone. I love being in the zone. I love being so engrossed in difficult problems that hours seem like minutes. I love holding hundreds of lines of code in my head trying to figure out where it’s going wrong. I’ve woken in the morning knowing how to fix a bug in thousands of lines of assembler I’d memorized. It’s addictive and gratifying. There can be a certain amount of bravado about being better at handling complicated problems than others. Is boasting about fixing really hard code, like totally unreadable assembler, a good thing? Is being so engrossed in a problem that you lose sight of the big picture the best approach?

Both these books were telling me something about complicated work. It can be difficult to get your head round. We don’t want to be disturbed when we’re engrossed in complicated work because it’s difficult to get engrossed. If you like a challenge, it can be a lot of fun. We don’t want to be disturbed when we’re engrossed in complicated work because we’re enjoying ourselves. Whether context switching is time consuming or a killjoy, the fact is: difficult challenging work is time consuming.

If context switching is a problem, you need to understand if the work is the underlying cause.

Maybe you’re thinking: “Duh? Of course software development is hard. We need to focus more and have less distractions to deal with it!”

Maybe you’re thinking: “I kind of get it, but we can’t make the work less complicated, we have no control over the work coming in!”

Maybe you’re thinking: “How can I make the code I have to read all day simpler and clearer?” If you’re thinking about the possibility of making things simpler, you understand.

Book Review: The Clean Coder, by Bob Martin

Bob Martin is well known for his books on software development and design; however, he offers something more akin to ethics than engineering in The Clean Coder. Looking beyond the art of software development as a technical subject matter, Bob Martin expresses the attitude of a good software professional. Indeed the tagline for the book is: a code of conduct for professional programmers. Behind this rather dry title are memorable anecdotes and inspiring messages on how to behave with courage, dignity and credibility within a wholly unregulated and under-defined industry.

This book is a must read for anyone, not just developers, involved in software from the people on the ground to directors. It applies to all roles within the organisation. It’s just as relevant to people marketing software and managing software projects as it is to developers. Not only is the message universal in its guidance, but it also creates empathy between disciplines by considering how they can collaborate and communicate professionally.

If you’ve ever felt disenfranchised as a “techy” from your organisation, the advice in this book will give you a whole new voice by explaining the refining qualities that earn trust and respect. You will be rewarded with a greater sense of purpose within your organisation and a greater sense of belonging to a community of respected professionals.

The first chapter, “Professionalism”, sets out to broadly define the characteristics of a respected professional.

The second and third chapters, “Saying No” and “Saying Yes”, elaborate on the importance of clarity, honesty and courage.

Chapters four and five, “Coding” and “Test Driven Development”, are squarely pitched at developers; however, they do offer good insight into the mind-set of a programmer. For instance, a section I found profoundly interesting was Bob Martin’s exploration of being in “the zone”. He proposes that this is in fact an undesirable state to enter because you start to operate with your blinkers on. Even though you may feel more productive, in the zone you may miss the big picture. If you aren’t a coder, it’s still worth persevering through these chapters, otherwise you will miss some valuable insights.

Chapter six “Practicing” discusses the role of mentoring and the importance of refining your technique as a craftsman. I found this to be more important than a simple message about learning new things as it unravels the subtle difference between a master reinforcing what they know works and the more powerful journey-based approach of an apprentice discovering what works under guidance.

Chapters seven and eight, “Acceptance Testing” and “Testing Strategies”, examine how to ensure requirements are met and who should be involved. Although the focus may appear to be around testing, requirements are a central theme and a collaborative approach across disciplines is well emphasised. I liked the terms “happy path” and “unhappy path” used in the book as they immediately frame two different thought processes to assist in understanding the depth of a feature or requirement.

Chapters nine, ten and eleven, “Time Management”, “Estimation” and “Pressure” present an important and wide ranging set of tools from the meditative to the scientific (including standard deviations) that tackle the most feared and contested subject within your organisation: how long your project will take. Even a seasoned member of the software community should find something new to take away.

In chapters twelve and thirteen, “Collaboration” and “Teams and Projects”, Bob Martin has resisted the temptation to discuss particular methodologies. Instead, he chooses to offer a few nuggets of widely applicable wisdom.

The final chapter, “Mentoring, Apprenticeship, and Craftsmanship”, is about teaching and learning. Bob Martin provides a framework for a merit based approach to progression and advancement.

In summary, I think this is an essential read for anyone currently involved or thinking of getting involved in the software industry and I admire its effort in comprehensively tackling what it means to be a professional programmer.