Introduction, Code Formatting, and Tools
In this chapter, we will explore the first concepts related to clean code, starting with what it is and what it means. The main goal of the chapter is to understand that clean code is not just a nice thing to have or a luxury in software projects. It's a necessity. Without quality code, the project will face the perils of failing due to an accumulation of technical debt (technical debt is something we'll discuss at length later in the chapter, so don't worry if you haven't heard the term before).
Along the same lines, but going into a bit more detail, are the concepts of formatting and documenting the code. These also might sound like superfluous requirements or tasks, but again, we will discover that they play a fundamental role in keeping the code base maintainable and workable.
We will analyze the importance of adopting a good coding guideline for the project. Since keeping the code aligned with a reference standard is a continuous task, we will see how we can get help from automated tools that ease our work. For this reason, we'll discuss how to configure tools that automatically run on the project as part of the build.
The goal of this chapter is to have an idea of what clean code is, why it is important, why formatting and documenting the code are crucial tasks, and how to automate this process. From this, you should acquire a mindset for quickly organizing the structure of a new project, aiming for good code quality.
After reading this chapter, you will have learned the following:
- That clean code really means something far more important than formatting
- That having standard formatting is a key component in a software project for the sake of its maintainability
- How to make the code self-documenting by using the features that Python provides
- How to configure tools to automate static verifications on the code
Introduction
We'll start by understanding what clean code is, and why it is important for a software engineering project to be successful. In the first two sections, we will learn how important it is to maintain good code quality in order to work efficiently.
Then we'll discuss some exceptions to these rules: that is, situations in which it might even be cost-effective to not refactor our code to pay off all its technical debt. After all, we cannot simply expect general rules to apply everywhere, as we know there are exceptions. The important bit here is to properly understand why we would be willing to make an exception and identify these kinds of situations properly. We wouldn't want to mislead ourselves into thinking something shouldn't be improved when in fact it should.
The meaning of clean code
There is no sole or strict definition of clean code. Moreover, there is probably no way of formally measuring clean code, so you cannot run a tool on a repository that will tell you how good, bad, or maintainable that code is. Sure, you can run tools such as checkers, linters, static analyzers, and so on, and those tools are of much help. They are necessary, but not sufficient. Clean code is not something a machine or script can recognize (so far) but rather something that we, as professionals, can decide.
For decades, we thought of programming languages as the means of communicating our ideas to machines so they could run our programs. We were wrong. That is only part of the truth. The real meaning of the "language" part of "programming languages" is to communicate our ideas to other developers.
Here is where the true nature of clean code lies. It depends on other engineers being able to read and maintain the code. Therefore, we, as professionals, are the only ones who can judge this. Think about it; as developers, we spend much more time reading code than actually writing it. Every time we want to make a change or add a new feature, we first have to read all the surroundings of the code we have to modify or extend. The language (Python) is what we use to communicate among ourselves.
So, instead of giving you a definition (or my definition) of clean code, I invite you to go through the book, read all about idiomatic Python, see the difference between good and bad code, identify traits of good code and good architecture, and then come up with your own definition. After reading this book, you will be able to judge and analyze code for yourself, and you will have a clearer understanding of clean code. You will know what it is and what it means, regardless of any definition given to you.
The importance of having clean code
There are a huge number of reasons why clean code is important. Most of them revolve around the ideas of maintainability, reducing technical debt, working effectively with agile development, and managing a successful project.
The first idea I would like to explore is with regard to agile development and continuous delivery. If we want our project to successfully deliver features constantly at a steady and predictable pace, then having a good and maintainable code base is a must.
Imagine you are driving a car on a road towards a destination you want to reach at a certain point in time. You have to estimate your arrival time so that you can tell the person who is waiting for you. If the car works fine, and the road is flat and perfect, then I do not see why you would miss your estimation by a large margin. However, if the road is in poor condition and you have to step out to move rocks out of the way, or avoid cracks, stop to check the engine every few kilometers, then it is very unlikely that you will know for sure when you are going to arrive (or if you will arrive). I think the analogy is clear; the road is the code. If you want to move at a steady, constant, and predictable pace, the code needs to be maintainable and readable. If it is not, every time product management asks for a new feature, you will have to stop to refactor and fix the technical debt.
Technical debt refers to the concept of problems in the software as a result of a compromise or a bad decision being made. It's possible to think about technical debt in two ways. From the present to the past: what if the problems we are currently facing are the result of previously written bad code? And, from the present to the future: if we decide to take a shortcut now, instead of investing time in a proper solution, what problems are we creating for ourselves further down the line?
The word debt is a good choice. It's debt because the code will be harder to change in the future than it would be to change it now. That incurred cost is the interest of the debt. Incurring technical debt means that tomorrow, the code will be harder and more expensive to change (it would even be possible to measure this) than it is today, and even more expensive the day after, and so on.
Every time the team cannot deliver something on time and has to stop to fix and refactor the code, it is paying the price of technical debt.
One could even argue that a team that owns a code base with technical debt is not doing agile software development. Because, what's the opposite of agile? Rigid. If the code is riddled with code smells, then it can't be easily changed, so there's no way the team would be able to quickly react to changes in requirements and deliver continuously.
The worst thing about technical debt is that it represents a long-term and underlying problem. It is not something that raises an alarm. Instead, it is a silent problem, scattered across all parts of the project, that one day, at one particular time, will wake up and become a show-stopper.
In some more alarming cases, "technical debt" is even an understatement, because the problem is much worse. In the previous paragraphs, I referred to scenarios in which technical debt makes things harder for the team in the future, but what if the reality is much more dangerous? Imagine taking a shortcut that leaves the code in a fragile position (one simple example could be a mutable default argument in a function that causes a memory leak, as we'll see in later chapters). You could deploy your code and it would work fine for quite some time (for as long as that defect doesn't manifest itself). But it's actually a crash waiting to happen: one day, when least expected, a certain condition in the code will be met that will cause a runtime problem with the application, like a time-bomb inside the code that at a random time goes off.
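To make this concrete, here is a minimal sketch of the mutable default argument pitfall mentioned above (the function and names are hypothetical; the full discussion comes in later chapters):

def add_ticket(ticket, queue=[]):  # the default list is created once and shared across calls
    queue.append(ticket)
    return queue

add_ticket("reboot server")  # ['reboot server']
add_ticket("rotate logs")    # ['reboot server', 'rotate logs'] -- state leaked from the previous call

Nothing fails when this is deployed; the surprising behavior (and the ever-growing list) only shows up after the function has been called more than once with the default.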
We clearly would like to avoid scenarios like the aforementioned one. Not everything can be caught by automated tools, but whenever it's possible, it's a good investment. The rest relies on good, thorough code reviews, and good automated testing.
Software is only useful to the degree to which it can be easily changed. Think about it. We create software to respond to some needs (whether it is purchasing a plane ticket, shopping online, or listening to music, just to name a few examples). These requirements are rarely frozen, meaning the software will have to be updated as soon as something in the context that led to that software being written in the first place changes. If the code can't be changed (and we know reality does change), then it's useless. Having a clean code base is an absolute requirement for it to be modified, hence the importance of clean code.
Some exceptions
In the previous section, we explored the critical role a clean code base plays in the success of a software project. That said, remember that this is a book for practitioners, so a pragmatic reader might rightfully raise the question: "Are there legitimate exceptions to this?"
And of course, this wouldn't be a truly pragmatic book if it didn't allow the reader to challenge some of its assumptions.
Indeed, there are some cases in which you might want to think of relaxing some of the constraints of having a pristine code base. What follows is a list (by no means exhaustive) of situations that might justify skipping some of the quality checks:
- Hackathons
- If you're writing a simple script for a one-off task
- Code competitions
- When developing a proof of concept
- When developing a prototype (as long as you make sure it's indeed a prototype that will be thrown away)
- When you're working with a legacy project that will be deprecated, and it's only in maintenance mode for a fixed, short-lived period of time (and again, provided this is assured)
In these cases, common sense applies. For example, if you just arrived at a project that will be live only for the next few months until it gets decommissioned, then it's probably not worth going through all the trouble of fixing all of its inherited technical debt, and waiting for it to be archived might be a better option.
Notice what these examples all have in common: they assume that code that can afford not to be written to good quality standards is also code that we will never have to look at again. This is coherent with what was previously exposed and can be thought of as the counter-proposal of our original premise: that we write clean code because we want to achieve high maintainability. If there's no need to maintain that code, then we can skip the effort of maintaining high-quality standards on it.
Remember that we write clean code so we can maintain a project. That means being able to modify that code ourselves in the future, or, if we're transitioning the ownership of that code to another team in the company, making this transition (and the lives of the future maintainers) easier. That means that if a project is in maintenance mode only, but it's not going to be deprecated, then it might still be a good investment to pay off its technical debt. This is because at some point (and usually when least expected), there will be a bug that will have to be fixed, and it would be beneficial for the code to be as readable as possible.
Code formatting
Is clean code only about formatting and structuring the code? The short answer is no.
There are some coding standards like PEP-8 (https://www.python.org/dev/peps/pep-0008/) that state how the code should be written and formatted. In Python, PEP-8 is the most well-known standard, and that document provides guidelines on how we should write our programs, in terms of spacing, naming convention, line length, and more.
However, clean code is something else that goes far beyond coding standards, formatting, linting tools, and other checks regarding the layout of the code. Clean code is about achieving quality software and building a system that is robust and maintainable. A piece of code or an entire software component can be 100% compliant with PEP-8 (or any other guideline) and still not satisfy these requirements.
Even though formatting is not our main goal, not paying attention to the code structure has some perils. For this reason, we will first analyze the problems with a bad code structure and how to address them. After that, we will see how to configure and use tools for Python projects to automatically check the most common problems.
To sum this up, we can say that clean code has nothing to do with things like PEP-8 or coding styles. It goes way beyond that, and it's something more meaningful to the maintainability of the code and the quality of the software. However, as we will see, formatting code correctly is important to work efficiently.
Adhering to a coding style guide on your project
A coding guideline is the bare minimum a project should have to be considered as being developed under quality standards. In this section, we will explore the reasons behind this. In the following sections, we will start looking at ways to enforce this automatically by using tools.
The first thing that comes to my mind when I try to find good traits in a code layout is consistency. I would expect the code to be consistently structured so that it is easy to read and follow. If the code is neither correct nor consistently structured, and everyone on the team is doing things in their own way, then we will end up with code that requires extra effort and concentration to be understood. It will be error-prone, misleading, and bugs or subtleties might slip through easily.
We want to avoid that. What we want is exactly the opposite of that—code that we can read and understand as quickly as possible at a single glance.
If all members of the development team agree on a standardized way of structuring the code, the resulting code will look much more familiar. As a result of that, you will quickly identify patterns (more about this in a second), and with these patterns in mind, it will be much easier to understand things and detect errors. For example, when something is amiss, you will notice that, somehow, there is something odd in the patterns you are used to seeing, which will catch your attention. You will take a closer look, and you will more than likely spot the mistake!
As stated in the classic book Code Complete, an interesting analysis of this was done in the paper titled Perception in Chess (1973), where an experiment was conducted to identify how different people understand or memorize different chess positions. The experiment was conducted on players of all levels (novices, intermediates, and chess masters), and with different chess positions on the board. They found that when the positions were random, the novices did about as well as the chess masters; it was just a memorization exercise that anyone could do at roughly the same level. When the positions followed a logical sequence that might occur in a real game (again, consistency, adhering to a pattern), the chess masters performed far better than the rest.
Now imagine this same situation applied to software. We, as software engineering experts in Python, are like the chess masters in the previous example. When the code is structured randomly, without following any logic or adhering to any standard, it is as difficult for us to spot mistakes as it is for a novice developer. On the other hand, if we are used to reading code in a structured fashion, and we have learned to get ideas quickly from code by following patterns, then we are at a considerable advantage.
In particular, for Python, the coding style you should follow is PEP-8. You can extend it or adapt some of its parts to the particularities of the project you are working on (for example, the length of the line, the conventions about strings, and so on).
If you realize the project you're working on doesn't adhere to any coding standard, push for the adoption of PEP-8 in that code base. Ideally, there should be a written document for the company or team you're working in that explains the coding standard that's expected to be followed. These coding guidelines can be an adaptation of PEP-8.
Tip
If you notice that your team is not aligned on code style, and there are repeated discussions about this during code reviews, it's probably a good idea to revisit the coding guidelines and invest in automatic verification tools.
In particular, PEP-8 touches on some important points for quality traits that you don't want to miss in your project; some of them are:
- Searchability: This refers to the ability to identify tokens in the code at a glance; that is, to search in certain files (and in which part of those files) for the particular string we are looking for. One key point of PEP-8 is that it differentiates the way of writing the assignment of values to variables from keyword arguments being passed to functions. To see this better, let's use an example. Let's say we are debugging, and we need to find where the value of a parameter named location is being passed. We can run the following grep command, and the result will tell us the file and the line we are looking for:

  $ grep -nr "location=" .
  ./core.py:13: location=current_location,

  Now, we want to know where this variable is being assigned this value, and the following command will also give us the information we are looking for:

  $ grep -nr "location =" .
  ./core.py:10: current_location = get_location()

  PEP-8 establishes the convention that, when passing arguments by keyword to a function, we don't use spaces around the =, but we do when we assign values to variables. For that reason, we can adapt our search criteria (no spaces around the = in the first example, and one space in the second) and be more efficient in our search. That is one of the advantages of following a convention.
- Consistency: If the code has a uniform format, reading it will be much easier. This is particularly important for onboarding, if you want to welcome new developers to your project, or even hire new (and probably less experienced) programmers on your team, and they need to become familiar with the code (which might even consist of several repositories). It will make their lives much easier if the code layout, documentation, naming conventions, and so on are identical across all the files they open, in all repositories.
- Better error handling: One of the suggestions made in PEP-8 is to limit the amount of code inside a try/except block to the minimum possible. This reduces the error surface, in the sense that it reduces the likelihood of accidentally swallowing an exception and masking a bug. This is, arguably, hard to enforce by automatic checks, but nonetheless something worth keeping an eye on while performing code reviews.
- Code quality: By looking at the code in a structured fashion, you will become more proficient at understanding it at a glance (again, like in Perception in Chess), and you will spot bugs and mistakes more easily. In addition to that, tools that check the quality of the code will also hint at potential bugs. Static analysis of the code might help to reduce the ratio of bugs per line of code.
As I mentioned in the introduction, formatting is a necessary part of clean code, but it doesn't end there. There are more considerations to take into account, such as documenting design decisions in the code and using tools to leverage automatic quality checks as much as possible. In the next section, we start with the first one.
Documentation
This section is about documenting code in Python, from within the code. Good code is self-explanatory but is also well-documented. It is a good idea to explain what it is supposed to do (not how).
One important distinction: documenting code is not the same as adding comments to it. This section intends to explore docstrings and annotations because they're the tools in Python used to document code. That said, parenthetically, I will briefly touch on the subject of code comments, just to establish some points that will make a clearer distinction.
Code documentation is important in Python, because being dynamically typed, it might be easy to get lost in the values of variables or objects across functions and methods. For this reason, stating this information will make it easier for future readers of the code.
There is another reason that specifically relates to annotations. They can also help in running some automatic checks, such as type hinting, through tools such as mypy (http://mypy-lang.org/) or pytype (https://google.github.io/pytype/). We will find that, in the end, adding annotations pays off.
Code comments
As a general rule, we should aim to have as few code comments as possible. That is because our code should be self-documenting. This means that if we make an effort to use the right abstractions (like dividing the responsibilities in the code throughout meaningful functions or objects), and we name things clearly, then comments shouldn't be needed.
Tip
Before writing a comment, try to see if you can express the same meaning using only code (that is, by adding a new function, or using better variable names).
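For instance, a small sketch of what the tip suggests (the names and the business rule here are purely hypothetical):

# Instead of a comment that explains the intent of a condition...
if user.age >= 18 and user.country in ALLOWED_COUNTRIES:  # user can sign the contract
    ...

# ...express that intent directly in code:
def can_sign_contract(user) -> bool:
    return user.age >= 18 and user.country in ALLOWED_COUNTRIES

if can_sign_contract(user):
    ...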
The opinion stated in this book about comments agrees pretty much with the rest of the literature on software engineering: comments in code are a symptom of our inability to express our code correctly.
However, in some cases, it's impossible to avoid adding a comment in code, and not doing so would be dangerous. This is typically the case when something in the code must be done for a particular technical nuance that's not trivial at first glance (for example, if there's a bug in an underlying external function and we need to pass a special parameter to circumvent the issue). In that case, our mission is to be as concise as possible and explain in the best possible way what the problem is, and why we're taking this specific path in the code so that the reader can understand the situation.
Lastly, there's another kind of comment in code that is definitely bad, and there's just no way to justify it: commented out code. This code must be deleted mercilessly. Remember that code is a communication language among developers and is the ultimate expression of the design. Code is knowledge. Commented out code brings chaos (and most likely contradictions) that will pollute that knowledge.
There's just no good reason, especially now, with modern version control systems, to leave commented out code that can be simply deleted (or stashed elsewhere).
To sum up: code comments are evil. Sometimes a necessary evil, but nonetheless something we should try to avoid as much as possible. Documentation on code, on the other hand, is something different. That refers to documenting the design or architecture within the code itself, to make it clear, and that's a positive force (and also the topic of the next section, in which we discuss docstrings).
Docstrings
In simple terms, we can say that docstrings are documentation embedded in the source code. A docstring is basically a literal string, placed somewhere in the code to document that part of the logic.
Notice the emphasis on the word documentation. This is important because it's meant to represent explanation, not justification. Docstrings are not comments; they are documentation.
Docstrings are intended to provide documentation for a particular component (a module, class, method, or function) in the code that will be useful to other developers. The idea is that when other engineers want to use the component you're writing, they'll most likely take a look at the docstring to understand how it's supposed to work, what the expected inputs and outputs are, and so on. For this reason, it is a good practice to add docstrings whenever possible.
Docstrings are also useful to document design and architecture decisions. It's probably a good idea to add a docstring to the most important Python modules, functions, and classes in order to hint to the reader how that component fits in the overall architecture.
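For example, a module-level docstring along these lines (a hypothetical module, used only for illustration) can record an architectural decision right where the code lives:

"""Billing gateway integration.

This module isolates every call to the external payment provider, so the rest
of the code base depends only on the functions exposed here and never on the
provider's SDK directly.
"""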
The reason they are a good thing to have in code (or maybe even required, depending on the standards of your project) is that Python is dynamically typed. This means that, for example, a function can take anything as the value for any of its parameters. Python will not enforce, nor check, anything like this. So, imagine that you find a function in the code that you know you will have to modify. You are even lucky enough that the function has a descriptive name, and that its parameters do as well. It might still not be quite clear what types you should pass to it. Even if this is the case, how are they expected to be used?
Here is where a good docstring might be of help. Documenting the expected input and output of a function is a good practice that will help the readers of that function understand how it is supposed to work.
To run the following code, you'll need an IPython (https://ipython.org/) interactive shell with the version of Python set according to the requirements of this book. If you don't have an IPython shell, you can still run the same commands in a normal Python shell, by replacing <function>?? with help(<function>).
Consider this good example from the standard library:

>>> dict.update??
Docstring:
D.update([E, ]**F) -> None.  Update D from dict/iterable E and F.
If E is present and has a .keys() method, then does:  for k in E: D[k] = E[k]
If E is present and lacks a .keys() method, then does:  for k, v in E: D[k] = v
In either case, this is followed by: for k in F:  D[k] = F[k]
Type:      method_descriptor
Here, the docstring for the update method on dictionaries gives us useful information, telling us that we can use it in different ways:
- We can pass something with a .keys() method (for example, another dictionary), and it will update the original dictionary with the keys from the object passed by parameter:

  >>> d = {}
  >>> d.update({1: "one", 2: "two"})
  >>> d
  {1: 'one', 2: 'two'}

- We can pass an iterable of pairs of keys and values, and update will unpack them:

  >>> d.update([(3, "three"), (4, "four")])
  >>> d
  {1: 'one', 2: 'two', 3: 'three', 4: 'four'}

- It also tells us that we can update the dictionary with values taken from keyword arguments:

  >>> d.update(five=5)
  >>> d
  {1: 'one', 2: 'two', 3: 'three', 4: 'four', 'five': 5}

(Note that in this form, the keyword arguments are strings, so we cannot set something in the form 5="five", as it'd be incorrect.)
This information is crucial for someone who wants to learn and understand how a new function works, and how they can take advantage of it.
Notice that in the first example, we obtained the docstring of the method by using the double question mark on it (dict.update??). This is a feature of the IPython interactive interpreter (https://ipython.org/). When this is called, it will print the docstring of the object you are inspecting. Now, imagine that in the same way we obtained help from this function of the standard library; how much easier could you make the lives of your readers (the users of your code) if you place docstrings on the functions you write, so that others can understand their workings in the same way?
The docstring is not something separated or isolated from the code. It becomes part of the code, and you can access it. When an object has a docstring defined, this becomes part of it via its __doc__ attribute:

>>> def my_function():
...     """Run some computation"""
...     return None
...
>>> my_function.__doc__  # or help(my_function)
'Run some computation'
This means that it is even possible to access it at runtime, and even to generate or compile documentation from the source code. In fact, there are tools for that. If you run Sphinx, it will create the basic scaffold for the documentation of your project. With the autodoc extension (sphinx.ext.autodoc) in particular, the tool will take the docstrings from the code and place them in the pages that document the function.
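As a minimal sketch (assuming the scaffold created by sphinx-quickstart), enabling this usually amounts to adding the extension in Sphinx's conf.py:

# conf.py (generated by sphinx-quickstart)
extensions = [
    "sphinx.ext.autodoc",  # pull the docstrings from the code into the generated pages
]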
Once you have the tools in place to build the documentation, make it public so that it becomes part of the project itself. For open source projects, you can use Read the Docs (https://readthedocs.org/), which will generate the documentation automatically per branch or version (configurable). For companies or projects, you can use the same tools or configure these services on-premise, but regardless of this decision, the important part is that the documentation should be ready and available to all members of the team.
There is, unfortunately, one downside to docstrings, and it is that, as happens with all documentation, it requires manual and constant maintenance. As the code changes, it will have to be updated. Another problem is that for docstrings to be really useful, they have to be detailed, which requires multiple lines. Taking into account these two considerations, if the function you're writing is really simple, and self-explanatory, it's probably better to avoid adding a redundant docstring that will require maintenance later on.
Maintaining proper documentation is a software engineering challenge that we cannot escape from. It also makes sense for it to be like this. If you think about it, the reason for documentation to be manually written is because it is intended to be read by other humans. If it were automated, it would probably not be of much use. For the documentation to be of any value, everyone on the team must agree that it is something that requires manual intervention, hence the effort required. The key is to understand that software is not just about code. The documentation that comes with it is also part of the deliverable. Therefore, when someone is making a change on a function, it is equally important to also update the corresponding part of the documentation to the code that was just changed, regardless of whether it's a wiki, a user manual, a README file, or several docstrings.
Annotations
PEP-3107 introduced the concept of annotations. The basic idea of them is to hint to the readers of the code about what to expect as values of arguments in functions. The use of the word hint is not casual; annotations enable type hinting, which we will discuss later on in this chapter, after the first introduction to annotations.
Annotations let you specify the expected type of some variables that have been defined. It is actually not only about the types, but any kind of metadata that can help you get a better idea of what that variable actually represents.
Consider the following example:
from dataclasses import dataclass

@dataclass
class Point:
    lat: float
    long: float

def locate(latitude: float, longitude: float) -> Point:
    """Find an object in the map by its coordinates"""
Here, we use float to indicate the expected types of latitude and longitude. This is merely informative for the reader of the function so that they can get an idea of these expected types. Python will not check these types nor enforce them.
We can also specify the expected type of the returned value of the function. In this case, Point is a user-defined class, so it will mean that whatever is returned will be an instance of Point.
However, types or built-ins are not the only kind of thing we can use as annotations. Basically, everything that is valid in the scope of the current Python interpreter could be placed there. For example, a string explaining the intention of the variable, a callable to be used as a callback or validation function, and so on.
We can leverage annotations to make our code more expressive. Consider the following example for a function that is supposed to launch a task, but that also accepts a parameter to defer the execution:
def launch_task(delay_in_seconds):
    ...
Here, the name of the argument delay_in_seconds seems quite verbose, but despite that, it still doesn't provide much information. What constitutes acceptable values for the number of seconds? Does it consider fractions?
How about we answer those questions in the code?
Seconds = float

def launch_task(delay: Seconds):
    ...
Now the code speaks for itself. Moreover, we can argue that with the introduction of the Seconds annotation, we have created a small abstraction around how we interpret time in our code, and we can reuse this abstraction in more parts of our code base. If we later decide to change the underlying abstraction for seconds (let's say that, from now on, only integers are allowed), we can make that change in a single place.
With the introduction of annotations, a new special attribute is also included: __annotations__. This gives us access to a dictionary that maps the names of the annotations (as keys in the dictionary) to their corresponding values, which are those we have defined for them. In our example, this will look like the following:

>>> locate.__annotations__
{'latitude': <class 'float'>, 'longitude': <class 'float'>, 'return': <class 'Point'>}
We could use this to generate documentation, run validations, or enforce checks in our code if we think we have to.
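As a minimal sketch of that last idea (a hypothetical helper, not a replacement for a real type checker), the annotations of the locate function defined above could drive a naive runtime validation:

def validate_arguments(func, **kwargs):
    """Naively check keyword arguments against the function's annotations."""
    for name, value in kwargs.items():
        expected = func.__annotations__.get(name)
        if expected is not None and not isinstance(value, expected):
            raise TypeError(f"{name} must be {expected.__name__}, got {type(value).__name__}")
    return func(**kwargs)

validate_arguments(locate, latitude=1.0, longitude=2.0)      # OK
validate_arguments(locate, latitude="north", longitude=2.0)  # raises TypeError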
Speaking of checking the code through annotations, this is when PEP-484 comes into play. This PEP specifies the basics of type hinting; the idea of checking the types of our functions via annotations. Just to be clear again, and quoting PEP-484 itself:
"Python will remain a dynamically typed language, and the authors have no desire to ever make type hints mandatory, even by convention."
The idea of type hinting is to have extra tools (independent of the interpreter) to check the correct use of types throughout the code and to hint to the user if any incompatibilities are detected. There are useful tools that run checks around the data types and how they're used in our code, in order to find potential problems. Some example tools, such as mypy and pytype, are explained in more detail in the Tooling section, where we will talk about using and configuring the tools for the project. For now, you can think of them as a sort of linter that checks the semantics of the types used in code. For this reason, it is a good idea to configure mypy or pytype on the project and use it at the same level as the rest of the tools for static analysis.
However, type hinting means more than just a tool for checking the types in our code. Following up from our previous example, we can create meaningful names and abstractions for types in our code. Consider the following case for a function that processes a list of clients. In its simplest form, it can be annotated just using a generic list:
def process_clients(clients: list):
    ...
We can add a bit more detail if we know that in our current modeling of the data, clients are represented as tuples of integers and text:
def process_clients(clients: list[tuple[int, str]]):
    ...
But that still doesn't give us enough information, so it's better to be explicit and have a name for that alias, so we don't have to infer what that type means:
from typing import Tuple

Client = Tuple[int, str]

def process_clients(clients: list[Client]):
    ...
In this case, the meaning is clearer, and it supports evolving datatypes. Perhaps a tuple is the minimal data structure that fits the problem to represent a client correctly, but later on, we will want to change it for another object or create a specific class. And in this case, the annotation will remain correct, and so will all other type verifications.
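For instance (a sketch, using a NamedTuple as one possible evolution), the Client alias can later become a proper class while the signature of process_clients stays exactly the same:

from typing import NamedTuple

class Client(NamedTuple):
    id: int
    name: str

def process_clients(clients: list[Client]):
    ...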
The basic idea behind this is that now the semantics extend to more meaningful concepts, making it even easier for us (humans) to understand what the code means, or what is expected at a given point.
There is an added benefit that annotations bring. With the introduction of PEP-526 and PEP-557, there is a convenient way of writing classes in a compact way and defining small container objects. The idea is to just declare attributes in a class and use annotations to set their types, and with the help of the @dataclass decorator, they will be handled as instance attributes without having to explicitly declare them in the __init__ method and set values to them:
from dataclasses import dataclass

@dataclass
class Point:
    lat: float
    long: float

>>> Point.__annotations__
{'lat': <class 'float'>, 'long': <class 'float'>}
>>> Point(1, 2)
Point(lat=1, long=2)
Later in the book, we'll explore other important uses of annotations, more related to the design of the code. When we explore good practices for object-oriented design, we might want to use concepts like dependency injection, in which we design our code to depend on interfaces that declare a contract. And probably the best way to declare that code relies on a particular interface is to make use of annotations. More to the point, there are tools that specifically make use of Python annotations to automatically provide support for dependency injection.
In design patterns, we usually also want to decouple parts of our code from specific implementations and rely on abstract interfaces or contracts, to make our code more flexible and extensible. In addition, design patterns usually solve problems by creating the proper abstractions needed (which usually means having new classes that encapsulate part of the logic). In both these scenarios, annotating our code will be of extra help.
Do annotations replace docstrings?
This is a valid question, since in older versions of Python, long before annotations were introduced, the way to document the types of the parameters of functions or attributes was to put docstrings on them. There are even some conventions for formats on how to structure docstrings to include the basic information for a function, including types and the meaning of each parameter, the return value, and possible exceptions that the function might raise.
Most of this has been addressed already in a more compact way by means of annotations, so one might wonder if it is really worth having docstrings as well. The answer is yes, and this is because they complement each other.
It is true that a part of the information previously contained in the docstring can now be moved to the annotations (there's no longer the need to indicate the types of the parameters in the docstrings as we can use annotations). But this should only leave more room for better documentation on the docstring. In particular, for dynamic and nested data types, it is always a good idea to provide examples of the expected data so that we can get a better idea of what we are dealing with.
Consider the following example. Let's say we have a function that expects a dictionary to validate some data:
def data_from_response(response: dict) -> dict:
    if response["status"] != 200:
        raise ValueError
    return {"data": response["payload"]}
Here, we can see a function that takes a dictionary and returns another dictionary. Potentially, it could raise an exception if the value under the key "status" is not the expected one. However, we do not have much more information about it. For example, what does a correct instance of a response object look like? What would an instance of the result look like? To answer both of these questions, it would be a good idea to document examples of the data that is expected to be passed in as a parameter and returned by this function.
Let's see if we can explain this better with the help of a docstring:
def data_from_response(response: dict) -> dict:
    """If the response is OK, return its payload.

    - response: A dict like::

        {
            "status": 200,  # <int>
            "timestamp": "....",  # ISO format string of the current date time
            "payload": { ... }  # dict with the returned data
        }

    - Returns a dictionary like::

        {"data": { .. }}

    - Raises:
        - ValueError if the HTTP status is != 200
    """
    if response["status"] != 200:
        raise ValueError
    return {"data": response["payload"]}
Now, we have a better idea of what is expected to be received and returned by this function. The documentation serves as valuable input, not only for understanding and getting an idea of what is being passed around but also as a valuable source for unit tests. We can derive data like this to use as input, and we know what would be the correct and incorrect values to use on the tests. Actually, the tests also work as actionable documentation for our code, but this will be explained in more detail later on in the book.
The benefit is that now we know what the possible values of the keys are, as well as their types, and we have a more concrete interpretation of what the data looks like. The cost is that, as we mentioned earlier, it takes up a lot of lines, and it needs to be verbose and detailed to be effective.
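As a side note, the example data documented above translates almost directly into unit tests; a minimal sketch, assuming pytest as the test runner:

import pytest
# assuming data_from_response is importable from the module under test

def test_data_from_response_returns_payload():
    response = {
        "status": 200,
        "timestamp": "2023-01-01T00:30:00+00:00",
        "payload": {"user": "clean_coder"},
    }
    assert data_from_response(response) == {"data": {"user": "clean_coder"}}

def test_data_from_response_rejects_bad_status():
    with pytest.raises(ValueError):
        data_from_response({"status": 500, "payload": {}})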
Tooling
In this section, we will explore how to configure some basic tools and automatically run checks on the code, with the goal of delegating part of the repetitive verification work to them.
This is an important point: remember that code is for us, people, to understand, so only we can determine what is good or bad code. We should invest time in code reviews, thinking about what is good code, and how readable and understandable it is. When looking at the code written by a peer, you should ask such questions as:
- Is this code easy for a fellow programmer to understand and follow?
- Does it speak in terms of the domain of the problem?
- Would a new person joining the team be able to understand it, and work with it effectively?
As we saw previously, code formatting, consistent layout, and proper indentation are required but not sufficient traits to have in a code base. Moreover, these are things that we, as engineers with a high sense of quality, would take for granted, so we would read and write code far beyond the basic concepts of its layout. Therefore, we are not willing to waste time reviewing these kinds of items, so we can invest our time more effectively by looking at actual patterns in the code in order to understand its true meaning and provide valuable results.
All of these checks should be automated. They should be part of the tests or checklist, and this, in turn, should be part of the continuous integration build. If these checks do not pass, make the build fail. This is the only way to actually ensure the continuity of the structure of the code at all times. It also serves as an objective parameter for the team to have as a reference. Instead of having some engineers or the leader of the team always having to point out the same comments about PEP-8 on code reviews, the build will automatically fail, making it something objective.
The tools presented in this section will give you an idea of checks you could automatically perform on the code. These tools should enforce some standards. Generally, they're configurable, and it would be perfectly fine for each repository to have its own configuration.
The idea of using tools is to have a repeatable and automatic way of running certain checks. That means that every engineer should be able to run the tools on their local development environment and reach the same results as any other member of the team. And also, that these tools should be configured as part of the Continuous Integration (CI) build.
Checking type consistency
Type consistency is one of the main things we would like to check automatically. Python is dynamically typed, but we can still add type annotations to hint to the readers (and tools) about what to expect in different parts of the code. Even though annotations are optional, as we have seen, adding them is a good idea not only because it makes the code more readable, but also because we can then use annotations along with some tooling to automatically check for some common errors that are most likely bugs.
Since type hinting was introduced in Python, many tools for checking type consistency have been developed. In this section, we'll take a look at two of them: mypy (https://github.com/python/mypy) and pytype (https://github.com/google/pytype). There are multiple tools, and you might even choose to use a different one, but in general, the same principles apply regardless of the specific tool: the important part is to have an automatic way of validating changes, and adding these validations as part of the CI build.
mypy is the main tool for optional static type checking in Python. The idea is that, once you install it, it will analyze all of the files in your project, checking for inconsistencies in the use of types. This is useful since, most of the time, it will detect actual bugs early, but sometimes it can give false positives.

You can install it with pip, and it is recommended to include it as a dependency for the project in the setup file:
$ pip install mypy
Once it is installed in the virtual environment, you just have to run mypy over the project's files and it will report all of the findings from the type checks. Try to adhere to its report as much as possible because, most of the time, the insights it provides help to avoid errors that might otherwise slip into production. However, the tool is not perfect, so if you think it is reporting a false positive, you can ignore that line with the following marker as a comment:
type_to_ignore = "something" # type: ignore
It's important to note that for this or any tool to be useful, we have to be careful with the type annotations we declare in the code. If we're too generic with the types set, we might miss some cases in which the tool could report legitimate problems.
In the following example, there's a function that is intended to receive a parameter to be iterated over. Originally, any iterable would work, so we want to take advantage of Python's dynamic typing capabilities and allow a function that can be passed lists, tuples, keys of dictionaries, sets, or pretty much anything that supports a for loop:
import logging
from collections.abc import Iterable  # imports and logger added so the snippet is self-contained

logger = logging.getLogger(__name__)

def broadcast_notification(
    message: str,
    relevant_user_emails: Iterable[str]
):
    for email in relevant_user_emails:
        logger.info("Sending %r to %r", message, email)
The problem is that if some part of the code passes a plain string by mistake, mypy won't report an error:
broadcast_notification("welcome", "user1@domain.com")
And of course, this is not valid because the function will iterate over every character in the string and try to use each one as an email.
If, instead, we're more restrictive with the types set for that parameter (let's say accepting only lists or tuples of strings), then running mypy does identify this erroneous scenario:

$ mypy <file-name>
error: Argument 2 to "broadcast_notification" has incompatible type "str"; expected "Union[List[str], Tuple[str]]"
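For reference, the stricter signature that this error message assumes could look like the following (a sketch; the Union mirrors the expected type reported by mypy, and logger is the same one from the earlier snippet):

from typing import List, Tuple, Union

def broadcast_notification(
    message: str,
    relevant_user_emails: Union[List[str], Tuple[str]],
):
    for email in relevant_user_emails:
        logger.info("Sending %r to %r", message, email)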
Similarly, pytype is also configurable and works in a similar fashion, so you can adapt both tools to the specific context of your project. We can see how the error reported by this tool is very similar to the previous case:

File "...", line 22, in <module>: Function broadcast_notification was called with the wrong arguments [wrong-arg-types]
         Expected: (message, relevant_user_emails: Union[List[str], Tuple[str]])
  Actually passed: (message, relevant_user_emails: str)
One key difference that pytype has, though, is that it won't just check the definitions against the arguments; it will try to interpret whether the code at runtime will be correct, and report what would be runtime errors. For example, if one of the type definitions is temporarily violated, this won't be considered an issue as long as the end result complies with the type that was declared. While this is a nice trait, in general, I would recommend that you try not to break the invariants you set in the code, and avoid intermediate invalid states as much as possible, because that will make your code easier to reason about and rely on fewer side effects.
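To illustrate what a temporarily violated type definition might look like, consider this sketch (a hypothetical snippet; how a given version of mypy or pytype treats the intermediate assignment depends on the tool and its configuration):

def get_user_ids() -> list[int]:
    user_ids = None        # the declared return type is momentarily violated here
    user_ids = [1, 2, 3]   # ...but the value that ends up being returned complies with it
    return user_ids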
Generic validations in code
Besides using tools like the ones introduced in the previous section to check for errors in the type management of our program, we can use other tools that provide validations against a wider range of parameters.
There are many tools for checking the structure of the code (basically, compliance with PEP-8) in Python, such as pycodestyle (formerly known as pep8 on PyPI), flake8, and many more. They are all configurable and are as easy to use as running the command they provide.
These tools are programs that run over a set of Python files, and check the compliance of the code against the PEP-8 standard, reporting every line that is in violation and the indicative error of the rule that got broken.
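For example, running flake8 over a source tree produces a report in a file:line:column format (the file name and findings below are illustrative only):

$ flake8 src/
src/core.py:13:80: E501 line too long (88 > 79 characters)
src/core.py:21:1: E302 expected 2 blank lines, found 1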
There are other tools that provide more complete checks so that instead of just validating the compliance with PEP-8, they also include extra checks for more complicated situations that exceed PEP-8 (remember, code can still be utterly compliant with PEP-8 and still not be of good quality).
For example, PEP-8 is mostly about styling and structuring our code, but it doesn't enforce putting a docstring on every public method, class, or module. It also doesn't say anything about a function that takes too many parameters (something we'll identify as a bad trait later on in the book).
One example of such a tool is pylint. This is one of the most complete and strict tools there is to validate Python projects, and it's also configurable. As before, to use it, you just have to install it in the virtual environment with pip:
$ pip install pylint
Then, just running the pylint command is enough to check the code.
It is possible to configure pylint via a configuration file named pylintrc. In this file, you can decide the rules you would like to enable or disable, and parametrize others (for example, to change the maximum line length). For example, as we have just discussed, we might not want every single function to have a docstring, as forcing this might be counterproductive. By default, pylint will impose this restriction, but we can overrule it in the configuration file by declaring it:

[DESIGN]
disable=missing-function-docstring
Once this configuration file has reached a stable state (meaning that it is aligned with the coding guidelines and doesn't require much further tuning), then it can be copied to the rest of the repositories, where it should also be under version control.
Tip
Document the coding standards agreed by the development team, and then enforce them in configuration files for the tools that will run automatically in the repository.
Finally, there's another tool I would like to mention, and that is Coala (https://github.com/coala/coala). Coala is a bit more generic (meaning it supports multiple languages, not just Python), but the idea is similar to the one before: it takes a configuration file and then presents a command-line tool that will run some checks on the code. When running, if the tool detects some errors while scanning the files, it might prompt the user about them and will suggest automatically applying a fixing patch, when applicable.
But what if I have a use case that's not covered by the default rules of the tools? Both pylint and Coala come with lots of predefined rules that cover the most common scenarios, but you might still detect some pattern in your organization that has been found to lead to errors.
If you detect a recurrent pattern in the code that is error-prone, I suggest investing some time in defining your own rules. Both of these tools are extensible: in the case of pylint, there are multiple plugins available, and you can write your own. In the case of Coala, you can write your own validation modules to run right alongside the regular checks.
Automatic formatting
As mentioned at the beginning of the chapter, it would be wise for the team to agree on a writing convention for the code, to avoid discussing personal preferences on pull requests, and focus on the essence of the code. But the agreement would only get you so far, and if these rules aren't enforced, they'll get lost over time.
Besides just checking for adherence to standards by means of tooling, it would be useful to automatically format the code directly.
There are multiple tools that automatically format Python code (for example, autopep8 rewrites code to make it PEP-8 compliant), and they're configurable and adaptable to each specific project. Among those, and perhaps precisely because it offers the opposite of full flexibility and configuration, there is one that I would like to highlight: black.
black (https://github.com/psf/black) has a peculiarity: it formats code in a unique and deterministic way, without allowing any parameters (except, perhaps, the length of the lines).
One example of this is that black will always format strings using double quotes, and the order of the parameters will always follow the same structure. This might sound rigid, but it's the only way to ensure the differences in the code are kept to a minimum. If the code always respects the same structure, changes in the code will only show up in pull requests with the actual changes that were made, and no extra cosmetic modifications. It's more restrictive than PEP-8, but it's also convenient because, by formatting the code directly through a tool, we don't have to actually worry about that, and we can focus on the crux of the problem at hand.
It's also the reason black exists. PEP-8 defines some guidelines to structure our code, but there are multiple ways of having code that is compliant with PEP-8, so there's still the problem of finding style differences. The way black formats code is by moving it to a stricter subset of PEP-8 that is always deterministic.
As an example, see that the following code is PEP-8 compliant, but it doesn't follow the conventions of black:
def my_function(name):
    """
    >>> my_function('black')
    'received Black'
    """
    return 'received {0}'.format(name.title())
Now, we can run the following command to format the file:
black -l 79 *.py
And we can see what the tool has written:
def my_function(name):
    """
    >>> my_function('black')
    'received Black'
    """
    return "received {0}".format(name.title())
On more complex code, a lot more would have changed (trailing commas, and more), but the idea can be seen clearly. Again, it's opinionated, but it's also a good idea to have a tool that takes care of details for us.
It's also something that the Golang community learned a long time ago, to the point that there is a standard tool, gofmt, that automatically formats code according to the conventions of the language. It's good that Python has something like this now.
When installed, the black command, by default, will attempt to format the code, but it also has a --check option that will validate the file against the standard and fail the process if it doesn't pass the validation. This command is a good candidate to have as part of the automatic checks and the CI process.
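For instance, a CI step could run the check like this (illustrative invocation; a nonzero exit status makes the build fail):

$ black --check -l 79 *.py
would reformat core.py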
It's worth mentioning that black will format a file thoroughly, and it doesn't support partial formatting (as opposed to other tools). This might be an issue for legacy projects that already have code in a different style, because if you want to adopt black as the formatting standard in your project, you'll most likely have to accept one of these two scenarios:
- Creating a milestone pull request that will apply the black format to all Python files in the repository. This has the disadvantage of adding a lot of noise and polluting the version control history of the repo. In some cases, your team might decide to accept the risk (depending on how much you rely on the git history).
- Alternatively, you can rewrite the history with the black format applied to the code. In git, it's possible to rewrite the commits (from the very beginning) by applying some commands on each commit. In this case, we can rewrite each commit after the black formatting has been applied. In the end, it would look like the project has been in the new form from the very beginning, but there are some caveats. For starters, the history of the project was rewritten, so everyone will have to refresh their local copies of the repository. Secondly, depending on the history of your repository, if there are a lot of commits, this process can take a while.
In cases where formatting in an "all-or-nothing" fashion is not acceptable, we can use yapf (https://github.com/google/yapf), another tool that has many differences with respect to black: it's highly customizable, and it also accepts partial formatting (applying the formatting only to certain regions of the file). yapf accepts an argument to specify the range of lines to apply the formatting to. With this, you can configure your editor or IDE (or better yet, set up a git pre-commit hook) to automatically format the code only in the regions that were just changed. This way, the project can get aligned to the coding standards, at staged intervals, as changes are being made.
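For example (the file name and line range are illustrative), a hook could format only the lines touched by a change:

$ yapf --in-place --lines 25-35 core.py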
To conclude this section on tools that format the code automatically, we can say that black is a great tool that will push the code toward a canonical standard, and for this reason, you should try to use it in your repositories. There's absolutely no friction with using black on newly created repositories, but it's also understandable that for legacy repositories this might become an obstacle. If the team decides that it is just too cumbersome to adopt black in a legacy repository, then tools such as yapf could be more suitable.
Setup for automatic checks
In Unix development environments, the most common way of working is through Makefiles. Makefiles are powerful tools that let us configure commands to be run in the project, mostly for compiling, running, and so on. Besides this, we can use a Makefile in the root of our project, with some commands configured to run checks on the formatting and conventions of the code, automatically.
A good approach for this would be to have a target for each particular check (type hinting, linting, tests, and so on), and then another one that runs them all together; for example:
.PHONY: typehint
typehint:
	mypy --ignore-missing-imports src/

.PHONY: test
test:
	pytest tests/

.PHONY: lint
lint:
	pylint src/

.PHONY: checklist
checklist: lint typehint test

.PHONY: black
black:
	black -l 79 *.py

.PHONY: clean
clean:
	find . -type f -name "*.pyc" | xargs rm -fr
	find . -type d -name __pycache__ | xargs rm -fr
Here, the command we run (both on our development machines and on the CI environment builds) is the following:
make checklist
This will run everything in the following steps:
- It will first check compliance with the coding guideline (PEP-8, or black with the --check parameter, for instance).
- Then it will check for the use of types in the code.
- Finally, it will run the tests.
If any of these steps fail, consider the entire process a failure.
These tools (black, pylint, mypy, and many more) can be integrated with the editor or IDE of your choice to make things even easier. It's a good investment to configure your editor to make these kinds of modifications either when saving the file or through a shortcut.
It's worth mentioning that the use of a Makefile comes in handy for a couple of reasons. First, there is a single and easy way to perform the most repetitive tasks automatically. New members of the team can quickly get onboarded by learning that something like make format automatically formats the code, regardless of the underlying tool (and its parameters) being used. In addition, if it's later decided to change the tool (let's say you're switching over from yapf to black), then the same command (make format) would still be valid.
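A sketch of such a target, consistent with the Makefile shown above (the underlying tool and its parameters can be swapped later without changing the command developers run):

.PHONY: format
format:
	black -l 79 *.py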
Second, it's good to leverage the Makefile as much as possible, and that means configuring your CI tool to also call the commands in the Makefile. This way, there is a standardized way of running the main tasks in your project, and we place as little configuration as possible in the CI tool (which, again, might change in the future, and that doesn't have to be a major burden).
Summary
We now have a first idea of what clean code is, and a workable interpretation of it, which will serve us as a reference point for the rest of this book.
More importantly, we now understand that clean code is something much more important than the structure and layout of the code. We have to focus on how ideas are represented in the code to see if they are correct. Clean code is about readability, maintainability of the code, keeping technical debt to a minimum, and effectively communicating our ideas in the code so that others can understand what we intended to write in the first place.
However, we discussed that adherence to coding styles or guidelines is important for multiple reasons. We agreed that this is a condition that is necessary, but not sufficient, and since it is a minimal requirement every solid project should comply with, it is clear that it is something we had better leave to the tools. Therefore, automating all of these checks becomes critical, and in this regard, we have to keep in mind how to configure tools such as mypy, pylint, black, and others.
The next chapter is going to be more focused on Python-specific code, and how to express our ideas in idiomatic Python. We will explore the idioms in Python that make for more compact and efficient code. In this analysis, we will see that, in general, Python has different ideas or different ways to accomplish things compared to other languages.
References
- PEP-8: https://www.python.org/dev/peps/pep-0008/
- mypy: http://mypy-lang.org/
- pytype: https://google.github.io/pytype/
- PEP-3107: https://www.python.org/dev/peps/pep-3107/
- PEP-484: https://www.python.org/dev/peps/pep-0484/
- PEP-526: https://www.python.org/dev/peps/pep-0526/
- PEP-557: https://www.python.org/dev/peps/pep-0557/
- PEP-585: https://www.python.org/dev/peps/pep-0585/