Whenever I publish something about my Python Docker workflows, I invariably get challenged about whether it makes sense to use virtual environments in Docker containers. As always, it’s a trade-off, and I err on the side of standards and predictability.
I am keenly aware that – especially with single-purpose Python containers like python:.* – it’s popular to achieve multi-stage builds by either installing everything globally and COPYing over site-packages from the guts of the Python installation, or by setting the PYTHONUSERBASE environment variable to something like /app, using pip install --user instead of pip install, and then COPYing over that. All under the premise that it’s somehow simpler and one layer of isolation is plenty.
I do not dispute the sufficiency of one layer of isolation, but over the years, I have found that in production, there’s more than one aspect to keep in mind when talking about simplicity.
As an overarching theme, my goal is not to mindlessly follow some ~best practices~ that add complexity for questionable payoffs because a big tech developer advocate said so at a conference. But I spend a lot of time thinking about the secondary effects of what I do.
So, here’s my incomplete list of reasons why I still use virtual environments and plan to do so in the future:
Predictability and familiarity
Their structure is well-defined and designed to hold a single Python application. I love how they’re a directory hierarchy that includes bin, lib, and share directories, which makes them a great “container” for self-contained applications by default. It invites the storage of auxiliary files, including the odd config file in an additional etc directory. All that makes them a great fit for something to put into /opt/app or /app.
Yes, unlike ten years ago, you don’t have to isolate your applications from your system Python anymore. But keeping your code in an isolated, well-defined location and structure has value in itself.
I’m responsible for dozens of services, so I appreciate the consistency of knowing that everything I’m deploying is in /app, and if it’s a Python application, I know it’s a virtual environment, and if I run /app/bin/python, I get the virtual environment’s Python with my application ready to be imported and run.
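To make that layout concrete, here is a minimal sketch of a multi-stage build that produces it. The base image and the yourapp module are placeholders, and it leaves out all the caching and locking details that matter in practice:

```dockerfile
# Build everything into a virtual environment at /app …
FROM python:3.12-slim AS build
RUN python -m venv /app
COPY . /src
RUN /app/bin/pip install /src

# … and ship only that directory: /app/bin/python is the venv's interpreter
# with the application and its dependencies importable.
FROM python:3.12-slim
COPY --from=build /app /app
CMD ["/app/bin/python", "-m", "yourapp"]
```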
Standards and communication
All this consistency makes communication within and across teams easier. Everyone knows what it means if I say/document that a deployment artifact is a virtual environment. And if they don’t, the internet is full of documentation – including the official one. It’s like when Black made us stop thinking and arguing about code style and freed those mental resources for more important things1. That’s the ultimate value of standards, even if one doesn’t like everything about them.
And virtual environments have been a core Python feature for 12 years and a core concept in the Python community since the mid-2000s. It’s the closest thing we have to an enclosed, standardized, and well-understood application build artifact in Python. It’s a stretched analogy, but I think of them as the result of linking a dynamic binary in compiled languages.
You can use them locally, you can deploy them using a distro package, and you can deploy them using a Docker container. It’s good to use the same tools and primitives in development and in production. It means you know your tools and have fewer things to keep in mind. Therefore, using a different method to deploy your application had better come with tangible upsides.
Import complexity collapse
It’s a general upside of virtual environments – both locally and in Docker – that you can narrow down the relevant search paths where Python will look for your code to import2. Running Python in isolated mode by passing -I will narrow this down even further.
So, if you a) never install anything globally3, and you b) use the python binary from the virtual environment while passing it -I, you know everything that’s not in the standard library must be in the virtual environment. That makes Python’s import behavior much more predictable and debugging import issues less of a murder mystery.
Which brings us to…
A bonus point that can safely be ignored but that I need to get off my chest
Hell hath no fury like how I feel about pip install --user. It’s an attractive nuisance that has done a lot to earn Python its bad packaging reputation.
It’s mainly used to paper over the even worse effects of manipulating the system’s site-packages directory – the most common cause of broken Python installations. All to enable user-local installations4, which are a bad idea to begin with. In that sense, the npm ecosystem with its project-directory-local-first packaging was far ahead of us, stuck in our old, lazy ways.
In the end, it made reasoning about Python installations even more complicated by adding another moving part. If I had a dime for every time someone yelled at me on a bug tracker over a bug caused by weird stuff in their user-local Python packages, I wouldn’t have to beg for sponsorships. As much as I love PDM, I thank Cthulhu every day that PEP 582 / __pypackages__ was rejected, because it would’ve added another vector of confusion. Standardizing on project-local .venv was the unequivocally right move, even if I prefer to store my virtual environments in a central place.
Finally: Like, … why at all?
What problem are you exactly trying to solve here? It’s not an extra tool or an extra concept – either it’s right there in the standard library, or you’re using uv already. With uv venv, creating a virtual environment doesn’t take noticeably longer than a mkdir. They might be slightly bigger, but are they bigger in a way that actually matters? You’re not making anything simpler by avoiding the one standard way to isolate a Python project with all its dependencies into an easy-to-handle directory.
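Either way, it amounts to a single line in a Dockerfile; pick whichever of the two you already have in your build stage (this is just an illustration, not a complete build):

```dockerfile
# With nothing but the standard library:
RUN python -m venv /app
# …or, if uv is already in the image:
RUN uv venv /app
```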
I wonder how much of the resistance against virtual environments isn’t technical at all, but rooted in Homebrew breaking them with every Python update and Debian making them a pain to use with their annoying unbundling of Python installations.
I hope uv’s convenience and speed will change their public perception.
Epilogue
The shortcuts I described at the beginning of the article rely on assumptions about the location and portability of site-packages. They introduce complications like using pip install --user (which is semantically odd in Docker and was never meant to be used for something like this), and they may require unfamiliar tools and paradigms compared to what you’re using in development.
But I’m not trying to convince you to do anything. I do realize some of my reasons are on the nontangible ~vibes~ side. You’re free to do what you want if you’re within the parameters of those shortcuts and are happy with the trade-offs. But I hope I’ve shown you that it definitely does make sense to use virtual environments in Docker in general – whether it makes sense for you is for you to decide. I’m just sick of defending myself every single time, so I’m making my case here, once and for all.
If after all this you’re still interested in what I have to say about Python and Docker, I recommend starting with Production-ready Docker Containers with uv, which is a living document of how I build Docker containers for Python applications, as fast as possible.
P.S. I can’t believe I’m publishing this almost to the day on the tenth anniversary of my venerable virtualenv Lives!! A lot has changed for the better, but virtual environments keep giving.
1. Static types. Łukasz made Black, so we have the mental capacity to learn about type hints. You heard it here first.
2. Introspectable with python -m site.
3. You can tell Pip to fail if accidentally asked to install globally. uv is virtual environment-only by default.
4. Global to the user but isolated from the system; on UNIX systems, usually under ~/.local/.