In the early days of packaging, everything was very ad hoc. Prior to Python 2.0 (released in 2000), distutils was not in the standard library, there was no PyPI, and generally people weren't expected to be constantly connected to the internet anyway; this is one reason that Python's "batteries included" philosophy was so popular — once you had Python installed, you didn't necessarily have to install a bunch of "add-on" packages for it to be useful. When you did install packages, the installation instructions were idiosyncratic, and were maybe detailed in the package's README. In Python 2.0, the distutils module was introduced as a standard way to convert Python source code into *nix distro packages, and to give system administrators a standard way to install packages directly onto target systems. The idea was that you'd basically write a "Makefile", but in Python and with a bunch of standard targets already defined. Consistent with Python's "batteries included" philosophy, distutils shipped in the standard library, so you could expect anyone with Python 2.0 or later to be able to run your setup.py with nothing else installed.
One major problem with this approach, though, was that every Python package had to use distutils and only distutils — there was no standard way for a package author to declare that you need other packages in order to build or test your package. So when setuptools came around — a drop-in replacement for distutils with various additional features — we were back to the "bad old days" where, in order to build a package, you had to explain in the README all the stuff that needed to be installed first. Once setuptools got popular enough, you ended up with a situation where setuptools was de facto part of the standard library, because everyone just installed it right away so that they would actually be able to install other packages; but this is not a good long-term solution: if your build required something that is not universally installed (e.g. numpy or Cython), your builds would just break when someone naïvely tried to install your package from source. Similarly, if you wanted to invoke setup.py test, you may have had a different set of dependencies, specifically for tests, and again there was no way to specify these.
The solution to these problems in setuptools was to separate out the requires field into target-specific requirements: install_requires for libraries required at runtime, tests_require for libraries required during the test target, and setup_requires for libraries required to do the build itself. When you ran setup.py, setuptools would globally install whatever was in setup_requires before actually attempting to build your package. Unfortunately, there are several problems with this. The first is that you may actually need one of these build requirements before the setup() function is executed; for example, Cython recommends that you do this:
from setuptools import setup
from Cython.Build import cythonize

setup(
    ext_modules=cythonize("helloworld.pyx"),
)
Note that cythonize must be imported before setup() is executed, and without manually installing Cython first, this is going to fail: Cython is imported before setuptools even gets a chance to install it!
The problem gets even worse when you consider that your package may have a dependency on a specific version of setuptools. setuptools cannot upgrade or downgrade itself in-place while it's running (or at least it shouldn't), so the best you can do is detect the version of setuptools you need (manually, before you use any of the specific features you need) and throw an exception if this fails. This is the core, unfixable problem with direct invocation of setup.py, and a primary reason why the maintainers of setuptools have abandoned trying to support this — your setup.py is necessarily designed to be invoked in a certain environment, with a certain version of Python and setuptools and target-specific dependencies installed, and there's no way for setup.py to create that environment for itself.
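For reference, the three target-specific fields described above were declared as keyword arguments to setup(). A minimal sketch (the package names and versions here are illustrative, not a recommendation):

```python
# Hypothetical setup.py illustrating the three target-specific
# requirement fields; package names are examples only.
from setuptools import setup

setup(
    name="helloworld",
    version="0.1",
    install_requires=["requests"],  # needed at runtime
    tests_require=["pytest"],       # needed for `setup.py test` (deprecated)
    setup_requires=["cython"],      # needed to run the build itself
)
```

Note that setup_requires is processed inside setup() itself, which is exactly why it can never help with the Cython import problem above.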
Build frontends and backends
The problem with build-time dependencies did not go unnoticed, and was in fact one of the earliest targets for the latest wave of attempts at standardizing the way Python code is packaged and deployed. PEP 518 introduced the pyproject.toml file and a mechanism to specify your build-time dependencies, so that, e.g. pip can create an environment that, hopefully, can take your sdist and build it into a package. PEP 517 allows you to specify entry points for the build other than setup.py, which introduces the important delineation between build frontends and build backends.
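Concretely, these two PEPs come together in the [build-system] table of pyproject.toml. A minimal sketch (the pinned versions are illustrative):

```toml
[build-system]
# PEP 518: what a frontend must install before the build starts
requires = ["setuptools>=42", "wheel", "cython"]
# PEP 517: the object the frontend imports and calls to perform the build
build-backend = "setuptools.build_meta"
```

With this in place, a frontend can bootstrap the build environment itself — which is precisely what setup.py could never do.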
A build backend is something like setuptools or flit, which is a library that knows how to take a source tree and turn it into a distributable artifact — a source distribution or a wheel.
A build frontend is something like pip or build, which is a program (usually a CLI tool) that orchestrates the build environment and invokes the build backend — either constructing an environment in which all build-time dependencies are satisfied, or at least checking to be sure that the environment that the build is being invoked in meets the package's requirements.
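The division of labor can be sketched with a toy example. The hook name build_wheel (and its signature) is the real one from PEP 517, but the backend below is a stand-in rather than setuptools or flit:

```python
# Toy sketch of the PEP 517 frontend/backend split. build_wheel is the
# real hook name from the spec; everything else here is illustrative.
class ToyBackend:
    """Stands in for a real backend such as setuptools.build_meta."""

    def build_wheel(self, wheel_directory, config_settings=None,
                    metadata_directory=None):
        # A real backend would turn the source tree into a wheel here
        # and return the basename of the wheel it wrote.
        return "helloworld-0.1-py3-none-any.whl"


def frontend_build(backend, out_dir):
    # A frontend only needs the standard hook names; it does not care
    # whether the backend is setuptools, flit, or anything else. A real
    # frontend would also construct an isolated environment containing
    # the [build-system] requires before calling the hook.
    return backend.build_wheel(out_dir)


print(frontend_build(ToyBackend(), "dist/"))
```

The point of the standard is exactly this indirection: the frontend talks to an interface, not to setuptools.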
In this taxonomy, setuptools has historically been both a backend and a frontend — when you invoke setup.py sdist or setup.py install, it is playing the role of frontend CLI, and when your setup.py contains import setuptools it is playing the role of backend. That said, setuptools is a terrible frontend. It does not implement PEP 517 or PEP 518's requirements for build frontends (and in fact cannot do so using the setup.py interface, as we've already seen), and it only works when your backend is setuptools (which rather defeats the purpose of standardizing these roles).
Theoretically it is possible for setuptools to grow a new frontend entry-point, so that you can replace all your setup.py <cmd> invocations with something like python -m setuptools <cmd>, but there are several things going against this:
- This still doesn't allow users to directly invoke setup.py, which is what this post is about. End users will still need to undertake a migration away from setup.py invocations.
- setuptools is a hot mess of legacy code and you really don't want to install it in your global Python instance unless you absolutely must. Installing setuptools also installs pkg_resources, and as part of the deprecation of distutils, having setuptools installed at all even when you don't import it can change the behavior of import distutils. This is not to say you shouldn't use setuptools, but when functionality can easily live on its own independent of setuptools, it should.
- Maintaining setuptools is difficult enough, and the very few people who maintain it have very limited availability. In the long run, the project's goal is to focus on its core competency as a build backend, and eliminate extraneous and broken functionality.
At the end of the day, it is much better to replace your setup.py commands with tools purpose-built for this (e.g. build instead of setup.py sdist or setup.py bdist_wheel, tox or nox instead of setup.py test and/or custom commands), backed by projects intending to support the relevant functionality.
Why am I not seeing deprecation warnings?
For several years I've been telling people not to use setup.py and frequently people are confused as to why so much of setuptools is effectively deprecated, but there are no warnings anywhere, and this policy is not officially documented. This is a genuine failing of the project and hopefully at some point this section of the article will be obsolete because all direct setup.py invocations will come with some sort of warning or will have been removed, but I can at least try to explain why doing so is a bit trickier than it seems.
Each not-yet-deprecated command has its own reasons for not having warnings associated with it, but they mostly all fit into one of the following categories:
Many commands still have valid indirect uses: As mentioned earlier, the setuptools code base is not a clean application with well-documented call graphs and good separation of concerns. It's built directly on top of distutils, monkey-patches huge sections of distutils, and (in some cases) has moved distutils directly into its own source tree, replacing the standard library's distutils when setuptools is installed. Broadly speaking, the moves towards standardization that have taken place have been built by adding a layer of abstraction on top of the existing complexity, rather than by refactoring it into something simpler. For example:
- When a project doesn't use PEP 517, pip will set up an environment where it's safe to invoke setup.py directly and do that.
- Under the hood, the bdist_wheel command works by invoking setup.py install into a temporary directory, copying portions of the result into a zip file, and then adding the requisite metadata to create a valid wheel.
- PEP 517 support for setuptools is implemented by invoking setup.py in a subprocess.
Right now, there's no good way to distinguish between people running setup.py install from the command line and people running python -m build on a pyproject.toml-capable project, and as such it's not simple to warn the people doing the wrong thing without also warning the people doing essentially the right thing. This is almost certainly fixable, but it's more complicated than just adding a warning into the install command class.
There isn't a good replacement story: It is incredibly frustrating when a program says "You shouldn't do this anymore" but doesn't have any information about what to do instead — even if the answer is just, "This isn't necessary any more, so stop doing it". For a long time, there was no obvious replacement for setup.py sdist or (to a lesser extent) setup.py bdist_wheel. The pep517 library had a pep517.build command, but it was more of a proof-of-concept and people were reluctant to adopt it, so the setuptools maintainers were in the unenviable position of saying, "Well, we aren't supporting setup.py sdist, but we can't tell you what to use, either." This was solved when the build project was released, but there are some other commands that are in a similarly tenuous position.
Right now the most common targets have replacements: install, sdist, develop and bdist_wheel all have standards backing them, and test can be replaced by tox, nox or even a Makefile. If you use one of the more obscure commands and cannot find a replacement, it would be good to raise that on the packaging discourse to see what replacements are or could be made available.
Lack of resources: At this point, almost all development and maintenance on setuptools is done by Jason R. Coombs, who also maintains nearly 150 other Python packages. The work of actively documenting and deprecating large sections of the interface is tricky and generally thankless (see, e.g. the replies in this thread), so drive-by volunteers are usually neither capable of nor interested in doing it. If your company is interested in funding this work, I recommend getting in touch with the Packaging Working Group or making a directed donation (though it's hard not to be tempted by the long list of other packaging improvements that need funding...).
If it ain't broke?
So far, I've given you reasons why setuptools doesn't want to keep supporting this mode of operation, but you may be asking yourself, "setup.py has been working for years, why do we have to stop using it if we don't care about any of the new features that PEP 517 brings us?" Unfortunately, there's a false premise in this reasoning: what we have now often is not working, and can have subtle and very unfortunate breakages. Over time, the setuptools maintainers have been able to rein in some of the worst of these — either by removing functionality or by adding mitigations — but little-to-no effort is devoted to fixing anything that can be fixed by having the end user stop invoking setup.py commands.
One example is that before the upload command was removed, setuptools was sending PyPI the wrong metadata (despite the sdist having the right metadata), which made PyPI ignore the python_requires attribute, causing problems like this. This was particularly insidious because you'd try to do the right thing by specifying the minimum required Python version, on the assumption that people on earlier versions would automatically get the last version of your package that supported them. But when that information was dropped, they'd instead get a package that setuptools would refuse to install; the installation would simply fail, and there was no recourse except removing the package entirely, because yanking did not yet exist, and uploading a new release with good metadata would make pip dutifully fall back to the last version that claimed to support your Python version... the broken package.
Additionally, many of the old code paths that used setup_requires and tests_require would fall back to easy_install (a pre-pip-era installer bundled with setuptools), which has/had myriad problems. Often it would make a mess of your metadata, and it was not possible for pip to uninstall the packages installed this way. It did not respect pip's settings for custom certificates and index URLs, so, best-case scenario, users behind a corporate MITM proxy would get failures whenever one of these code paths was hit. Many of these issues have been mitigated by recent work, but that work has primarily been about removing dangerous functionality and adding warnings rather than supporting broken workflows.
As more and more software starts to rely on modern packaging standards, expect to see more breakages from these old code paths. As mentioned in this issue, direct setup.py invocations have effectively been unmaintained for several years now. Whenever someone raises an issue with a reproducer that involves invoking setup.py, the maintainers ask for a reproducer that doesn't hit this code path, and if one can't be found the issue is closed. Put another way, direct invocations of setup.py that currently work for you do so essentially by chance — if something breaks, you are on your own.
So far this article has been fairly bleak because it's all about what's breaking or broken, but there's another way to see this story: this is all a manifestation of creative destruction. These changes are not change for the sake of change, but a sign of a healthy and competitive ecosystem. The real bleak future isn't one without setup.py install and setup.py test, but one where everyone is forced to use setuptools or some bug-for-bug compatible replacement.
When you move away from direct setup.py invocations (note: this does not mean removing setup.py entirely or switching away from setuptools, it just means that you change all your scripts and documentation to tell people not to do this) you get many direct advantages:
- Proper dependency management for your build and test targets.
- You can migrate everything to a single, declarative package configuration file: although you don't have to, for most projects (particularly pure Python projects) you can remove your setup.py file and replace it with setuptools' own declarative configuration format.
- Your builds will use the latest versions of your dependencies: PyPI is plagued by packages built or uploaded with ridiculously out-of-date versions of setuptools and wheel, because "whatever setuptools is installed in my Python environment" is often some ancient version that Debian has patched for inscrutable reasons. These will generate out-of-date metadata files and packages based on old standards, holding back the evolution of the ecosystem.
- Your build and runtime environments will be isolated by default: you really don't want a runtime dependency on setuptools or wheel or Cython, and if you've accidentally picked one up, you need to either fix it or update your install_requires. When you use setup.py install or setup.py sdist, you need to have your build dependencies installed in your global runtime, whereas in pyproject.toml builds, a separate environment is created to build the wheel, and that wheel is installed along with only its runtime dependencies.
- When using standards-based (and supported) workflows, you will make it much easier for downstream packagers and new contributors to use tooling built around the new packaging standards in such a way that they do not rely on the particulars of your build system. If your supported, documented and tested installation workflow is via a PEP 517 backend, you can safely switch from setuptools to flit or another backend and back again without breaking your users' workflows.
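For the declarative-configuration point above, a minimal setup.cfg for a pure Python project might look something like this — the section and key names are setuptools' real declarative keys, while the metadata values are illustrative:

```ini
[metadata]
name = helloworld
version = 0.1

[options]
packages = find:
install_requires =
    requests
```

For a project like this, setup.py can shrink to a stub or disappear entirely.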
In addition to these benefits, you really want to get on top of this sooner rather than later, because this state of affairs is here to stay, and many people are motivated to improve adoption of the new standards. The earlier you undertake the migration away from direct setup.py invocations, the earlier you find your pain points, and you can raise issues on the setuptools issue tracker or the packaging discourse to determine the best course of action. It's much better to raise an issue like, "How do I do X without invoking setup.py?" while there's still time to tweak the standards (or design new standards) and have those standards propagate before setuptools actively removes your current workflow.
For those iron-willed souls who made it this far, I congratulate you on your endurance — truly you are the future rulers of mankind. For everyone else who cheated by skipping to the end hoping for a summary, I'll reward your laziness with bullet points:
- The setuptools project stopped maintaining all direct invocations of setup.py years ago, and distutils is deprecated. There are undoubtedly many ways that your setup.py-based system is broken today, even if it's not failing loudly or obviously.
- Direct invocations of setup.py cannot bootstrap their own dependencies, and so some CLI is necessary for dependency management.
- The setuptools project no longer wants to provide any public CLI, and will be actively removing the existing interface (though the time scale for this is long).
- PEP 517, 518 and other standards-based packaging are the future of the Python ecosystem and a lot of progress has been made on making this upgrade seamless.
At this point you may be expecting me to give you a canonical list of the right way to do everything that setup.py used to do, and unfortunately the answer here is that it's complicated. Part of the reason for all the churn you've been seeing in the Python packaging landscape over the past several years is the move away from a model where there is one, single authoritative way to do things and towards fostering an environment that allows people to develop different tools that work for their workflow. Thanks to all the standards work that's gone on, there has been a profusion of new packaging projects arising, and you should look to see which ones fit your needs. That said, I can give you some simple "works for most people" recommendations for some of the common commands:
| setup.py command | replacement |
| --- | --- |
| setup.py sdist / setup.py bdist_wheel | python -m build (with the build package) |
| setup.py test | pytest (usually via tox or nox) |
| setup.py develop | pip install -e |
| setup.py upload | twine upload (with twine) |
| setup.py check | twine check (this doesn't do all the same checks, but it's a start) |
| custom commands | tox and nox environments |
If there are workflows that you cannot find a replacement for and/or if you'd like to get involved, the Python packaging community is very welcoming, once you get over the fact that they're not going to undeprecate whatever feature you feel you desperately need. Of course, one might also consider trap door spiders very welcoming...