My Python Dev Workflow: Conda + uv for Scientific Computing
Posted on Sun 15 February 2026 in Python • 17 min read
If you work in scientific computing with Python, you already know the pain. You need NumPy compiled against the right BLAS. You need CUDA libraries that match your GPU driver. You need compiled Fortran extensions, system-level dependencies, and a Python version that all of your packages actually support. And then you need to manage fifty pure-Python packages on top of all that.
I have tried most of the mainstream options over the years. virtualenv with requirements.txt. pipenv when it was the hot new thing. Poetry for a long stretch. Conda alone for scientific work. And more recently, uv alone. Each one solves part of the problem and introduces new friction elsewhere. Poetry cannot handle non-Python dependencies. Conda's resolver makes you wait long enough to question your career choices. uv is blazing fast but does not know what to do with CUDA. And every time I switched tools, I spent a day re-learning how to do things I already knew how to do.
After a lot of trial and error -- and several frustrating afternoons spent debugging environment issues instead of writing actual code -- I have settled on a hybrid that I am genuinely happy with: conda for the environment, uv for the packages.
This post is the workflow I wish I had found when I started. It is not a neutral comparison of tools. It is opinionated, it is specific, and it includes the five pitfalls that cost me the most time figuring out.
I am writing it primarily as a reference for future-me, but if you are doing scientific Python work and want a fast, reproducible setup, this might save you some grief. The post assumes basic familiarity with conda and Python packaging concepts, but I have tried to explain the non-obvious bits.
Why Not Just One Tool?
The reason I ended up with two tools instead of one is simple: no single tool handles the full stack well enough.
Conda is excellent at the hard parts. It manages Python versions, CUDA toolkits, system libraries, and compiled packages like NumPy and SciPy. It creates properly isolated environments. If you need libopenblas, cudatoolkit, or a specific Python build, conda handles that without you having to think about compilation flags. For scientific computing, this is non-negotiable.
But conda is slow at the routine parts. Resolving pure-Python dependencies can take minutes. The lockfile story is awkward -- conda-lock exists but it is another tool in the chain. And conda install for packages that are perfectly fine as wheels is overkill. You do not need conda to install requests or click, and waiting 45 seconds for conda's solver to figure that out gets old fast.
uv is the opposite. Developed by the Astral team (the same people behind Ruff), it resolves and installs pure-Python packages absurdly fast -- we are talking 10 to 100 times faster than pip. It produces a proper lockfile (uv.lock) with pinned versions and hashes. It is PEP 621 compatible, so your pyproject.toml stays standard and works with other tools in the ecosystem. But uv does not manage system libraries, CUDA, or compiled scientific packages as well as conda does. It is a package manager, not an environment manager in the conda sense.
The insight: let each tool do what it does best. Conda creates the sandbox -- the Python version, the system libraries, the compiled core. uv fills the sandbox with everything else. No overlap, no conflict. They meet in the middle at the environment directory, connected by a single environment variable.
I think of it like this: conda is the architect who builds the foundation and walls. uv is the interior designer who fills the rooms. You would not ask the architect to pick furniture, and you would not ask the designer to pour concrete.
What About Poetry?
You might ask: why not Poetry? I used Poetry for years and still think it is solid for pure-Python projects. But three things pushed me away for scientific computing work.
First, Poetry's default caret constraints (numpy = "^1.26", which expands to >=1.26,<2.0.0) silently create version ceilings that cause headaches when you want to upgrade (more on this in the pitfalls section). I have been bitten by this enough times that I now consider it a design flaw rather than a feature.
Second, Poetry cannot manage non-Python dependencies at all. If your project needs libopenblas, cudatoolkit, or a specific HDF5 build, Poetry has no answer. For scientific work, non-Python dependencies are half the battle. Conda handles this natively.
Third -- and this is subjective but it matters more than you would expect -- uv is just faster. Noticeably faster. uv add resolves in seconds where poetry add takes tens of seconds or more. When you are iterating on a dependency list during project setup, that speed difference genuinely changes how you work. You stop batching dependency additions and just add things as you need them. It feels like the difference between an SSD and a spinning disk -- the same operations, but the fast version removes the friction that made you avoid doing them.
If you are curious about Poetry's strengths, I wrote about dynamic versioning with Poetry -- it is still a great tool for the right use case. But for projects where I need conda anyway, uv is a better complement.
The Multi-Machine Problem
There is another reason I landed on this split, and it took me a while to articulate it: conda environments are not portable across machines.
I work across multiple machines, sometimes different operating systems. Git tracks my code, but a development folder accumulates a lot more than committed code -- experimental notebooks, brainstorm notes, scratch scripts, data files. I use rsync to keep these folders in sync. It works beautifully for everything except conda environments.
A conda env is full of OS-specific binaries. rsync a Linux conda env to a Mac and you get a directory of broken symlinks and incompatible shared libraries. Even between two Linux machines with slightly different system libraries, rsynced envs can fail in subtle ways. The environment looks intact but segfaults when you import NumPy.
This is where the conda + uv split pays off. You never rsync the environment itself. You rsync your project files -- including pyproject.toml and uv.lock -- and on the other machine you run conda create -n myproject python=3.12 && uv sync. Fresh environment, identical packages, correct binaries for that OS. The declarative config files travel; the binary artifacts are rebuilt locally. It takes two minutes and it works every time.
This was honestly one of the biggest quality-of-life improvements. I stopped thinking of environments as precious state to preserve and started treating them as disposable artifacts that I regenerate from config. The source of truth is the lockfile, not the env directory.
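The sync-and-regenerate flow looks roughly like this. The hostname and paths are placeholders for illustration, and the exclude list is a minimal sketch -- adapt it to whatever build artifacts your projects accumulate:

```shell
# Sync project files between machines, but never the environment itself.
# "workstation" and the paths here are placeholders -- adapt to your setup.
rsync -av --exclude '.venv/' --exclude '__pycache__/' \
    ~/projects/myproject/ workstation:~/projects/myproject/

# On the other machine: rebuild the environment from the config files.
# pyproject.toml and uv.lock travelled with the rsync; the binaries did not.
conda create -n myproject python=3.12 -y
conda activate myproject
cd ~/projects/myproject
uv sync   # reads pyproject.toml + uv.lock, installs identical versions
```

The key property is that nothing OS-specific ever crosses the wire: the lockfile is plain text, and each machine compiles or downloads binaries appropriate for its own platform.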
The Core Trick: One Line That Makes It Work
If you take only one thing away from this post, let it be this section. The entire conda + uv integration hinges on a single environment variable. Add this to your ~/.bashrc (or ~/.zshrc if you use zsh):
# Conda + uv hybrid: tell uv to use the active conda environment
if [ -n "$CONDA_PREFIX" ]; then
    export UV_PROJECT_ENVIRONMENT="$CONDA_PREFIX"
fi
Here is what happens. When you activate a conda environment, conda sets CONDA_PREFIX to the environment's path. This bashrc snippet detects that and exports UV_PROJECT_ENVIRONMENT, which tells uv to install packages directly into the conda environment instead of creating its own .venv directory.
The result: uv add numpy installs into your conda env. conda list sees it. uv pip list sees it. Everything lives in one place. No .venv directory cluttering your project root. No confusion about which environment is active. No packages disappearing because they were installed into a different environment than the one you thought was active.
It is a conditional export, so it only fires when a conda env is active. It works for every conda environment, not just one. And you set it up once and forget about it.
You can verify it is working:
conda activate myproject
echo $UV_PROJECT_ENVIRONMENT
# Should print: /home/user/anaconda3/envs/myproject
uv add requests
# Installs into the conda env, not a .venv
I wrote a related post about making conda work with uv that covers the VIRTUAL_ENV side of this same problem -- tools that look for VIRTUAL_ENV instead of CONDA_PREFIX. The bashrc trick here is the complementary fix for uv specifically. Between the two, your conda environments become fully visible to the modern Python tooling ecosystem.
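For completeness, the VIRTUAL_ENV side of the fix has roughly the same shape. This is a minimal sketch, not the exact snippet from that post -- some tools warn when VIRTUAL_ENV points at a conda environment, so test it against your own toolchain before adopting it:

```shell
# Companion snippet for tools that check VIRTUAL_ENV rather than CONDA_PREFIX.
# Only set it when a conda env is active and no real virtualenv is in play.
if [ -n "$CONDA_PREFIX" ] && [ -z "$VIRTUAL_ENV" ]; then
    export VIRTUAL_ENV="$CONDA_PREFIX"
fi
```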
If you are wondering whether this is fragile -- it is not. I have been using it for months across multiple projects without issues. The CONDA_PREFIX variable is a stable part of conda's API, and UV_PROJECT_ENVIRONMENT is a documented uv configuration option. Both are unlikely to break.
The Workflow: Bootstrapping a New Project
Before you start, you need three things in place:
- Conda installed and initialised in your shell (Anaconda or Miniconda both work)
- uv installed globally: pip install uv or pipx install uv
- The bashrc export from the previous section
With those set up, here is the actual sequence I follow when starting a new scientific Python project. It looks like a lot of steps written out, but once you have done it twice, it takes about ten minutes.
Step 1: Create the conda environment.
conda create -n myproject python=3.12 -y
conda activate myproject
Use whatever Python version you need, but be deliberate about it. Do not accept the default. This version anchors everything else -- your requires-python, your dependency resolution, your compatibility range. I use 3.12 for most projects right now, but adjust based on what your dependencies support.
Step 2: Initialise the uv project.
uv init --name my-project --no-workspace
rm .python-version # Delete this immediately (see pitfalls)
rm main.py # uv creates a hello-world stub you do not need
The --no-workspace flag prevents uv from trying to detect a parent workspace. Without it, uv walks up the directory tree looking for a pyproject.toml that defines a workspace. If it finds one without a [project] table -- common if you have other projects in parent directories -- it errors out. I always use --no-workspace unless I am explicitly setting up a multi-package workspace.
The .python-version file uv creates is almost always wrong in a conda context -- delete it before it causes trouble. And main.py is a hello-world stub that just clutters your project root.
Step 3: Fix requires-python.
Open pyproject.toml and tighten the Python version range to match your conda environment:
[project]
name = "my-project"
version = "0.1.0"
requires-python = ">=3.12" # Match your conda env exactly
dependencies = []
Do not leave this as >=3.10 or whatever uv defaulted to. If your conda env runs Python 3.12, set >=3.12. This is not just a cosmetic detail -- it directly affects how uv resolves your dependencies. I will explain why in the pitfalls section, but the short version is: uv resolves for every Python version in this range, and a range that is too broad causes phantom failures.
Step 4: Add dependencies.
# Regular dependencies first
uv add numpy pandas scipy matplotlib
# Dev dependencies
uv add --group dev pytest mypy ruff jupyterlab
# Editable local packages LAST (if you have any)
uv add --editable ./libs/my-library
The ordering matters -- regular dependencies first, editable local packages last. This is not just a suggestion. Getting this wrong leads to silent failures where uv appears to succeed but the package is not actually importable. I will explain why in the pitfalls section.
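After those uv add commands, the pyproject.toml ends up looking roughly like the sketch below. The version numbers are illustrative, not what you will actually resolve, and the [tool.uv.sources] entry assumes the hypothetical ./libs/my-library editable package from step 4:

```toml
[project]
name = "my-project"
version = "0.1.0"
requires-python = ">=3.12"
dependencies = [
    "numpy>=2.2",       # versions here are illustrative
    "pandas>=2.2",
    "scipy>=1.15",
    "matplotlib>=3.10",
    "my-library",       # the editable install is recorded as a source below
]

[dependency-groups]
dev = [
    "pytest>=8.3",
    "mypy>=1.14",
    "ruff>=0.9",
    "jupyterlab>=4.3",
]

[tool.uv.sources]
my-library = { path = "libs/my-library", editable = true }
```

Note that the dev tools land in the standard [dependency-groups] table rather than [project.dependencies], so they are installed for development but never declared as runtime requirements.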
Step 5: Verify and go.
python -c "import numpy; print(numpy.__version__)"
uv pip list
If everything resolves, you are done. Commit your pyproject.toml and uv.lock, and your collaborators can reproduce the environment with conda create + uv sync.
A note on reproducibility: the uv.lock file pins exact versions of every package and their hashes. This is what makes the setup reproducible across machines. Your collaborator creates a fresh conda env with the same Python version, runs uv sync, and gets an identical dependency tree. No "works on my machine" surprises. This is a significant improvement over conda's own lock story, which has been historically awkward (conda-lock exists but adds yet another tool to the chain).
Pitfalls I Hit Along the Way
Most of these come from uv making reasonable assumptions that happen to be wrong in a conda context. Each one cost me at least an hour. Here they are, compressed into the bits that actually matter.
The Phantom Python Version
uv init creates a .python-version file based on your system Python (e.g., 3.10), not your active conda env (3.12). This file takes priority, and uv will refuse to use the conda env because the versions do not match. The error is cryptic: "... cannot be used because it is not a compatible environment."
Fix: delete .python-version immediately after every uv init. No exceptions. Let uv fall back to requires-python in pyproject.toml, which you control explicitly.
The Python 3.14 Problem
Setting requires-python = ">=3.10" seems reasonable. But uv resolves dependencies for every version in that range -- including versions like Python 3.14 that your environment is not even using. If any version in the range has a conflict, the entire resolution fails with errors about Python versions you have never used.
Fix: tighten requires-python to match your conda env. If you created python=3.12, set requires-python = ">=3.12". This is the single most confusing uv behaviour in a conda context, and the fix is trivial once you know about it.
The Invisible Rollback
When uv add fails, it atomically rolls back all changes to pyproject.toml and uv.lock. No partial state. The dependency you thought you added is not there. This is actually good design -- no half-broken configs -- but it is disorienting when you do not expect it. If uv add fails, fix the root cause and retry from scratch.
Order Matters: Regular Before Editable
Adding an editable local package before regular PyPI dependencies can silently fail. The package appears in pyproject.toml but is not actually installed -- no error, just ModuleNotFoundError at import time. uv's resolver needs the regular dependency graph established first so it can resolve the editable's transitive dependencies against it.
Fix: always add regular dependencies first, editable ones last. If an editable silently fails to install, this is the first thing to check.
Poetry's Caret Trap
If you depend on a library that uses Poetry, its caret constraints (numpy = "^1.26" means >=1.26,<2.0.0) create invisible version ceilings. Your project wants numpy>=2.4, the library caps at <2.0.0 -- resolution fails. The fix is in the library, not your project: update its constraints. This is one of the reasons I moved away from Poetry for new projects -- the caret default creates hidden upper bounds that only surface when a major version bump happens downstream.
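To make the trap concrete, here is roughly what the two constraint styles look like in the downstream library's pyproject.toml. The package names and versions are illustrative:

```toml
# Poetry-style dependency table in the downstream library.
[tool.poetry.dependencies]
python = "^3.10"
numpy = "^1.26"   # caret: expands to >=1.26,<2.0.0 -- a hidden ceiling

# The fix, inside that library: an explicit lower bound with no implied cap,
# or a deliberate, actually-tested upper bound.
# numpy = ">=1.26"
```

The caret looks like a harmless floor, but the implied <2.0.0 is what collides with your numpy>=2.4 requirement. Nothing in your own project's config hints at the problem until resolution fails.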
When This Workflow Is Not the Right Fit
I have been fairly enthusiastic in this post, so let me be honest about the limitations. The conda + uv hybrid is excellent for scientific computing projects where you control the deployment environment. It is not the right choice for every Python project.
If you are building a pure-Python library intended for PyPI distribution, you probably do not need conda at all. uv alone (or Poetry, or Hatch) will serve you better. Your users should not need to install conda just to use your package, and your CI/CD pipeline will be simpler without it.
If your deployment target is a Docker container, you might be better off with a simpler pip install -r requirements.txt or a pure uv setup with uv pip compile. Conda adds significant image size (often hundreds of megabytes), and in a container you control the base image and system libraries directly through the Dockerfile. The conda layer becomes redundant overhead.
If your team has standardised on Poetry and everyone is happy with it, switching mid-project creates churn without proportional benefit. Do not fix what is not broken. The conda + uv setup shines most when you are starting fresh or when conda is already part of your workflow for managing non-Python dependencies.
This workflow is for people who are already using conda (or need to for system-level dependencies) and want faster, more modern Python package management layered on top. If that describes you, read on. If it does not, uv by itself is excellent and you should try it.
Quick Reference: The Cheat Sheet
Here is the condensed bootstrap sequence. Copy this, adapt the names, and you are off. I keep a version of this in my project template so I do not have to remember the order.
# 1. Create and activate conda env
conda create -n myproject python=3.12 -y
conda activate myproject
# 2. Initialise uv project (in your project directory)
uv init --name my-project --no-workspace
rm .python-version main.py # Clean up uv init artifacts
# 3. Fix requires-python in pyproject.toml
# Set: requires-python = ">=3.12"
# 4. Add dependencies (order matters!)
uv add numpy pandas scipy matplotlib
uv add --group dev pytest mypy ruff jupyterlab
# 5. Add editable local packages (if any) -- LAST
uv add --editable ./libs/my-library
# 6. Verify
python -c "import numpy; print(numpy.__version__)"
uv pip list
# 7. Ongoing: sync after pulling changes
conda activate myproject
uv sync
What I Learned
This workflow took me several frustrating sessions to figure out. Most of the time was spent on the pitfalls above -- each one felt like a dead end until I understood what was actually happening under the hood. The Python packaging ecosystem has a reputation for being painful, and honestly, a lot of that reputation is earned. But most of the pain comes from trying to make one tool do everything.
The key insight, the one that made everything click, is simple: let each tool do what it is best at. Conda handles the things that are hard to get right -- Python versions, system libraries, compiled packages. uv handles the things that need to be fast -- dependency resolution, lockfiles, pure-Python packages. They do not step on each other because UV_PROJECT_ENVIRONMENT gives them a shared target. Once I stopped trying to find the one perfect tool and accepted that the answer was two tools with a clean boundary, everything got easier.
Now it is ten minutes to bootstrap a new project, and the environment is reproducible from pyproject.toml + uv.lock + a single conda create command. That is a massive improvement over where I started, and it lets me spend my time on the actual work instead of fighting my tools.
The Python packaging ecosystem has been the butt of jokes for years -- and not without reason. But the current generation of tools, uv especially, is genuinely better than what came before. Pairing uv with conda takes the best of both worlds and sidesteps most of the historical pain points. It is not perfect. But it works, and these days that is enough for me.
I am still figuring out the edges. Workspace support, monorepo setups, CI pipelines where conda is not available -- those are problems I have not fully solved yet. The Python packaging ecosystem keeps evolving, and uv itself is changing rapidly. What works today might need adjustment in six months.
But for the day-to-day workflow of "I need a Python environment for a scientific computing project, and I need it to work reliably," this is the most solid setup I have found. The pitfalls are real, but they are all solvable once you know they exist. And that is really what this post is about -- not the perfect workflow, but the one I have actually battle-tested enough to trust.
If you try this and hit a pitfall I did not cover, I would genuinely like to hear about it. The Python packaging space moves fast, and there are almost certainly new sharp edges forming as uv continues to evolve. What I can say is that after six months of using this setup across multiple projects, I have not gone back. And that is more than I can say for most Python packaging workflows I have tried.