Apple Silicon, Docker, and ML Environments

devops
ml
docker
Author

Ravi Kalia

Published

April 7, 2025

Apple Silicon (M1/M2/M3) and its arm64 architecture bring serious battery life and performance gains, but not without friction for machine learning workflows, especially with Docker, PyTorch, and conda.

The main issues stem from:

  • Docker: Many images are built for x86_64 (Intel/AMD), not ARM64 (Apple Silicon).
  • PyTorch: Native arm64 wheels run on the CPU; GPU support via the Metal (MPS) backend is still experimental.
  • Conda: The default Anaconda distribution is not optimized for ARM64.
  • Ecosystem: Many libraries and tools are built with x86_64 in mind, particularly those relying on Fortran, CUDA, or C.
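Before debugging any of these, it helps to confirm which architecture your Python interpreter actually reports. A minimal stdlib sketch (note that an x86_64 Python running under Rosetta 2 will report x86_64 even on an M-series machine):

```python
import platform

# On a native Apple Silicon build of Python this prints "arm64";
# an x86_64 build running under Rosetta 2 reports "x86_64".
arch = platform.machine()
print(f"Interpreter architecture: {arch}")
```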

Docker: Architecture Mismatch on Many Images

Apple Silicon uses an ARM64 architecture, while most Docker containers are built for x86_64 (Intel/AMD). This mismatch causes:

  • Emulation Overhead: Docker Desktop falls back to emulation (QEMU, or Rosetta 2 in newer releases) to run x86_64 images, slowing down builds and inference.
  • Image Compatibility Issues: Many ML containers (e.g., with CUDA) are x86_64-only.
  • CI/CD Drift: Production and CI environments typically run on x86_64 Linux, leading to potential inconsistencies.

For example, pulling the official PyTorch image typically fetches an x86_64 build that runs under emulation on Apple Silicon:

docker pull pytorch/pytorch:latest

However, things are improving rapidly. Docker supports multi-architecture (multi-arch) image manifests, and popular images increasingly ship native ARM64 variants.
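One mitigation is to pin the platform explicitly, so behaviour matches between an Apple Silicon laptop and an x86_64 CI runner. A minimal Dockerfile sketch (the base image and package choices are illustrative):

```dockerfile
# Force the x86_64 variant even on Apple Silicon (runs under emulation),
# so local builds match x86_64 CI/production exactly.
FROM --platform=linux/amd64 python:3.11-slim

# Alternatively, drop --platform and use a multi-arch base image,
# so each host pulls its native variant.
RUN pip install --no-cache-dir numpy
```

The same flag works on the CLI, e.g. `docker pull --platform linux/amd64 pytorch/pytorch:latest`.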

PyTorch on Apple Silicon

PyTorch ships native arm64 wheels for Apple Silicon, so CPU workflows run without emulation. GPU acceleration through the Metal Performance Shaders (MPS) backend is still experimental and incomplete.

Native CPU-only install

pip install torch torchvision torchaudio
  • Works for many CPU-based workflows.
  • No CUDA, so training large models is slow.

If you’re using torchvision, torchaudio, or other compiled extensions, you may need to install special ARM builds or compile from source.
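At runtime you can hedge against this by using the MPS backend when it is available and falling back to the CPU otherwise. A minimal sketch (assumes a recent PyTorch arm64 wheel is installed):

```python
import torch

# Prefer Apple's Metal (MPS) backend when present; fall back to CPU.
if torch.backends.mps.is_available():
    device = torch.device("mps")
else:
    device = torch.device("cpu")

# Small smoke test on the chosen device.
x = torch.randn(64, 32, device=device)
y = x @ x.T
print(f"device={device.type}, result shape={tuple(y.shape)}")
```

On machines without MPS support this silently runs on the CPU, which keeps the same script usable on x86_64 Linux CI.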

Conda, Miniforge & Environments

Miniforge (which now also bundles mamba) is the best conda-style environment manager on ARM:

  • Ships native arm64 (osx-arm64) builds via the conda-forge channel, unlike the default Anaconda.
  • Works well with pytorch, numpy, scipy, pandas, etc.
  • Use pip for PyTorch, conda/mamba for everything else.

Example conda env

conda create -n ml python=3.10 numpy pandas scikit-learn
conda activate ml
pip install torch torchvision
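For reproducibility, the same split (conda for the scientific stack, pip for PyTorch) can be captured in an environment.yml. A sketch, with illustrative package choices:

```yaml
name: ml
channels:
  - conda-forge
dependencies:
  - python=3.10
  - numpy
  - pandas
  - scikit-learn
  - pip
  - pip:
      - torch
      - torchvision
```

Create it with `conda env create -f environment.yml` (or `mamba env create -f environment.yml`).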

Why Windows and Linux Handle This Better

Intel/AMD (x86_64) is the default target for most open-source ML/DL tooling. On x86_64 Linux and Windows:

  • Docker images run natively in Linux containers (no emulation).
  • CUDA is fully supported with NVIDIA GPUs.
  • Ecosystem is battle-tested for x86_64.

Apple Silicon breaks these assumptions.

Summary

Apple Silicon is great for battery life and local dev, but for ML, Docker, and PyTorch workflows, expect hiccups. Use native tools (Miniforge, pip, ARM base images) and avoid relying on GPU-accelerated Docker images unless you’re on Intel/Linux. Even then, low-level dependencies built on Fortran, CUDA, and C can be a pain.