
The M5 Pro Setup: Same Mac, Different Era


A few years ago I wrote a Mac setup post called "It's Not Much, But It's Mine." It was the standard shape of a good developer laptop: Homebrew, iTerm2, Oh My Zsh, VS Code, a few quality-of-life tweaks, and nothing too dramatic.

This machine is different.

The M5 Pro with 48GB is still a MacBook. It still runs a terminal, an editor, Docker, and Git. But that description is now incomplete. The laptop is no longer just where I write code. It is where I orchestrate parallel agents, run local models, switch across cloud identities safely, and keep enough state visible that I do not deploy into the wrong account at 11pm.

That changes the setup.

The tool categories barely moved. The reason for choosing them did.


Why 48GB actually matters

I do not care about RAM as a vanity metric. I care about how much work I can hold in motion before the machine starts asking me to compromise.

For a modern engineering workflow, 48GB is not just "more Chrome tabs". It buys three different things.

1. Parallel development without constant trade-offs

The obvious one first: multiple agent sessions across separate git worktrees, with each session carrying a meaningful context window, while the rest of the machine is still usable.

That means:

  • several Claude Code or Codex sessions in parallel
  • Ghostty or tmux panes open for each stream of work
  • Docker or OrbStack running in the background
  • dashboards, docs, and PRs in the browser
  • Kubernetes, Terraform, and cloud CLIs all active at once

On 16GB, you can do this in bursts. On 24GB, you can do it until the session gets heavy. On 48GB, the machine mostly stops being the bottleneck.

2. Local models that are actually useful

The second reason is the one people still understate: 48GB makes local AI less of a novelty and more of an operational option.

You are not limited to toy models. You can comfortably serve smaller and mid-sized models locally for code, chat, embeddings, and experimentation, while still keeping your normal workstation open. And if you want a quick reality check on what fits, CanIRun.ai is a genuinely useful sanity-check tool.

At the time of writing, it lists examples like:

  • GPT-OSS 20B at roughly 10.8GB
  • Gemma 3 27B at roughly 13.8GB
  • Qwen 2.5 Coder 32B at roughly 16.4GB
  • Llama 3.3 70B at roughly 35.9GB as a much tighter fit

That is the real shift. A 48GB Mac is not just capable of calling remote APIs faster. It can host serious local inference, run evaluations, and let you choose when privacy, latency, or cost make local models the better answer.
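If you want to kick the tyres, Ollama makes this a short exercise. The model tag below is one of the mid-sized options from the list above; swap in whatever fits your budget. Port 11434 is Ollama's default API port.

```shell
# pull a mid-sized coder model and talk to it from the terminal
ollama pull qwen2.5-coder:32b
ollama run qwen2.5-coder:32b "explain git worktrees in two sentences"

# the local API is plain HTTP, so scripts and agents can use it too
curl -s http://localhost:11434/api/generate \
  -d '{"model": "qwen2.5-coder:32b", "prompt": "hello", "stream": false}'
```

The same endpoint is what local-first agent tooling points at, which is why "local inference" and "agent host" end up being the same hardware question.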

3. Small-scale model work without renting a GPU for everything

No, this is not a substitute for training frontier models. That is not the point.

But it is enough for adapter-style fine-tuning, local experimentation, quantisation tests, prompt and eval loops, embedding pipelines, and learning workflows that would otherwise require immediately jumping to cloud GPUs. That changes how fast you can iterate.

If the machine is only for writing TypeScript, 24GB is often enough. If it is a local AI workstation, a parallel agent host, and a platform engineering control surface, 48GB is the right spec.


The workflow I actually bought this for

This is the part that would not have appeared in my old setup post, because the workflow did not exist yet in this form.

The pattern is simple:

  1. split work into separate git worktrees
  2. run an agent in each isolated directory
  3. keep one pane for review, integration, and verification
  4. use the terminal as the orchestration layer
# isolated worktrees
git worktree add ../project-feature-auth feature/auth-refactor
git worktree add ../project-bugfix-api bugfix/api-rate-limiting
 
# start the terminal fabric
tmux new-session -s agents
 
# pane 1
cd ../project-feature-auth && claude
 
# pane 2
cd ../project-bugfix-api && codex

This is why the machine spec matters. Each session holds context, each worktree isolates risk, and the laptop needs enough headroom that review work does not fight with execution work.

The old setup was optimised for "I am the only process doing meaningful engineering". This one is optimised for "I am supervising several streams of engineering work at once".
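Teardown matters as much as creation, because stale worktrees are exactly the kind of hidden state this setup is trying to avoid. A self-contained sketch of the full lifecycle, using a scratch repo so the paths here are hypothetical:

```shell
# set up a scratch repo so the lifecycle is reproducible end to end
cd "$(mktemp -d)" && git init -q demo && cd demo
git -c user.email=you@example.com -c user.name=you commit -q --allow-empty -m init

# create, inspect, and tear down a worktree
git worktree add -b feature/auth-refactor ../demo-feature
git worktree list                      # main checkout plus the new worktree
git worktree remove ../demo-feature
git branch -d feature/auth-refactor    # safe: the branch still matches HEAD
git worktree prune                     # clear any stale metadata
git worktree list                      # back to a single entry
```

Running `git worktree list` at the start of a session is a cheap way to see how many streams of work are actually live.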


Foundation

The base layer is still boring, which is a compliment.

xcode-select --install
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
brew doctor

Nothing clever goes here. These come first, everything else depends on them.

One practical note: brew bundle is now built into Homebrew core. If you still have tap "homebrew/bundle" in an older Brewfile, remove it.
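The workflow itself is unchanged and worth a reminder. The file path below is just my convention:

```shell
# snapshot the current machine into a Brewfile
brew bundle dump --file=~/.Brewfile --force

# on a new machine: install everything, then verify nothing is missing
brew bundle install --file=~/.Brewfile
brew bundle check --file=~/.Brewfile
```

Keeping that file in the dotfiles repo is what makes the "two commands and done" claim at the end of this post honest.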


What goes in the Brewfile

I no longer think the most useful setup post is a 200-line package manifest pasted into a blog post. The exact list belongs in the dotfiles repo. What matters here is the shape of the stack and the decisions behind it.

The core install set looks like this:

brew "mise"
brew "tmux"
brew "ripgrep"
brew "bat"
brew "eza"
brew "fd"
brew "fzf"
brew "zoxide"
brew "atuin"
brew "git"
brew "gh"
brew "git-delta"
brew "lazygit"
brew "kubectl"
brew "helm"
brew "k9s"
brew "stern"
brew "awscli"
brew "aws-vault"
brew "azure-cli"
brew "ollama"
brew "codex"
brew "gemini-cli"
brew "chezmoi"
 
cask "ghostty"
cask "zed"
cask "visual-studio-code"
cask "orbstack"
cask "gcloud-cli"
cask "1password"
cask "raycast"
cask "arc"

Two deliberate exceptions:

Claude Code should use the native installer so it auto-updates cleanly.

curl -fsSL https://claude.ai/install.sh | bash

Terraform and Terragrunt are version-sensitive enough that they belong under mise, not as globally drifting Homebrew packages.

mise use --global terraform@latest
mise use --global terragrunt@latest

Terminal: Ghostty

The old post hedged between iTerm2 and Terminal. This setup does not hedge. Ghostty is the right terminal for this machine.

brew install --cask ghostty

Why:

  • it feels native instead of web-wrapped
  • it starts immediately
  • multi-line prompts work without fighting the app
  • it pairs cleanly with tmux
  • it stays out of the way when you have several panes open all day

The most useful non-obvious detail is that Ghostty does not support inline comments in its config. If you put # comment on the same line as a value, it is treated as part of the value.
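Concretely:

```
# a comment on its own line is fine
font-size = 13.5

background-opacity = 0.92  # NOT fine: Ghostty reads this as part of the value
```

If a setting mysteriously stops applying, an inline comment is the first thing to check.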

The full theme is in my dotfiles, but these are the settings that matter most:

font-family = "JetBrainsMono Nerd Font"
font-size = 13.5
background-opacity = 0.92
background-blur = 20
window-padding-x = 10
window-padding-y = 8
macos-titlebar-style = hidden
cursor-style = bar
shell-integration = detect
keybind = shift+cmd+t=toggle_background_opacity

The point is not the theme. The point is keeping the terminal readable, fast, and pleasant enough that it can be the centre of the workflow without becoming exhausting.


Editor choice: Zed first, VS Code when required

Zed has become the default editor because it matches the rest of the machine: fast, minimal overhead, and pleasant under constant use.

VS Code still stays installed because the ecosystem tail is real. Sometimes an extension, a remote workflow, or a project-specific expectation makes it the pragmatic choice. But it is no longer the editor I open by default.

That distinction matters. The default tool should be the one that feels light enough to open fifty times a day.


Zsh + Starship, but with less ceremony

I moved away from Oh My Zsh and Powerlevel10k.

Not because they are bad, but because I no longer want a framework-shaped shell. Starship gives me the prompt I want with fewer moving parts: one TOML file and less upgrade friction.

brew install starship
echo 'eval "$(starship init zsh)"' >> ~/.zshrc

The important operational detail is load order. If the shell initialisation order is wrong, completions become unreliable and you get lovely little errors like compdef failing on startup.

This is the part worth keeping:

# 1. completion system first
autoload -Uz compinit
# rebuild the dump when it is older than 24h, otherwise take the cached path
if [[ -n ~/.zcompdump(#qN.mh+24) ]]; then
  compinit
else
  compinit -C
fi
 
# 2. Homebrew
eval "$(/opt/homebrew/bin/brew shellenv)"
 
# 3. plugins
source $(brew --prefix)/share/zsh-autosuggestions/zsh-autosuggestions.zsh
source $(brew --prefix)/share/zsh-syntax-highlighting/zsh-syntax-highlighting.zsh
 
# 4. tool activation
eval "$(mise activate zsh)"
eval "$(starship init zsh)"
eval "$(zoxide init zsh)"
eval "$(atuin init zsh)"
source <(fzf --zsh)

The prompt itself is two-line and context-heavy. The useful bit is not the aesthetics. It is that cloud and Kubernetes context only surface when active, so the shell tells me where I am before I do something expensive or destructive.
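The conditional surfacing is Starship's own behaviour: the kubernetes module is disabled by default, and the cloud modules only render when an identity is actually set. A minimal sketch of the relevant part of starship.toml (formats trimmed; my real config carries more styling):

```toml
# kubernetes is off by default upstream; opt in explicitly
[kubernetes]
disabled = false
format = '[☸ $context \($namespace\)](cyan) '

# cloud modules only render when a project or profile is active
[gcloud]
format = '[☁ $project]($style) '

[aws]
format = '[☁ $profile]($style) '
```

The parentheses escaping is Starship's format syntax, not a typo: unescaped parentheses mark optional groups.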


mise replaced four different version managers

This is probably the single most operationally valuable change in the whole setup.

The old pattern was familiar and annoying:

  • nvm for Node
  • pyenv for Python
  • goenv for Go
  • tfenv for Terraform

Four tools. Four config formats. Four activation models. Four opportunities to forget what environment you are in.

mise replaces all of them.

curl https://mise.run | sh
echo 'eval "$(~/.local/bin/mise activate zsh)"' >> ~/.zshrc

Global versions stay simple:

mise use --global go@latest
mise use --global node@lts
mise use --global python@3.13
mise use --global terraform@latest
mise use --global terragrunt@latest
mise use --global kubectl@latest

Where it gets genuinely good is per-project activation:

[tools]
go = "1.23"
node = "22"
terraform = "1.9"
kubectl = "1.31"
 
[env]
CLOUDSDK_ACTIVE_CONFIG_NAME = "my-ai-cluster-dev"
AWS_PROFILE = "personal"
KUBECONFIG = "~/.kube/personal.yaml"

Now changing directory can switch:

  • language and CLI versions
  • cloud identity
  • Kubernetes config
  • Terraform context

That means the machine becomes context-aware instead of depending on memory and luck.

This is one of the big themes of the entire setup: fewer hidden states, more visible states.
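The visible-state point is literal: mise will not silently load a project's config. The first time you enter a repo carrying a mise.toml it asks you to trust it, and you can always ask what is active. The project path below is hypothetical:

```shell
cd ~/code/some-project      # a repo carrying a mise.toml
mise trust                  # opt in to this project's config, once
mise ls --current           # exactly which tool versions are active here
mise env                    # the [env] vars mise is exporting right now
```

`mise ls --current` before a deploy is the version-manager equivalent of looking both ways.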


Signed commits with separate SSH keys

SSH signing is the right answer over GPG for most developers now.

Less ceremony. Less agent pain. Less keychain nonsense. GitHub and GitLab both support it properly. More importantly, it is easy enough that you will actually keep it configured.

The setup principle is simple:

  • one SSH key for authentication
  • one SSH key for commit signing

That separation matters because auth keys rotate for operational reasons. Signing keys should stay stable for trust and history.
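Generating the pair is two commands. The filenames are my convention, not a requirement:

```shell
# authentication key: rotates when operations demand it
ssh-keygen -t ed25519 -C "you@example.com auth" -f ~/.ssh/id_ed25519

# signing key: stays stable so old commits keep verifying
ssh-keygen -t ed25519 -C "you@example.com signing" -f ~/.ssh/id_ed25519_signing
```

Upload the first to GitHub as an authentication key and the second as a signing key; they are separate key types in the UI.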

The minimum Git config looks like this:

git config --global gpg.format ssh
git config --global user.signingkey ~/.ssh/id_ed25519_signing.pub
git config --global commit.gpgsign true
git config --global tag.gpgsign true
git config --global gpg.ssh.allowedSignersFile ~/.config/git/allowed_signers

And the allowed_signers entry should be explicit:

you@example.com namespaces="git" ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAA...

That namespaces="git" part is not decoration. It makes the purpose of the key unambiguous.
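With the allowed_signers file in place, verification is local and cheap, so you can prove the chain works before trusting it:

```shell
# verify the most recent commit against allowed_signers
git verify-commit HEAD

# or see signature status inline while reading history
git log --show-signature -1
```

If verification fails here, it will fail on the forge too, and this is the faster place to find out.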

If you keep multiple keys loaded, make the SSH client explicit too:

Host github.com
  HostName github.com
  User git
  IdentityFile ~/.ssh/id_ed25519
  IdentitiesOnly yes
  AddKeysToAgent yes
  UseKeychain yes

I prefer setups that survive rotation and scale without becoming mysterious. This is one of them.


The Rust CLI stack is not aesthetic, it is ergonomic

I still replace a lot of standard Unix tools:

brew install ripgrep bat eza fd fzf zoxide git-delta lazygit atuin

And yes, the aliases are predictable:

alias cat='bat'
alias ls='eza --icons --git'
alias ll='eza -la --icons --git'
alias lt='eza --tree --icons --git'
alias grep='rg'
alias find='fd'
alias cd='z'

But the point is not "Rust tools are cool". The point is reducing friction in the commands I use hundreds of times a day.

Two tools matter more than the others:

atuin turns shell history into an actual database, which means finding the exact kubectl, gcloud, or aws command from three weeks ago stops being a memory test.

git-delta makes review faster and less tiring, which matters more in an agent workflow because you spend more time reviewing generated changes.

[core]
    pager = delta
 
[delta]
    navigate = true
    side-by-side = true
    syntax-theme = Catppuccin-mocha

The self-documenting shell is still worth keeping

This idea survived from the previous setup because it solves a real problem.

I keep a cheat function that reads my own ~/.zshrc, extracts aliases and functions tagged with #@ category: description, and renders them as a searchable reference.

cheat
cheat k8s
cheat aws
cheat git

The tagging looks like this:

#@ k8s: get pods all namespaces
alias kgpa="kubectl get pods --all-namespaces"
 
#@ util: decode k8s secret
kdecode() { ... }

This matters because shell setups accumulate folklore. Six months later you know you solved a problem before, but not what you named the alias or function. cheat turns your shell into a tool that explains itself.
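The full function lives in the dotfiles, but a minimal sketch shows the idea. This is not my exact implementation (the real one does colours and columns); the CHEAT_FILE override here is just to make the sketch testable:

```shell
# cheat: render "#@ category: description" tags as a searchable reference
cheat() {
  local rc="${CHEAT_FILE:-$HOME/.zshrc}"
  # pair each tag line with the alias/function line that follows it,
  # then optionally filter by category
  grep -A1 '^#@' "$rc" | grep -v '^--$' | paste - - |
    sed 's/^#@ *//' |
    { if [ -n "$1" ]; then grep "^$1:"; else cat; fi; }
}
```

`cheat k8s` then prints every tagged k8s entry next to the command it documents.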

The companion function is ctx, which prints the active platform context in one place:

ctx
 
  ☸️  K8s Context:   gke_my-ai-cluster-dev_europe-west2_main
  ☸️  K8s Namespace: platform
  ☁️  GCP Account:   emre.cavunt@mygcpaccount.com
  ☁️  GCP Project:   cluster-ai-dev
  🔶 AWS Profile:   personal
  🔷 Azure Sub:     none
  🏗️  TF Workspace:  default

That one command has prevented enough context mistakes that I now consider it mandatory.
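A stripped-down sketch of ctx, assuming the usual CLIs. My real version prints more fields; the shape is what matters, and each probe degrades to "none" when a tool is absent or unconfigured:

```shell
# ctx: one glance at every piece of platform context before acting
ctx() {
  echo "K8s Context:   $(kubectl config current-context 2>/dev/null || echo none)"
  echo "GCP Project:   $(gcloud config get-value project 2>/dev/null || echo none)"
  echo "AWS Profile:   ${AWS_PROFILE:-none}"
  echo "Azure Sub:     $(az account show --query name -o tsv 2>/dev/null || echo none)"
  echo "TF Workspace:  $(terraform workspace show 2>/dev/null || echo none)"
}
```

Bind it to a prompt hook or run it manually; either way it costs seconds and saves incidents.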

The full shell lives in my dotfiles repository.


The machine did not get more complicated, the job did

Xcode CLI tools are still there. Homebrew is still the foundation. ssh-keygen -t ed25519 is still the first move. rg is still muscle memory.

What changed is the nature of the work.

The terminal is no longer just where commands happen. It is where context is surfaced, agents are orchestrated, cloud identities are constrained, local models are run, and review happens in parallel with execution.

That is why this setup looks different from my previous one, even when some of the tool names are the same.

The old machine was a development laptop.

This one is a workstation for agentic engineering.


Dotfiles

Everything in this post is managed through chezmoi, so reproducing the setup on a new machine is deliberately boring:

brew install chezmoi
chezmoi init --apply https://github.com/emrecavunt/dotfiles

Two commands, then tweak whatever is personal.