Sunday, May 10, 2026

Using the X API with Python in 2026: No Free Tier, Auth Traps, and the tweepy v1.1 + v2 Split

This article is for developers looking to add X (formerly Twitter) auto-posting to their personal blog or bot. As of May 2026, the X API has no free tier, and there are two authentication traps that aren't well documented. Here's everything I ran into while connecting my own system, including the tweepy v1.1 / v2 split that's still unavoidable today.

What you'll be able to do

  • Auto-post to X with text and images using Python + tweepy
  • Understand the 2026 X API pricing (no free tier, pay-per-use)
  • Fix 401 and 402 errors when they hit you
  • Estimate your monthly cost before committing

What you need

  • tweepy (Python X API wrapper)
  • Python 3.11+
  • An X Developer Portal account (credit card required)

The state of X API in 2026

Short version: there is no free tier.

Older tutorials confidently say "the free tier works fine for personal bots." That's no longer true. When you open the X Developer Portal today, the only option is the Basic plan with pay-per-use billing.

Pricing

  • $0.20 per tweet with a URL (tweets without URLs are billed differently — cheaper)
  • The monthly Spend Cap is unlimited by default — if you don't set one, charges can accumulate without any automatic ceiling
  • Hit the cap and API calls are blocked until the next billing cycle

Set a Spend Cap before you start. Without one, there's no automatic stop — charges just keep running.

After setting a cap, I hit it on my 4th test tweet:

Your enrolled account has reached its billing cycle spend cap.
API requests will be blocked until the next cycle begins on 2026-06-10.

Go to Developer Console → Billing → Spend limit to set or adjust your cap. You can raise it immediately if you hit it.

Gotcha 1: The 401 trap (regenerate your token after initial setup)

I created the app with "Read and Write" permissions from the start — no permission change involved. Yet every API call returned 401. The token showed as valid in Developer Portal, but requests kept getting rejected.

The fix: Regenerate your Access Token & Secret once after initial setup. The root cause isn't entirely clear, but a fresh token resolved it immediately.

Fix

  1. Developer Portal → App → Keys and tokens
  2. Click "Regenerate" on Access Token & Secret
  3. Update your code with the new token

If you're getting 401 right after setup, regenerating the token is the quickest thing to try. It's not documented prominently anywhere.

Gotcha 2: The 402 trap (no credit balance)

After regenerating the token, the next call returned 402:

Your enrolled account does not have any credits to fulfill this request.

Since there's no free tier, a zero credit balance blocks requests. Add a small top-up in Developer Console and it resolves immediately.
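
If you want your bot to fail loudly instead of silently when one of these traps fires, you can catch them in code. Here's a minimal sketch, assuming current tweepy error classes: 401 maps to tweepy.errors.Unauthorized, while 402 has no dedicated class, so it's read off the response attached to the generic HTTPException.

import tweepy
from tweepy.errors import HTTPException, Unauthorized

def post_or_explain(client: tweepy.Client, text: str) -> str | None:
    """Create a tweet; translate the two auth traps into actionable hints."""
    try:
        return client.create_tweet(text=text).data["id"]
    except Unauthorized:
        # 401 right after setup -> regenerate the Access Token & Secret
        print("401 Unauthorized: regenerate your Access Token & Secret in the Developer Portal.")
    except HTTPException as e:
        if e.response is not None and e.response.status_code == 402:
            # 402 -> no credit balance on the pay-per-use plan
            print("402 Payment Required: add credit under Developer Console -> Billing.")
        else:
            raise
    return None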

Implementation: tweepy v1.1 vs v2

To post a tweet with an image right now, you need both versions of the tweepy API simultaneously.

  • Tweet posting: tweepy v2 Client (the example below authenticates it with OAuth 1.0a user-context keys)
  • Image upload: tweepy v1.1 API (v2 has no media upload endpoint)

import tweepy

# ---- Upload image (v1.1) ----
auth = tweepy.OAuth1UserHandler(
    "{API_KEY}", "{API_SECRET}",
    "{ACCESS_TOKEN}", "{ACCESS_TOKEN_SECRET}"
)
api = tweepy.API(auth)
media = api.media_upload(filename="/path/to/image.png")
media_id = str(media.media_id)

# ---- Post tweet (v2) ----
client = tweepy.Client(
    consumer_key="{API_KEY}",
    consumer_secret="{API_SECRET}",
    access_token="{ACCESS_TOKEN}",
    access_token_secret="{ACCESS_TOKEN_SECRET}"
)
response = client.create_tweet(
    text="Your post title\nhttps://example.com/your-post",
    media_ids=[media_id]
)
print(response.data["id"])

Trying to do everything with v2 alone will fail — there's no media upload endpoint in the v2 API yet. Upload with v1.1, grab the media_id, pass it to v2 create_tweet.

Cost estimate

At $0.20/tweet with URL:

  • 3 posts/week × 4 weeks = 12/month → ~$2.40/month
  • 3 posts/week × 5 weeks = 15/month → ~$3.00/month

$3/month isn't a lot in absolute terms, but think about what you're actually getting. X auto-posting doesn't generate new content, and it doesn't unlock any extra API features. It's purely a distribution cost — paying to push your existing posts to a wider audience.

That math works out if the X traffic converts to something: affiliate clicks, ad impressions, newsletter signups. If your blog isn't monetized yet, you're essentially paying $3/month indefinitely with no return. Before setting this up, ask whether you have a revenue path that X traffic could feed into.

If you post several times a day, costs can climb to $20-30/month, so factor that in before you commit.
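
If you want to sanity-check your own volume before committing, the arithmetic is simple enough to script. A minimal sketch, assuming the flat $0.20 price above applies to every tweet you send:

def monthly_cost(tweets_per_month: int, price_per_tweet: float = 0.20) -> float:
    """Rough spend estimate at a flat per-tweet price."""
    return round(tweets_per_month * price_per_tweet, 2)

print(monthly_cost(12))   # 3 posts/week x 4 weeks -> 2.4
print(monthly_cost(15))   # 3 posts/week x 5 weeks -> 3.0
print(monthly_cost(150))  # ~5 tweets/day          -> 30.0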

Related items

  • Raspberry Pi 5 4GB (check on Amazon) — if you want your posting bot running 24/7 on low power

FAQ

Q. Is there any way to use X API for free?

A. No. As of May 2026 the free tier no longer exists, so you'll need a credit card and a small top-up. Keeping post volume low (under 20/month) keeps costs under $5.

Q. My 401 error won't go away

A. If you're getting 401 after initial setup, regenerate your Access Token & Secret: Developer Portal → App → Keys and tokens → Regenerate. This works regardless of whether you changed app permissions.

Q. Do tweets without images also cost $0.20?

A. Yes, image presence doesn't affect the price; what matters is whether the tweet contains a URL. Tweets without a URL are billed at a lower rate than the $0.20 charged for URL-bearing tweets.

Q. Can I use the requests library instead of tweepy?

A. Yes, you can use raw requests with OAuth 1.0a. tweepy is recommended because it handles auth boilerplate and is actively maintained.
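
For reference, a text-only post without tweepy might look like the sketch below. It assumes the v2 create-tweet endpoint is still POST https://api.twitter.com/2/tweets and that the requests-oauthlib package is installed; the placeholders are your own keys.

import requests
from requests_oauthlib import OAuth1  # pip install requests-oauthlib

auth = OAuth1("{API_KEY}", "{API_SECRET}", "{ACCESS_TOKEN}", "{ACCESS_TOKEN_SECRET}")

# Text-only tweet via the v2 endpoint (media upload would still need the v1.1 endpoint)
resp = requests.post(
    "https://api.twitter.com/2/tweets",
    json={"text": "Your post title\nhttps://example.com/your-post"},
    auth=auth,
)
resp.raise_for_status()
print(resp.json()["data"]["id"])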

※ The steps and code in this article were verified in May 2026. X API pricing and tweepy's behavior may change. If something breaks, let me know in the comments.

Summary

The X API in 2026 is pay-to-play. The "free bot" era is over. But for a personal blog announcing a few posts per week, $3/month is manageable. The two auth traps (the post-setup 401 that a token regeneration fixes, and the zero-credit 402) aren't well documented, so hopefully this saves you the 30 minutes I lost. And yes, you still need both tweepy v1.1 and v2 for image tweets.

If this helped, share it on X — the irony won't be lost on me.

App by this blog's author

I built an iOS reading log app called My Bookstore, available on the App Store. If you want a simple way to track your reading, give it a try.

View on App Store →


※ This article is part of an automated publishing experiment using Claude Code.

Friday, May 8, 2026

I Built a Hub-Worker Setup from Scratch with the CLI — Then Compared It to the Managed Agents API

Anthropic released the Managed Agents API, so I put it side by side with my own Hub-Worker setup, and the architecture was almost identical. I'd been running 8 Claude Code CLI instances in parallel under tmux, wiring everything together by hand, only to find it matched Anthropic's official infrastructure. A genuinely strange feeling.

This article compares my self-built setup against these three features:

  1. Managed Agents API — a cloud platform that manages agent definitions, environments, sessions, and events end-to-end
  2. Multiagent sessions — a multi-agent execution layer where a Coordinator orchestrates multiple Workers in parallel or in sequence (Research Preview)
  3. Dreams — an async job that reads past session logs and automatically reorganizes and rebuilds a memory store (Research Preview)

My self-built setup

Quick overview of my current setup for reference — full details in this article.

  • 8 Claude Code CLI instances managed in tmux sessions
  • Hub instance decomposes tasks and delegates to Worker instances via GitHub Issues
  • Worker triggers are tmux send-keys-based scripts
  • Governance rules are written down in POLICY.md, which each instance references autonomously

Self-built Hub-Worker setup

  User (takeyasu)
    ▼ tmux send-keys
  Hub (Claude Code CLI)  (tmux: cl_orchestrator, governance via POLICY.md)
    ▼ GitHub Issues + tmux send-keys trigger
  Workers: BlogGen / ImgGen / ClaudeChat / … × 5
    ▼ read/write
  WSL2 filesystem / Git repositories

Managed Agents API — the designs turned out almost identical

My first thought reading the Managed Agents API docs was "wait, I built this." Mapping the core concepts against my own setup:

Managed Agents API concept | Self-built equivalent
Agent (model, system prompt, tool definitions) | Each CLI instance, with its role defined via CLAUDE.md + settings.local.json
Environment (container, packages, network config) | WSL2 + per-repo .venv and path config
Session (a running agent instance executing a task) | A Claude Code process inside a tmux session
Events (communication between app and agent) | Writing to GitHub Issues + tmux send-keys triggers

Managed Agents API setup

  Your App
    ▼ Events API (SSE)
  Session (primary thread): Coordinator Agent
  (defines model / system prompt / tools / MCP)
    ▼ session threads (max 25), 1-level delegation only
  Worker Agent 1 / Worker Agent 2 / Worker Agent 3 / … up to 20 types
    ▼ read/write
  Shared container filesystem + Memory Store (persistent memory)

The correspondence table and diagrams line up almost perfectly. The implementation differences, however, are significant. Pros & Cons:

Dimension | Managed Agents API | Self-built (Claude Code CLI)
Out-of-the-box tools | Must define everything yourself | Bash/Read/Write/MCP all come pre-loaded ✅
Linux ecosystem access | Requires separate design | Can autonomously run apt install and config changes ✅
Observability | Real-time visibility into all threads via event stream ✅ | Watch tmux output manually or via Monitor
Infrastructure ops | Anthropic-managed; no environment maintenance needed ✅ | WSL upkeep required, but restorable from git
Cost | Per-token API billing; risk of costs spiking under heavy load | Fixed monthly Max plan; flat rate no matter how heavy the use ✅
Migration cost | CLI assets can't be ported; full rewrite required | None (status quo) ✅

Multiagent sessions — problems the official version solves

Reading the Multiagent sessions docs clarified several places where my self-built setup falls short.

Dropped-ball detection. Each agent runs in a session thread — a context-isolated event stream — and state changes surface as events like session.thread_status_idle / session.thread_status_running that aggregate into the primary thread. In my self-built setup, a state where nobody throws the next ball fires nothing at all — that was the biggest blind spot. The official version solves this structurally.

Centralized tool confirmations. When a child agent is waiting for approval, requires_action is cross-posted to the primary thread. In my setup, confirmation requests are scattered across instances and easy to miss.

Using {"type": "self"} also lets the Coordinator fan out parallel copies of itself. Thread limits, delegation depth, and access requirements are summarized below.

Dimension | Multiagent sessions | Self-built (tmux + GitHub Issues)
Dropped-ball detection | Structurally detected via thread_status_idle ✅ | A state where nobody throws is silent; requires manual watching or polling
Tool confirmation aggregation | requires_action centralized in primary thread ✅ | Scattered per instance; easy to miss
Context continuity | Threads persist; prior context carries over on follow-up instructions ✅ | Same, as long as the tmux session stays alive ✅
Delegation flexibility | Fixed at 1 level (depth enforced by the platform) | Flexible; change it by updating POLICY.md ✅
Scale ceiling | Hard cap: 25 threads, 20 agent types | Unlimited within hardware and Max plan constraints ✅
Access requirements | Research Preview; separate access request required | Available right now ✅

Dreams — automated memory curation

Dreams is an async batch feature that reorganizes and rebuilds a memory store by reading past session logs (Research Preview, access request required). It never modifies the input store, so you can discard the output if you don't like it.

Key specs:

  • Input: existing memory store (required) + up to 100 past sessions (optional)
  • The instructions parameter lets you direct the curation (up to 4,096 characters — e.g. "focus on coding preferences, ignore temporary debug notes")
  • Supported models: claude-opus-4-7 / claude-sonnet-4-6
  • Billing: standard API token billing, proportional to the number and length of input sessions
  • Requires two beta headers: managed-agents-2026-04-01,dreaming-2026-04-21

In my self-built setup, memory updates are manual — and as the number of instances grows, it becomes impossible to keep up. The fact that the docs explicitly say "you can start from an empty store and feed only session logs" means incremental adoption is genuinely practical.

Dimension | Dreams | Self-built (manual memory management)
Execution trigger | Runs automatically on a cron or on demand; works without the user present ✅ | Only when I ask "please learn this"
Scale | Same flow regardless of instance count ✅ | Can't keep up once instances multiply
Safety | Input store untouched; review output before swapping in ✅ | Direct overwrites; unintended edits are hard to detect
Cost | Scales with session volume; risk of costs spiking under heavy load | Stays within the Max plan (no extra cost) ✅
Time to completion | Minutes to tens of minutes (async job) | Immediate, within the conversation ✅
Access requirements | Research Preview; extra headers + access request required | Available right now ✅

What building it yourself taught me

Having built it from scratch meant that when I read the Managed Agents API docs, the "why behind the design" clicked immediately. The 1-level delegation cap, thread persistence, aggregation into the primary thread — every one of those is a direct answer to a problem I hit while building my own version.

"Being able to use generative AI" is becoming table stakes. The next differentiator, I think, is — drawing responsibility boundaries between agents, context design, failure detection and recovery design — experience that can only be built by assembling something, breaking it, and learning from it. Same reason the early cloud architects who stood out weren't the ones who "knew AWS" — they were the ones who could explain how on-prem failure patterns translate into cloud avoidance strategies.

For now, with my self-built setup in place, the flexibility and flat-rate cost outweigh the cost of migrating to the API. But when the time comes to add significantly more Workers or normalize 24-hour unattended operation, that's when I'll seriously consider the move.

Wrap-up

Put a self-built CLI Hub-Worker setup next to the Managed Agents API and the designs are nearly identical. Where Multiagent sessions clearly wins: structural dropped-ball detection and centralized tool confirmations. Dreams is the answer to a memory management scaling problem that manual approaches can't keep up with — and the ability to start incrementally from an empty store is a big deal.

If this was useful, I'd love it if you shared it on X (Twitter).

FAQ

Q. Is the Managed Agents API available right now?

A. Yes — it's enabled by default for API accounts (beta header managed-agents-2026-04-01 required). Multiagent sessions and Dreams are Research Preview and require a separate access request.

Q. Is migrating from a self-built setup to the Managed Agents API painful?

A. CLI assets (Bash scripts, MCP servers, policy files) can't be ported, so it's effectively a full rewrite. If you migrate, treat it as a new project and redesign from scratch.

Q. Can I use the Managed Agents API on a Max plan?

A. The Managed Agents API is pay-per-token. Usage costs stack on top of the Max plan subscription. For Dreams, since billing scales with session volume, test with a small batch before ramping up to production scale.

Note: Information in this article is current as of May 2026. The Managed Agents API is a beta feature and specs may change. See the Anthropic official docs for the latest.

App by the author of this blog

I made an iOS reading management app called My Bookstore. Simple bookshelf management — give it a try.

View on App Store →


Note: This article is part of an automated blog update experiment using Claude Code.

Tuesday, May 5, 2026

How Claude Code Instances Consulting Each Other Spontaneously Grew into a 10-Person Team in 3 Days [Hub-Worker Architecture · 2026]

Illustration of forest animals performing in an orchestra — a metaphor for the Hub-Worker team

This article is for engineers running Claude Code across multiple projects. I assigned roles — Hub, Worker, Analyzer, Consultant — to instances of Claude Code, had them consult each other as they worked, and within 3 days a 10-person-scale team had emerged spontaneously. Partway through, the Hub itself proposed "we should add someone dedicated to observation," and when I handed it an empty repository, the team built out the entire contents — I'll walk through the whole story. For the record, I'm the tenth member.

What you'll learn from this article

  • How to design a Hub-Worker architecture with Claude Code spread across 9 repositories
  • The full picture of how the Hub's Claude Code issues instructions to Worker Claude Code instances
  • The story of the Hub proposing to add a 9th team member on its own
  • What's still missing before it becomes fully autonomous (Skynet-style)

What I used

  • Claude Code (Anthropic CLI) — acting as Hub, Worker, Analyzer, and Consultant. I used the Max plan heavily during build-out and plan to move to Pro once it's stable
  • GitHub — Issues for task management, Actions for automated observation
  • DuckDB — aggregating metrics across all team members
  • GitHub Actions — daily data fetching, weekly report generation

Why I built this: the limits of running everything separately

The trigger was a single thought: "I've completely lost track of what's going on."

I was developing an iOS app (My Bookstore), a BTC auto-trading bot, an automated blog publishing engine, and an image generation runtime — each in its own separate Claude Code session. It was comfortable at first, but as the number of repositories grew, I stopped being able to track "how far along is that bug fix" or "what's the status of the external API integration."

On top of that, modernizing 7-year-old code with Claude Code had left external API connectivity in an unclear state.

The idea: let Claude Code handle the instruction-giving too

That's when I thought: "Let Claude Code handle the task of giving instructions to Claude Code."

Without overthinking it, I set up a Hub repository and asked Claude Code itself to figure out:

  • Which external APIs each function area actually needs to sort out
  • How Claude Code instances should communicate with each other

The result was a Hub-Worker architecture built entirely on GitHub features I was already using — it emerged naturally.

The full Hub-Worker team

Role | Domain | Responsibilities
Hub | Orchestration | Cross-function coordination, roadmap management, dispatch ticketing, POLICY maintenance
Analyzer | Analytics | Aggregates GA4 / Search Console / AdSense / Amazon data and generates weekly reports
Worker 1 | iOS App | Reading management app development and App Store distribution
Worker 2 | Crypto Trading | BTC auto-trading bot development and operation
Worker 3 | Image Generation AI | Local GPU SDXL image generation runtime development and maintenance
Worker 4 | Blog Generation AI | Automated article generation and Blogger publishing pipeline development and maintenance
Worker 5 | Crypto Chart Generation | Automated BTC price chart generation (prototype for future use)
Worker 6 | Virtual Blogger | Autonomous blog posting by an AI persona (prototype for future use)
Consultant | General Advice | Cross-cutting technical consultation and design reviews; a general-purpose chat AI
Owner (me) | HW/NW Infrastructure | Physical machines, networking, and external API permissions management

The Hub is not a "Claude Code that writes code." It focuses entirely on ticketing, roadmap management, and cross-team coordination — implementation is delegated to the Workers. Let that boundary collapse and the whole division of labor falls apart.

What the Claude Code conversations look like from the outside

From a user's perspective, there is exactly one visible signal: tickets accumulate in GitHub Issues.

The Hub files an Issue in a Worker's repository, the Worker implements it and closes the Issue with a PR. If needed, the Worker files a feedback Issue back in the Hub's repository. This back-and-forth leaves a complete history on GitHub.

What's interesting is that Workers autonomously push back with opinions when they spot a design problem. Cross-repository proposals arrive — "could you build a log merge tool on the trading side so chart generation just receives a single file?" — along with re-architecture proposals: "this approach won't scale." The internal mechanics are invisible, but following the Issue thread on GitHub is enough to read the team's conversation.

The 3-day log

Day 1: POLICY.md became the foundation for coordination

The original approach was to drop files into each Worker's repository for them to pick up. It was retired the same day. Simple reason: "Workers don't look there." Switched to GitHub Issues.

But switching to Issues alone didn't fix things — for a while, the format and location were still inconsistent and nothing worked. The turning point was having the Hub create a POLICY.md — a coordination document defining how to write tickets, how to receive them, and how to route feedback — and then telling every Worker "please read POLICY.md and act accordingly." After that, Workers started returning feedback Issues simultaneously and coordination started flowing.

Originally I was manually notifying each Worker when tickets landed — going around saying "hey, check your queue." That's since been automated. Each Worker has a dedicated tmux session name (cl_blog, cl_ios, etc.), and after filing an Issue the Hub's Claude Code runs tmux send-keys -t <session-name> "Check GitHub tickets and action anything you can" Enter to push the trigger message directly into the Worker's Claude Code session. If the session isn't running, it starts it first with tmux new-session -d, then sends the message. No human relay needed.
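
For illustration, here's roughly what that trigger could look like as a small script. This is a hypothetical sketch, not the Hub's actual code, and it leaves out launching the claude CLI inside a freshly created session:

import subprocess

TRIGGER = "Check GitHub tickets and action anything you can"

def notify_worker(session: str, message: str = TRIGGER) -> None:
    """Push a trigger line into a Worker's tmux session, creating the session if needed."""
    # tmux has-session exits non-zero when the session doesn't exist
    exists = subprocess.run(
        ["tmux", "has-session", "-t", session], capture_output=True
    ).returncode == 0
    if not exists:
        subprocess.run(["tmux", "new-session", "-d", "-s", session], check=True)
    # Equivalent of: tmux send-keys -t <session> "<message>" Enter
    subprocess.run(["tmux", "send-keys", "-t", session, message, "Enter"], check=True)

notify_worker("cl_blog")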

Day 2: cross-team tasks and the invention of a "notification" channel

The chart generation Worker proposed: "have the trading Worker build a log merge tool, and standardize so chart generation just receives a single file." This spans two teams — a cross-cutting task.

The Hub adopted it, filed a "build log merge tool" request Issue to the trading Worker, and the chart generation Worker waited for that to complete before implementing its side. A dependency chain, handled.

Then the image generation Worker independently implemented a new feature (IP-Adapter support) without prior Hub approval and reported it after the fact. To give "unsanctioned implementations" somewhere to land, I added a notification-only ticket type to POLICY — N-class. The rule: if you implemented without waiting for Hub approval, just file an N-class ticket as a post-hoc report and the Hub reconciles it afterward.

Day 3: the Hub proposed adding a new team member on its own

This is the part of this article I most wanted to share.

It started with a morning conversation. "Can Claude Code see Firebase Analytics data?" "Looking at it manually is pointless. The value is in cross-service analysis." After that exchange, the Hub's Claude Code revised its PLAN.md and proposed:

"We need a dedicated member to own the observation loop. I propose creating an Analytics role (Analyzer) — a Claude Code instance that aggregates data from GA4, Search Console, AdSense, and Amazon into DuckDB and generates weekly reports."

All I did was create an empty repository on GitHub. The Hub doesn't have repository create/delete permissions — that one step required a human. It's technically automatable. I just haven't been ready to hand over that authority yet.

After that, the Hub filed a batch of request Issues to the Analyzer, and the Analyzer and existing Workers collaborated to build out the entire repository from scratch. GA4 / Search Console / AdSense / Amazon fetchers, a DuckDB schema, 8 tests, GitHub Actions workflows — all of it implemented through Claude Code instances consulting with each other.

I learned that GitHub Actions was being used when I read this article, which the Hub and blog generation AI auto-published together.

New repo concept → empty repo created (human) → repo initialized and implemented (Claude Code) → credentials set up (human) → working in a day. Claude Code can't log into external APIs, so Service Account issuance and OAuth token setup always require a human hand.

The 3-day numbers

  • Final repository count: 9
  • Total dispatches: 22 (including 2 cross-team tasks)
  • Hub feedback Issues: 7, all closed
  • POLICY.md revisions: 5 (over 3 days)
  • Fastest close: image generation Worker ticket (dispatch → closed with PR in ~15 minutes)
  • Heaviest dispatch: Analyzer setup (new repo → DuckDB + 4 fetchers + Actions + 8 tests → under 1 day)

What worked and what didn't

What worked

  • Workers bundling multiple Issues into one PR: an implicit shared understanding emerged — "group by implementation coherence" — naturally
  • Workers making unsanctioned design changes: the Search Console Worker changed the auth method to OAuth and pushed directly to main. It worked out fine
  • Notification tickets (N-class) making unsanctioned implementations visible: implement without waiting for Hub → post-hoc report → Hub reconciles afterward. This cycle runs smoothly
  • Hub proposing team expansion autonomously: the Hub sensed the need for observation and proposed the Analyzer. When handed an empty repo, Claude Code instances built the whole thing together

What didn't work

  • File-drop convention: retired on day 1. Dropping files into each Worker repo for pickup turned out to be a design error — "placed where Workers don't look"
  • Switching to Issues alone wasn't enough. Even after moving to GitHub Issues, the format and location were inconsistent for a while. Coordination only clicked once everyone read POLICY.md
  • Hub missing status updates. A Worker closed a ticket, then a user added more work in the comments — and the Hub missed it
  • The ball fell between everyone. The virtual blogger reported a bug in the image generation AI. The Hub classified it as "informational" and forwarded it to the image generation AI as FYI — with no owner assigned. It floated, unaddressed, until I demanded "whose ball is this?" — only then did the Hub refile it as a top-priority ticket and the image generation AI scrambled to investigate. Whether that means "Claude Code still has a long way to go" or "it's accurately reproducing the classic blame-passing that happens in human organizations" is genuinely hard to judge

What the team structure looks like

If I had to put the structure that emerged over 3 days into words: a 10-person engineering team materialized spontaneously. 9 Claude Code instances, and I'm the 10th. What I'm left doing:

  • Product owner (judgment calls on direction)
  • Security owner (external API permissions)
  • HW/NW management (physical resources)

What's still needed before it goes fully autonomous

The Analyzer incident showed the Hub "identifying what's needed and proposing it." Working through my remaining responsibilities one by one reveals the path toward full autonomy.

Notification relay is already automated. After filing an Issue, the Hub's Claude Code pushes a trigger message directly into each Worker's tmux session via tmux send-keys. If the session is down, it starts it first. No more manually going around saying "check your tickets."

HW/NW management can be eliminated by moving to AWS. Migrate the physical machines to the cloud and the entire physical layer — power, network — disappears with it.

What remains is permission delegation. Repository creation and deletion, external API permission changes — where to draw the line on what to trust the Hub with is not a technical question, it's a trust question. It's now a matter of deciding how much I trust Claude Code and how much I'm willing to hand over.

FAQ

Q. What project scale suits the Hub-Worker architecture?

A. It fits when you have 3–4 or more repositories and want to track cross-cutting progress at a glance. For 1–2 repos, a single Claude Code session is sufficient.

Q. Where should POLICY.md live?

A. One file at the root of the Hub repo is easiest to maintain. In each Worker repo's CLAUDE.md, one line saying "see POLICY.md for shared rules" is enough — that prevents drift.

Q. Can I automate this without GitHub Actions?

A. Yes, cron + Claude Code headless execution achieves the same thing. That said, GitHub Actions has the advantage of managing credentials through Secrets, which is why the Hub and Analyzer chose it on their own.

Note: The setup and procedures in this article are based on real operation as of May 2026. Claude Code version updates or GitHub spec changes may affect how things work. If something doesn't work, please let me know in the comments.

Wrap-up

A Hub that I set up "without overthinking it" grew, over 3 days, into a team with 9 repositories, 22 dispatches, and a weekly automated observation loop. Midway through, the Hub itself proposed adding a new team member, and when handed an empty repository, the team built out the entire contents. Getting the design perfect upfront mattered far less than trusting the Hub and updating POLICY every time friction appeared. Nine Claude Code instances, and I'm the tenth — I gradually noticed I'd become the team member who does the least.

If this article was helpful, I'd love it if you shared it on X (Twitter).

App by the author of this blog

I made an iOS reading management app called My Bookstore. Simple bookshelf management — give it a try.

View on App Store →


Note: This article is part of an automated blog update experiment using Claude Code.

Saturday, May 2, 2026

Run Claude Code from Your Smartphone with tmux + Tailscale + Termius [Hub-Centric · 2026 Edition]

This article is for engineers who want to keep developing on their home machine cluster via Claude Code from a smartphone while out and about. Combining tmux + Tailscale + Termius, I turn an old laptop into a connection hub and Claude Code home base, then route out to a GPU machine, an iOS dev Mac, and an always-on Raspberry Pi as needed. The article also covers running separate tmux sessions per project.

Work desk with a smartphone open to a tmux terminal SSH'd into a laptop

What you can do with this

  • Connect to a Claude Code session running at home from a smartphone (iPhone or Android) while out and about
  • Use the hub laptop to route between a GPU machine, Mac, and Raspberry Pi based on what each project needs
  • Keep each project in its own independent tmux session so image generation, iOS dev, and trading bot work never mix together
  • tmux persistent sessions mean work keeps running even if your phone screen goes off
  • All you need is one old laptop + a smartphone. No VPS monthly fee, no port forwarding required

3 approaches compared

Where you put Claude Code determines how the architecture behaves. The hub-centric approach in this article concentrates Claude Code in one home-base machine and offloads heavy processing and OS-specific dev work to peripheral machines via ssh. It sits between the single-machine-does-everything approach and the distributed approach where each machine runs its own Claude Code instance.

Setup | Hub-centric (this article) | Single Mac does everything | Distributed (Claude Code on every machine)
Machines needed | N peripheral machines + 1 hub | 1 high-spec Mac | N task-specific machines
Stability under multiple parallel projects | High (heavy jobs offloaded to dedicated machines) | Low (resource contention on one machine) | High
Cross-function workflows | Easy (one Claude Code session spans all machines) | Easy (everything in one place) | Manual handoffs between machines
Tailscale node count | 2 | 2 | N+1

Reading the table reveals the tradeoffs. Consider the cross-function workflow generate image with SDXL → incorporate into iOS app assets → archive on Mac — what happens with each approach?

  • Single machine: everything stays in one place, so cross-function workflows are easy. But a Mac that can handle image generation means M3 Max / M4 Max class hardware, which is not a realistic initial cost, and heavy image generation often causes resource contention that stalls coexisting trading bots and IDEs.
  • Distributed: each machine is independent, so parallel stability is high. But since each machine's Claude Code is siloed, cross-function workflows require you to manually decompose and hand off tasks every time — "image ready → hand to Mac → build."
  • Hub-centric: combines both parallel stability and cross-function workflows. One Claude Code session can move files between machines via ssh + scp / rsync, so you can hand an entire cross-function workflow to Claude Code in a single request (see the sketch after this list). The cost compared to distributed is one extra hub machine, but a low-spec spare laptop works fine, so the incremental cost is minimal.
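
To make that last point concrete, here's a minimal sketch of how one hub-side script could drive the SDXL-to-Mac handoff over ssh and scp. The host names mirror the placeholders used later in this article; the output path and the --out flag are purely illustrative.

import subprocess

def run(cmd: list[str]) -> None:
    """Run one command, echoing it first, and stop on the first failure."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# 1. Generate the image on the GPU machine (sdxl_batch.py appears later in this article)
run(["ssh", "{gpu-host}", "python sdxl_batch.py --out /tmp/banner.png"])

# 2. Pull the result back to the hub, then push it into the Mac's asset folder
run(["scp", "{gpu-host}:/tmp/banner.png", "/tmp/banner.png"])
run(["scp", "/tmp/banner.png", "{mac-host}:projects/MyApp/Assets/banner.png"])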

Architecture diagram

  Smartphone (iPhone / Android): Termius app
    ▼ Tailscale (WireGuard)
  Hub: old laptop (connection hub + Claude Code home base)
  WSL2 + tmux × N sessions + Claude Code
    ▼ ssh (LAN) to each peripheral machine:
  • GPU machine: image gen / LLM inference
  • Mac: iOS / macOS dev
  • Raspberry Pi: always-on bots

Only the smartphone → hub leg uses Tailscale. The hub reaches peripheral machines via LAN-internal ssh. Keeping peripheral machines off Tailscale means no Tailscale daemons or MagicDNS names on the high-spec hardware, and all external connection points are consolidated at the hub.

What I use

  • Hub laptop: a 7th–10th gen Core i5 / 8 GB RAM is enough. Dead battery is fine — just run on AC. Always powered on
  • GPU machine (optional): only needed if you want to offload image generation or AI inference. I use a mini PC with a GTX 1660 SUPER
  • Mac (optional): only needed for iOS / macOS app development. A Mac mini or old MacBook that runs Xcode is fine
  • Raspberry Pi (optional): for lightweight always-on bots. I put investment simulation, MQTT broker, and cron tasks here
  • Smartphone: iOS or Android. A Bluetooth keyboard helps for long sessions
  • Termius: the free plan handles ssh + key management. Pro is only needed if you want settings sync across multiple devices
  • Tailscale: installed only on the hub and smartphone — 2 nodes. WireGuard-based mesh VPN, free for personal use (up to 100 devices)
  • tmux: handles persistent sessions. One apt command on Ubuntu
  • WSL2 (if hub is Windows): runs a Linux environment inside Windows

Steps

1. Install WSL2 + sshd + tmux on the hub

If the hub is Windows, open PowerShell as admin and install WSL2 + Ubuntu:

wsl --install -d Ubuntu

Enable systemd inside WSL2 so sshd can be managed with systemctl. Add the following to /etc/wsl.conf and restart with wsl --shutdown:

[boot]
systemd=true

Then install sshd and tmux inside WSL2:

$ sudo apt update
$ sudo apt install -y openssh-server tmux
$ sudo systemctl enable --now ssh

If the hub is Linux, that's it. If it's a Mac, brew install tmux + enable Remote Login in System Settings.

2. Install Tailscale only on the hub and smartphone

Create a Tailscale account (login with Google / GitHub / Microsoft). Add only 2 nodes: the hub and the smartphone. The GPU machine, Mac, and Raspberry Pi stay inside the LAN — Tailscale goes nowhere near them. This is the core of the hub-centric approach: no Tailscale daemons on high-spec hardware, and all external ingress points consolidated at one hub.

Install Tailscale on the hub (Linux / WSL2):

$ curl -fsSL https://tailscale.com/install.sh | sh
$ sudo tailscale up

Open the displayed URL in a browser to approve, and the hub joins the Tailscale network. On the smartphone, install the Tailscale app from the App Store or Google Play and log into the same account.

In the Tailscale admin console, the hub gets a MagicDNS name like laptop-host. From anywhere, ssh {username}@laptop-host reaches the hub. No MagicDNS names are assigned to the GPU machine, Mac, or Raspberry Pi — name resolution beyond the hub stays LAN-internal as described in step 5.

3. Create a persistent tmux session on the hub

SSH into the hub and create a session named claude:

$ tmux new -s claude

Launch the claude command (Claude Code CLI) inside this session. Detach with Ctrl-b d. Even if you close Termius on your phone, this tmux session keeps running server-side.

To reconnect:

$ tmux attach -t claude

4. Register the hub as a host in Termius

Open Termius on your phone and add a new host:

  • Hostname: laptop-host (Tailscale MagicDNS name) or the Tailscale IP (100.x.y.z)
  • Username: your hub username
  • Auth: the easiest option is using the SSH key Termius generates. Copy the generated public key to ~/.ssh/authorized_keys on the hub

After connecting, type tmux attach -t claude and your Claude Code screen is back. Register this as a Termius snippet and you can reconnect in one tap.

5. Distribute the hub's public key to peripheral machines (passwordless)

Standardize hub → peripheral machine ssh on passwordless public-key auth. Claude Code can stall if it gets interrupted by a password prompt mid-task; with key auth, an ssh {gpu-host} <command> invocation completes in a single line.

Generate one ed25519 keypair on the hub (skip if you already have one) and distribute the same public key to each peripheral machine's ~/.ssh/authorized_keys:

$ ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519     # skip if already exists
$ ssh-copy-id {user}@{gpu-host}
$ ssh-copy-id {user}@{mac-host}
$ ssh-copy-id {user}@{pi-host}

One key on the hub gives cross-machine access to the GPU machine, Mac, and Pi. Since Claude Code runs on the hub, this means Claude Code can use every peripheral machine freely.

Conversely, if you want to prevent Claude Code from touching a specific machine, just remove the relevant line from that machine's authorized_keys. The hub's key and ~/.ssh/config stay untouched — you can cut off a specific machine surgically, after the fact.

6. SSH from the hub to peripheral machines

Peripheral machines aren't on Tailscale, so reach them from the hub via LAN-internal ssh. Enable avahi-daemon (mDNS) on Linux / Mac / Raspberry Pi and you can reach them at {gpu-host}.local / {mac-host}.local / {pi-host}.local. If avahi isn't available, assign a static IP via the router's DHCP reservation and add a Host {gpu-host} entry to the hub's ~/.ssh/config — same short-name access.

For heavy jobs, SSH into the peripheral machine from a separate tmux window or pane and start the process from there:

# Open a new window in the hub's tmux
$ ssh {gpu-host}
# Launch heavy processing there
$ python sdxl_batch.py
# detach: Ctrl-b d

For iOS development, Claude Code on the hub can edit Xcode project source files directly via ssh {mac-host} (building is done by running xcodebuild on the Mac side). Always-on Raspberry Pi bots follow the same pattern: SSH in from the hub, check logs, patch code.

Separate tmux sessions per project

Running a single Claude Code session on the hub means image generation jobs and iOS dev conversations flow into the same tmux window and interfere with each other. Keeping independent tmux sessions per project lets you switch between them from your phone.

Session startup scripts

I keep a ~/start_claude_<project>.sh startup script in my hub's home directory for each project. They're all the same template with different names:

#!/bin/sh

NAME=claude_blog

# Only auto-start tmux for ssh logins, and only if tmux is installed
# (POSIX redirection here -- "&>" is bash-only and this script runs under /bin/sh)
if [ -n "$SSH_CONNECTION" ] && command -v tmux >/dev/null 2>&1; then
    if tmux has-session -t "$NAME" 2>/dev/null; then
        # session exists -- re-attach
        tmux attach -t "$NAME"
    else
        # doesn't exist -- create it
        tmux new-session -s "$NAME"
    fi
fi

Two key points: the SSH_CONNECTION check ensures tmux only activates when you connect via ssh (not from cron jobs or local logins), and has-session makes reconnection idempotent, so the same script handles both the first run and a re-attach.

Sessions I actively run

Script | tmux session name | Purpose | Primary target machine
start_claude_blog.sh | claude_blog | Auto blog generation | Hub itself
start_claude_image.sh | claude_image | Image generation pipeline maintenance | GPU machine
start_claude_ios.sh | claude_ios | iOS app development | Mac
start_claude_sim.sh | claude_sim | Investment simulation | Raspberry Pi

Each session has its own purpose and target machine. Once inside a session, navigate to the right place with cd ~/projects/<project> or ssh <target-host>.

Termius snippet workflow

Termius lets you save "snippets" that run after connecting. Set up one per project and you can launch the right session in one tap right after SSHing to the hub:

# Snippet name: blog
bash ~/start_claude_blog.sh

# Snippet name: image
bash ~/start_claude_image.sh

# Snippet name: ios
bash ~/start_claude_ios.sh

Alternatively, create separate aliases for the same host in Termius with the relevant snippet as each one's "Startup snippet" — then you can jump straight to a project-specific session from the host list.

Hack extension ideas

  • Add mosh: dramatically easier reconnection when the signal cuts out briefly (subway tunnels, etc.). Requires the Termius Pro plan
  • Voice input: switch your phone's IME to voice input and speak directly into the Termius input field. Great for long prompts
  • Change tmux key bindings: Ctrl-b is awkward on a soft keyboard — change the prefix to Ctrl-a in ~/.tmux.conf
  • Restrict Tailscale ACL to phone → hub only: limit allowed routes in the Tailscale network to a single path. The GPU machine and Mac aren't on Tailscale at all, so a stolen phone only reaches the hub
  • Notify on long-running jobs: send job completion notifications from inside tmux to Slack or LINE Notify so you know when things finish while you're out (see the sketch after this list)
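
As a sketch of that last idea, a small wrapper can run the long job and post the result to a Slack incoming webhook; the webhook URL is a placeholder, and LINE Notify would just be a different POST target.

import subprocess
import sys

import requests

WEBHOOK_URL = "https://hooks.slack.com/services/XXX/YYY/ZZZ"  # your Slack incoming webhook

def notify_when_done(cmd: list[str]) -> None:
    """Run a long job, then post a one-line result to Slack so it shows up on your phone."""
    result = subprocess.run(cmd)
    status = "finished" if result.returncode == 0 else f"failed (exit {result.returncode})"
    requests.post(WEBHOOK_URL, json={"text": f"{' '.join(cmd)} {status}"}, timeout=10)

if __name__ == "__main__":
    notify_when_done(sys.argv[1:])   # e.g. python notify.py python sdxl_batch.py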

Common sticking points

  • WSL2 dies when idle: WSL2 stays alive as long as something is running inside tmux — but an empty session can stop after a few minutes. Always run something inside tmux new -s claude
  • WSL2 sshd conflicts on port 22: if it clashes with the Windows-side OpenSSH, set Port 2222 in /etc/ssh/sshd_config. Also update the port in Termius's host config
  • Tailscale MagicDNS doesn't resolve: check that MagicDNS is ON in the admin console. When it's off, you can only connect using the numeric IP (100.x.y.z)
  • tmux scrolling doesn't work on mobile: press Ctrl-b [ to enter copy mode, then scroll with your finger
  • start_claude_*.sh doesn't launch over ssh: won't fire unless called from ~/.bash_profile or ~/.bashrc. Either call it explicitly from a Termius snippet or register it as the host's startup command
  • Two terminals attached to the same session simultaneously cause split input: use tmux attach -d -t claude_blog to disconnect the existing attachment before your own (-d = detach others first)

FAQ

Q. What's the point of using the hub as the central node?

A. It lets you keep all external connection points at one hub, so the high-spec machines don't need Tailscale daemons, MagicDNS names, or any of that installed on them. Your phone only needs to remember one host. Tailscale ACLs stay simple. Routing between projects and machines is encapsulated in scripts on the hub, so changing your hardware setup doesn't require touching the phone config at all.

Q. Isn't separate tmux sessions per project overkill?

A. If you only have one project, yes. But when running multiple projects in parallel, Claude Code conversation context mixes at the session level — separating them reduces accidents. Each start_claude_<name>.sh is 10 lines and easy to start with.

Q. Can I use my own VPN or ngrok instead of Tailscale?

A. It'll work. But Tailscale is free, takes 5 minutes to set up, and comes with MagicDNS — there's no clear reason to switch for personal use. ngrok-style tools publish an external URL, which expands the attack surface. Not ideal for always-on use.

Q. If there's an official Claude Code Remote Control, why bother with ssh?

A. Official Remote Control is great, but for things outside Claude Code itself — scripts, SSHing to other machines, custom tooling — ssh gives you more flexibility. Using both together is totally fine.

Q. Does this work on iPad or Android tablets too?

A. Yes. Termius and Tailscale both offer the same apps for iPadOS and Android tablets. The larger screen actually makes work more efficient.

Note: The steps in this article were verified as of May 2026, but Tailscale / Termius / WSL2 updates may change the behavior. If something doesn't work, please let me know in the comments.

Wrap-up

The combination of tmux + Tailscale + Termius, plus "use the hub to route between machines by purpose" and "separate tmux sessions per project," lets you run multiple projects in parallel from a smartphone anywhere. It's also a solid second life for an old laptop — image generation, iOS development, and always-on bots with very different resource profiles all coexist without the connection dropping.

If this article was helpful, I'd love it if you shared it on X (Twitter).

App by the author of this blog

I made an iOS reading management app called My Bookstore. Simple bookshelf management — give it a try.

View on App Store →


Note: This article is part of an automated blog update experiment using Claude Code.

Friday, May 1, 2026

Build & Ship iOS Apps to TestFlight via SSH from Your Smartphone [SSH + build keychain · 2026 Edition]

This article is for engineers who want to build their iOS app in Xcode on a home Mac and push it to TestFlight — entirely from a smartphone while out and about. I'll show the complete procedure for multi-hop SSHing from a smartphone → hub laptop (WSL2) → a separate Mac, then completing archive → IPA → TestFlight upload in a single command. I also cover why codesign stalls over SSH and the workaround (a dedicated build keychain), plus the full contents of all 3 deployment scripts.

Work desk with a laptop terminal, Mac mini, and smartphone working together to ship an app

What you can do with this

  • Push a new TestFlight build with a single SSH command from a smartphone (or from a WSL2 terminal)
  • Use the Mac only as a runtime for Xcode and xcodebuild — no touching it day-to-day
  • Use App Store Connect API Key (.p8) auth so no 2FA dialog pops up
  • Runs entirely within a home LAN — no VPS or GitHub Actions needed (also a good second life for a Mac mini or old MacBook)

4 approaches compared

Setup SSH + custom scripts
(this article)
Xcode on Mac directly GitHub Actions (macOS) fastlane-driven
Monthly cost $0 (home Mac electricity only) $0 macOS runners are expensive per minute $0 (home Mac)
Editor Any WSL2 editor (VS Code, vim, etc.) Xcode Any Any
Time per release 3–6 min (archive + IPA + altool) 5–10 min (via GUI) 10–15 min after push (includes CI startup) 3–6 min
Secret storage Mac local (~/.appstoreconnect) Mac local GitHub Secrets Mac local / Match

Architecture diagram

  WSL2 (code editing machine): editor + git push + SSH origin
    ▼ git push
    ▼ ssh + bash scripts/upload.sh
  Mac (dedicated Xcode archive server): build.keychain-db / xcodebuild / altool
    ▼ xcrun altool --upload-app
  App Store Connect / TestFlight

What I use

Why setup_build_keychain.sh is needed

xcodebuild archive runs fine in the Mac's Terminal normally, but codesign fails over SSH with "could not open keychain" — that's the starting problem this article solves.

The cause: the login keychain, even when unlocked over SSH, is inaccessible to codesign. Without a GUI login session, there's no security agent bound to it, so the ACL's "allow without application confirmation" list can't engage — it tries to show an interactive prompt and fails.

The solution is to create a dedicated build keychain, duplicate the Apple Distribution certificate into it, and set the ACL so codesign can use it without any GUI confirmation. Run scripts/setup_build_keychain.sh once in the Mac's Terminal (a GUI session).

Replace all placeholders in the code below with your own values ({kcpw} is an arbitrary string used to unlock the build keychain locally — it's not an external secret):

{user}                        Your Mac login username
{mac_addr}                    Mac IP or hostname (e.g. 192.168.x.y)
{app_name}                    App name (must match Xcode scheme / archive name)
{bundle_id}                   Bundle ID (e.g. com.example.MyApp)
{team_id}                     Apple Developer Team ID (10 characters)
{kcpw}                        Build keychain local key (any string)
{provisioning_profile_name}   Provisioning Profile name

#!/bin/bash
set -euo pipefail

KCNAME="build.keychain"
KCPATH="$HOME/Library/Keychains/${KCNAME}-db"
KCPW="{kcpw}"                  # build keychain unlock password (local container key, any string)
EXPORTPW="$(uuidgen)"          # .p12 temp encryption key (only used within this script)

if [[ -f "$KCPATH" ]]; then
    echo "build keychain already exists: $KCPATH"; exit 1
fi

# 1. Create the build keychain
security create-keychain -p "$KCPW" "$KCNAME"
security set-keychain-settings -lut 21600 "$KCNAME"
security unlock-keychain -p "$KCPW" "$KCNAME"

# 2. Export identity from login keychain (click "Always Allow" when prompted)
TMPP12="$(mktemp -t build-cert).p12"
trap 'rm -f "$TMPP12"' EXIT
security export -k "$HOME/Library/Keychains/login.keychain-db" \
    -t identities -f pkcs12 -P "$EXPORTPW" -o "$TMPP12"

# 3. Import into build keychain, grant codesign/productbuild access via -T
security import "$TMPP12" -k "$KCNAME" -P "$EXPORTPW" \
    -T /usr/bin/codesign -T /usr/bin/security \
    -T /usr/bin/productbuild -T /usr/bin/productsign

# 4. Set key partition list to suppress GUI prompts
security set-key-partition-list \
    -S 'apple-tool:,apple:,codesign:' -s -k "$KCPW" "$KCNAME"

# 5. Add to front of search list
ORIGINAL_LIST="$(security list-keychains -d user | sed -e 's/^[[:space:]]*//' -e 's/"//g' | tr '\n' ' ')"
security list-keychains -d user -s "$KCNAME" $ORIGINAL_LIST

security find-identity -v -p codesigning "$KCNAME"

The critical step is #4, set-key-partition-list. Adding apple-tool: / apple: / codesign: to the partition access list is what allows codesign tools to use the key without a GUI prompt — this is the mechanism that makes certificate access work over SSH.

archive.sh — Generate a Release archive non-interactively

Script #2. Runs xcodebuild archive over SSH. Unlocks the build keychain first and sets a 6-hour auto-lock.

#!/bin/bash
set -euo pipefail

APP_NAME="{app_name}"          # ← replace with your app name
KCPW="{kcpw}"                  # ← same local key used in setup_build_keychain.sh

ARCHIVE_PATH="${ARCHIVE_PATH:-/tmp/${APP_NAME}.xcarchive}"
BUILD_KCPW="${BUILD_KCPW:-${KCPW}}"
WORKSPACE="${WORKSPACE:-$(pwd)/${APP_NAME}.xcworkspace}"

if [[ ! -f "$HOME/Library/Keychains/build.keychain-db" ]]; then
    echo "✗ build.keychain-db not found. Run setup_build_keychain.sh first." >&2
    exit 1
fi

# Unlock build keychain (6h auto-lock)
security unlock-keychain -p "$BUILD_KCPW" build.keychain
security set-keychain-settings -lut 21600 build.keychain

# Archive
rm -rf "$ARCHIVE_PATH"
LOG=/tmp/archive.log
xcodebuild \
    -workspace "$WORKSPACE" \
    -scheme "$APP_NAME" \
    -configuration Release \
    -destination 'generic/platform=iOS' \
    -archivePath "$ARCHIVE_PATH" \
    archive \
    > "$LOG" 2>&1 || true

if grep -q "ARCHIVE SUCCEEDED" "$LOG"; then
    echo "✓ ARCHIVE SUCCEEDED -> $ARCHIVE_PATH"
else
    echo "✗ ARCHIVE FAILED (log: $LOG)"
    grep -B1 -A4 -E 'error:|errSec|FAILED' "$LOG" | tail -30
    exit 1
fi

Output goes to /tmp/archive.log and success is determined by looking for the ARCHIVE SUCCEEDED string. xcodebuild's stdout is verbose — reading it raw over SSH is exhausting.

ExportOptions.plist — Explicitly specify Manual signing

Configuration file for exporting an IPA from the archive. After getting burned by Auto signing a few times, I fixed it to Manual signing with an explicit profile name and Team ID. Safe to commit to git — no secrets involved.

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>method</key>
    <string>app-store-connect</string>
    <key>teamID</key>
    <string>{team_id}</string>
    <key>signingStyle</key>
    <string>manual</string>
    <key>provisioningProfiles</key>
    <dict>
        <key>{bundle_id}</key>
        <string>{provisioning_profile_name}</string>
    </dict>
    <key>uploadSymbols</key><true/>
    <key>compileBitcode</key><false/>
    <key>stripSwiftSymbols</key><true/>
</dict>
</plist>

Replace the Bundle ID and profile name with your own values. The profile name must match what you created in the Apple Developer portal.

upload.sh — Export IPA + upload via altool

Script #3. Reuses the archive to produce an IPA, then uploads to TestFlight via xcrun altool. Using App Store Connect API Key auth means no 2FA dialog.

#!/bin/bash
set -euo pipefail

APP_NAME="{app_name}"          # ← replace with your app name
KCPW="{kcpw}"                  # ← same local key used in setup_build_keychain.sh

SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/.." && pwd)"
ARCHIVE_PATH="${ARCHIVE_PATH:-/tmp/${APP_NAME}.xcarchive}"
EXPORT_DIR="${EXPORT_DIR:-/tmp/${APP_NAME}-export}"
EXPORT_OPTIONS="${EXPORT_OPTIONS:-$SCRIPT_DIR/ExportOptions.plist}"
ALTOOL_ENV="${ALTOOL_ENV:-$PROJECT_ROOT/Config/altool.env}"
BUILD_KCPW="${BUILD_KCPW:-${KCPW}}"

# Load ASC_KEY_ID / ASC_ISSUER_ID from Config/altool.env
source "$ALTOOL_ENV"

# Auto-call archive.sh if archive doesn't exist yet (reuse if it does)
if [[ ! -d "$ARCHIVE_PATH" ]]; then
    bash "$SCRIPT_DIR/archive.sh"
fi

# Export IPA (exportArchive re-signs, so unlock keychain again)
security unlock-keychain -p "$BUILD_KCPW" build.keychain

rm -rf "$EXPORT_DIR"
xcodebuild -exportArchive \
    -archivePath "$ARCHIVE_PATH" \
    -exportPath "$EXPORT_DIR" \
    -exportOptionsPlist "$EXPORT_OPTIONS" \
    > /tmp/export.log 2>&1 || true

IPA_PATH="$(find "$EXPORT_DIR" -maxdepth 1 -name '*.ipa' | head -n1)"
[[ -f "$IPA_PATH" ]] || { echo "✗ IPA export failed"; exit 1; }

# Upload to TestFlight
xcrun altool --upload-app -f "$IPA_PATH" -t ios \
    --apiKey "$ASC_KEY_ID" --apiIssuer "$ASC_ISSUER_ID"

Config/altool.env is in .gitignore and contains just 2 lines:

ASC_KEY_ID={your_key_id}
ASC_ISSUER_ID={your_issuer_id}

The .p8 file itself goes in ~/.appstoreconnect/private_keys/AuthKey_<KEY_ID>.p8; altool checks this path automatically.

Shipping to TestFlight in one line from WSL2

With all of the above in place, the day-to-day workflow from WSL2 is a single line:

# After bumping CFBundleVersion and git push:
ssh {user}@{mac_addr} 'cd ~/path/to/{app_name} && bash scripts/upload.sh'

3–6 minutes later, a new build appears in TestFlight. Once App Store Connect finishes processing, you can install it on your iPhone immediately.
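
The only manual step left in that one-liner is bumping CFBundleVersion. If your build number lives directly in Info.plist as a plain integer (an assumption; many projects drive it from CURRENT_PROJECT_VERSION in build settings instead), a small helper on the WSL2 side can bump it before the push:

import plistlib
from pathlib import Path

# Illustrative path -- point this at your target's Info.plist
PLIST = Path("MyApp/Info.plist")

data = plistlib.loads(PLIST.read_bytes())
# Assumes CFBundleVersion is a plain integer string like "42"
data["CFBundleVersion"] = str(int(data["CFBundleVersion"]) + 1)
PLIST.write_bytes(plistlib.dumps(data))
print("CFBundleVersion ->", data["CFBundleVersion"])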

Common sticking points

  • codesign shows a GUI prompt and hangs: you forgot set-key-partition-list in the build keychain setup. Make sure to specify apple-tool:,apple:,codesign:
  • Auto signing falls back to a Development cert from a different team: for Release, stick with Manual signing and an explicit Team ID. Set signingStyle = manual in ExportOptions.plist
  • Build keychain auto-locks after 6 hours and the next day's build fails: call security unlock-keychain at the start of both archive.sh and upload.sh. A single set-keychain-settings -lut 21600 on its own isn't enough
  • SPM dynamic frameworks don't get Embedded: source-distributed dynamic frameworks like RealmSwift need to be manually added to Embed Frameworks in project.pbxproj. Firebase / GoogleMobileAds XCFramework binary targets auto-embed, so this one's easy to miss
  • altool prompts for 2FA: password auth triggers 2FA blocking. Always use App Store Connect API Key (.p8) auth
  • Mac sshd not running: System Settings → General → Sharing → Remote Login — enable it

FAQ

Q. Does the Mac need to stay awake all the time?

A. If you only hit it when shipping, enable Wake-on-LAN and "Wake for network access" in System Settings — an SSH packet will wake it. For always-on use, configure it to not sleep when the lid is closed while on AC power (caffeinate -s or System Settings).

Q. How do I issue an Apple Distribution certificate?

A. In Xcode on the Mac: Settings → Accounts → Manage Certificates → "Apple Distribution." This registers it in the login keychain. Running setup_build_keychain.sh duplicates it into the build keychain.

Q. Do I need to update the scripts when the Provisioning Profile is renewed?

A. Not if the profile name stays the same. Download the renewed profile from the Apple Developer portal, open it on the Mac, and it's applied. No script changes needed.

Q. I want to use a Mac mini as a dedicated archive server

A. An M2 or M4 Mac mini archives Xcode 26 + SPM projects in 3–5 minutes. Always-on power consumption is around 10W. Pair it with build completion notifications to Slack or LINE and you'll know the moment it's done while you're out.

Q. Would fastlane be faster?

A. fastlane shines for team dev — match for certificate sync and lane structure for workflow. The scripts in this article are deliberately scoped to "solo shipping from one Mac" — a thin reimplementation of a small slice of what fastlane offers. If you move to team development, migrating to fastlane is worth it.

Note: The steps in this article were verified on Xcode 26 / macOS 15 as of May 2026. Apple Distribution certificate issuance steps and the App Store Connect UI may change. If something doesn't work, please let me know in the comments.

Wrap-up

Running Xcode fully remotely from WSL2 over SSH is straightforward once you have the two key pieces in place: a build keychain and an App Store Connect API Key. Three shell scripts (setup_build_keychain.sh / archive.sh / upload.sh) and one ExportOptions.plist is all it takes. The Mac sits in the background as a dedicated Xcode archive machine, and you do normal development in your preferred Linux environment.

If this article was helpful, I'd love it if you shared it on X (Twitter).

App by the author of this blog

I made an iOS reading management app called My Bookstore. Simple bookshelf management — give it a try.

View on App Store →


Note: This article is part of an automated blog update experiment using Claude Code.