
ALACRITY HUB

The case studies in this series document operational systems built under real constraints. Each one was an answer to a specific problem with a specific ceiling. Alacrity Hub is the answer to a different question: if there were no ceiling, what would I build?

The same principles applied. Human decision points at every consequential action. Knowledge as the layer that makes everything else portable. Structure before automation. The difference is that this time, I was building for myself — which meant every design choice was pure. No organizational politics, no stakeholder negotiation, no inherited technical debt. Just the problem and the solution.

When I had complete freedom to make the AI as autonomous as I wanted, I chose to keep the human in the loop anyway. That choice — made freely, without organizational pressure — is the most credible proof that the design philosophy behind the case study series is intrinsic, not contextual.


The Dual-Layer Architecture

Alacrity Hub runs on two layers. The distinction between them is philosophical, not just topological.

CLOUD LAYER
Cloudflare Workers

Always on, always accessible. The human-facing surface: productivity hub, interface, planning, decision-making. This is where I live and work.

LOCAL LAYER
Mac Mini / NanoClaw

Local, private, powerful, dormant until called. Runs on 127.0.0.1 only. Never exposed unless the tunnel is intentionally opened. It doesn’t reach out. It waits.

Screenshot coming — The Cloudflare-hosted productivity hub (cloud layer): navigation and identity, productivity tools, planning, decision-making, and mission intake. Always on, no AI required to function; the layer that's complete on its own.

The directionality of the system is always human-first. The Cloudflare Worker doesn’t call NanoClaw unprompted. Every agentic action originates from a human trigger flowing through Mission Control outward to the local runtime. The Mac Mini can be unplugged entirely and the cloud layer remains fully functional — it just loses enhanced capabilities.

The human layer is complete on its own. The AI layer augments it; it doesn’t complete it. This is “AI is the tool. We are the equipment.” translated into infrastructure.

Screenshot coming — The Mission Hub: paradigm selection and mission intake, where human intent becomes AI action. Mission configuration, agent routing, and the trigger point for all agentic work; every agentic action originates from a human trigger.


Three Modes of Human Oversight

Most AI practitioners treat human-in-the-loop as a binary state: on or off. Alacrity Hub implements it as a spectrum with three distinct operational modes, each appropriate for different categories of action.

The Oversight Spectrum, from full human control to bounded autonomy:

  • In the Loop — approve before execution
  • On the Loop — observe and intervene anytime
  • Out of the Loop — bounded scheduled autonomy
MODE 01
Human in the Loop — Approval Queue

Covers: Code deployment, knowledge vault writes, prompt changes, agent behavior modifications

Structured briefing cards with mandatory sections — What’s Changing, Why Now, Risk If Approved. The human isn’t rubber-stamping; the system provides the minimum context needed to make an informed decision. No agent can deploy code, modify knowledge vaults, or change its own prompts without human review.
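As a sketch, the approval gate reduces to a typed briefing card plus a function that refuses to execute without an explicit decision. The type and function names here are illustrative assumptions, not Alacrity Hub's actual code; only the three mandatory sections come from the design described above.

```typescript
// Illustrative sketch only: names are hypothetical, but the shape
// mirrors the Mode 1 design: a briefing card with mandatory sections,
// and execution gated on an explicit human decision.

type BriefingCard = {
  action: "deploy_code" | "write_vault" | "change_prompt";
  whatsChanging: string;   // "What's Changing"
  whyNow: string;          // "Why Now"
  riskIfApproved: string;  // "Risk If Approved"
};

type Decision = "approved" | "rejected";

// The action closure runs only after a human returns "approved";
// there is no default path that executes without review.
function executeWithApproval(
  card: BriefingCard,
  humanDecision: (card: BriefingCard) => Decision,
  execute: () => void,
): Decision {
  const decision = humanDecision(card);
  if (decision === "approved") {
    execute();
  }
  return decision;
}
```

The point of the shape: `execute` is a closure handed to the gate, so nothing downstream can run it ahead of the decision.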

Design principle: Consequential actions require deliberate human approval. Every time.

MODE 02
Human on the Loop — Activity Feed

Covers: Web searches, RAG reads, feedback logging, agent execution in progress

Real-time SSE stream of all agent actions. I can see what’s happening and interrupt at any time, but I don’t need to approve each step. Most enterprise AI tools get this wrong — they either require approval on everything, which destroys adoption through friction, or they hide background activity entirely, which erodes trust. The Activity Feed threads the needle: autonomous enough to be useful, transparent enough to be trustworthy.
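A minimal sketch of the on-the-loop contract, with hypothetical names (the real feed is an SSE stream from the Worker): agents log every step to an observable feed and check an interrupt flag between steps, so the human can watch or halt at any moment without approving anything.

```typescript
// Hypothetical sketch: the class and method names are assumptions.
// What it demonstrates is the Mode 2 contract: every action is logged
// (observable) and the loop checks for interruption between steps
// (interruptible), with no per-step approval required.

type FeedEvent = { ts: number; agent: string; action: string };

class ActivityFeed {
  private events: FeedEvent[] = [];
  private interrupted = false;

  log(agent: string, action: string): void {
    this.events.push({ ts: Date.now(), agent, action });
  }
  interrupt(): void {            // the human hits stop
    this.interrupted = true;
  }
  shouldContinue(): boolean {
    return !this.interrupted;
  }
  history(): readonly FeedEvent[] {
    return this.events;
  }
}

// An agent loop defers to the feed between steps instead of
// asking for approval before each one.
function runAgent(feed: ActivityFeed, agent: string, steps: string[]): number {
  let completed = 0;
  for (const step of steps) {
    if (!feed.shouldContinue()) break;  // interruptible at any point
    feed.log(agent, step);
    completed++;
  }
  return completed;
}
```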

Design principle: Background actions must remain observable and interruptible without requiring decisions.

MODE 03
Human out of the Loop — Scheduled Operations

Covers: Recurring missions, model health checks (every 6 hours), hub-scheduler dispatching, memory pressure monitoring

Runs on cadence without a human trigger. I set the policy in advance — what runs, when, under what conditions — and the system executes within those boundaries. But “out of the loop” is still bounded. The scheduler can trigger a mission. It cannot approve its own outputs. It cannot modify its own behavior. It cannot escalate to paid models. The autonomy is real but scoped — governed by policies I set in advance.
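The bounds described above reduce to an allowlist the human writes in advance. This sketch uses invented names; only the list of denied capabilities comes from the description.

```typescript
// Hypothetical sketch of Mode 3's boundary: the scheduler's entire
// capability surface is an allowlist set by the human in advance.
// It can trigger missions; it cannot approve outputs, modify its
// own behavior, or escalate to paid models.

type SchedulerAction =
  | "trigger_mission"
  | "approve_output"
  | "modify_prompt"
  | "escalate_to_paid_model";

// Human-defined policy; the scheduler operates inside it.
const SCHEDULER_ALLOWED = new Set<SchedulerAction>(["trigger_mission"]);

function schedulerMay(action: SchedulerAction): boolean {
  return SCHEDULER_ALLOWED.has(action);
}
```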

Design principle: Autonomous operations run within human-defined boundaries. The human governs the scope; the system operates inside it.

The three modes aren’t static categories. They’re a trust calibration framework. As a system proves itself — as its outputs consistently match expectations — I can consciously choose to migrate certain categories of action from In the Loop to On the Loop, or from On the Loop to Out of the Loop. The architecture accommodates that evolution. The human governs the migration.

This framework answers the question every enterprise AI adoption effort is actually trying to answer: for a given category of action, how much human involvement is appropriate, and how do you implement that level in practice?

Read the full perspective on human oversight as a governance model →

Screenshot coming — Mission Control: the Approval Queue (Mode 1) alongside the Activity Feed (Mode 2), a real-time SSE stream of agent actions, observable and interruptible.


Knowledge-First Architecture

Organizations obsess over model selection. The real unlock is building a knowledge and prompt layer that makes model choice a deployment decision, not an architectural one.

Alacrity Hub abstracts the model layer entirely:

  • Virtual model keys (local-reasoning, free-cloud, paid-orchestrator) — agents never call models directly
  • Three RAG vaults (Human, Agent, Architecture) — every model gets the same knowledge base regardless of which one executes
  • Universal Skill Harness — skills packaged as structured prompts that work with any capable LLM

The result: swap from paid Claude to free-tier Gemini to local Qwen without rewriting a single prompt.
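The indirection can be sketched as a single routing table. The concrete model identifiers below are placeholders; the three virtual keys are the ones named above.

```typescript
// Sketch of the virtual-key layer. Agents only ever name a capability
// tier; this table is the one place a concrete model appears, so a
// swap (Claude -> Gemini -> local Qwen) is a one-line config change.
// Model identifiers here are placeholders, not real endpoints.

type VirtualKey = "local-reasoning" | "free-cloud" | "paid-orchestrator";

const MODEL_ROUTES: Record<VirtualKey, string> = {
  "local-reasoning": "qwen-14b",
  "free-cloud": "gemini-free-tier",
  "paid-orchestrator": "claude-paid",
};

function resolveModel(key: VirtualKey): string {
  return MODEL_ROUTES[key];
}
```

Because agent logic depends only on `VirtualKey`, changing a value in `MODEL_ROUTES` never touches a prompt.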

Knowledge Abstraction Stack:

  • Agents — Guide, Coach, Scribe, + 15 more
  • Virtual keys — local-reasoning, free-cloud, paid-orchestrator
  • Models — Qwen 14B, Gemini, Claude, Kimi

Agents never call models directly. Swap any model without touching agent logic.

This is the same principle that runs through Case Study 01 — the form fields that trained Associates to document cases correctly are the same structural logic underlying the RAG pipeline. Structure first. The model is downstream of the knowledge layer.

Aces in Their Places

Not every task needs a frontier model. Alacrity Hub assigns the right LLM to each pipeline stage based on what it does best:

  • 14B parameter model for deep reasoning — PM spec writing, architecture design
  • 7B coding model for implementation
  • 3B background model for low-stakes tasks

Six named paradigms, one selection at mission intake, per-stage routing handled automatically.
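Per-stage routing is the same table idea applied one level down: each pipeline stage declares the class of model it needs. The stage names and 14B/7B/3B model labels below are placeholders inferred from the list above, not the actual paradigm definitions.

```typescript
// Hypothetical sketch of per-stage routing ("aces in their places"):
// the stage, not the mission, decides which class of model runs.
// Stage and model names are illustrative placeholders.

type Stage =
  | "spec_writing"
  | "architecture_design"
  | "implementation"
  | "background_task";

const STAGE_MODELS: Record<Stage, string> = {
  spec_writing: "reasoning-14b",        // deep reasoning
  architecture_design: "reasoning-14b", // deep reasoning
  implementation: "coder-7b",           // coding model
  background_task: "small-3b",          // low-stakes work
};

function modelForStage(stage: Stage): string {
  return STAGE_MODELS[stage];
}
```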

The question isn’t “which model is best?” It’s “which model is best for this specific step, at this cost, with this reliability requirement?” That’s FinOps applied to AI operations. $0/month baseline, paid models available as an explicit upgrade, never triggered automatically.


Scale and Proof

This is not a concept. It is a shipped production system.

  • 6 phases shipped
  • 18 agents
  • 68 tools
  • $0 monthly baseline
Screenshot or video coming — A mission moving through the Paradigm Shift pipeline: paradigm selection, intake, agent routing, execution, and output.


Everything in this system is a hypothesis about enterprise AI adoption. The governance model, the practitioner abstraction, the knowledge-first architecture — these are answers to the questions I kept running into at Oracle that organizational constraints wouldn’t let me fully solve.

Alacrity Hub is what happens when you get to test the hypothesis without a ceiling.

The case studies document the constraints. This page documents the proof that the design philosophy behind them was never about the constraints. It was about the work.