Resources

White papers, tutorials, documentation, podcasts, and quick guides - all in one place.

White Papers

Longer-form, cited write-ups on applied AI topics.

Coming soon

Choosing a Model for Production

A structured framework for picking between frontier and open-source models based on cost, latency, and quality.

Coming soon

RAG in Practice

From document ingestion to evaluation - a practitioner's view of building retrieval-augmented systems that actually work.

Coming soon

The Economics of LLM Applications

Token math, caching, batching, and the trade-offs that decide whether an AI feature pays for itself.

Tutorials

Step-by-step walk-throughs you can follow from start to finish.

Tutorial

Your First LLM App in 30 Minutes

Build a minimal chat app that calls a hosted LLM, streams responses, and handles basic errors.

Tutorial

RAG Starter Kit

Ingest a folder of PDFs, chunk and embed them, and answer questions grounded in the source docs.

Tutorial

Fine-Tuning with LoRA

Adapt an open-weights model to your own task on a single GPU using LoRA/QLoRA.

Documentation

Reference material - links to provider docs and internal cheat sheets.

Podcasts

Conversations on AI - hosted, featured, or recommended.

Coming soon

Episode 001 - Why This Site Exists

A short intro episode on what I'm hoping to build here and who it's for.

Coming soon

Episode 002 - Picking a Model in 2026

Working through a real model-selection decision in the open.

Coming soon

Episode 003 - When to Fine-Tune

Prompting vs. RAG vs. fine-tuning - how to tell which one you actually need.

Quick Guides

5-minute reads. One idea per guide.

Prompting

Writing a Good Prompt

Six patterns that reliably improve model output, with short before/after examples.

Tokens

Token Math Cheat Sheet

How to estimate tokens, plan for context limits, and avoid surprise bills.
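As a taste of the kind of estimate the guide covers, here is a minimal sketch using the common rule of thumb of roughly 4 characters per token for English text. The function names are my own, and the heuristic is only an approximation; use your provider's tokenizer for exact counts.

```python
def estimate_tokens(text: str, chars_per_token: float = 4.0) -> int:
    """Rough token estimate via the ~4 chars/token heuristic for English."""
    return max(1, round(len(text) / chars_per_token))

def fits_in_context(prompt: str, context_limit: int, reply_budget: int) -> bool:
    """True if the prompt plus a reserved reply budget fits the context window."""
    return estimate_tokens(prompt) + reply_budget <= context_limit
```

Reserving an explicit reply budget is the part people forget: the model's output shares the same window as your prompt.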

RAG

Chunking Strategies in 5 Minutes

Fixed-size, recursive, semantic - and how to choose.
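For a flavor of the simplest option, here is a minimal sketch of fixed-size chunking with overlap (function name and defaults are my own, for illustration only). The overlap ensures an idea that straddles a boundary appears whole in at least one chunk.

```python
def chunk_fixed(text: str, size: int = 500, overlap: int = 50) -> list[str]:
    """Split text into fixed-size character chunks with overlap."""
    if overlap >= size:
        raise ValueError("overlap must be smaller than chunk size")
    step = size - overlap
    return [text[i:i + size] for i in range(0, len(text), step)]
```

Recursive and semantic strategies replace the blind character boundary with structural or meaning-aware ones; the guide walks through when each is worth the extra cost.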

Evals

Building Your First Eval Set

The single highest-leverage thing you can do before shipping an LLM feature.