
@Jarvis-Legatus
Created August 11, 2025 14:00
Mind map for YouTube video: Designing AI-Intensive Applications - swyx

Designing AI-Intensive Applications - swyx

TL;DR This talk, delivered at an AI engineering conference, traces the rapid evolution of AI engineering from simple GPT wrappers to complex, multidisciplinary applications. The speaker highlights the conference's role in tracking this progress, fostering innovation (e.g., adoption of the Model Context Protocol, MCP), and moving the field from demos to production. The core thesis is that AI engineering is at a pivotal "standard model" formation stage, analogous to early physics or to traditional software engineering patterns (ETL, MVC). The speaker then surveys several candidate standard models: the updated LLM OS (2025), the LLM SDLC (where value concentrates in later stages such as evals and security), and various approaches to building effective agents. Critically, the speaker argues for shifting focus from contested terminology (e.g., "agent" vs. "workflow") to the practical ratio of human input to valuable AI output. He introduces his own SPADE model (Sync, Plan, Analyze, Deliver, Evaluate) as a generalizable framework for building AI-intensive applications, urging attendees to collectively define the new standard models that will drive the industry forward and build truly useful products.


Information Mind Map

🧠 Designing AI-Intensive Applications: The Quest for a Standard Model in AI Engineering

🔑 Conference Context & Evolution of AI Engineering

  • Purpose of the Conference (AIE):
    • Introduce conference agenda and updates.
    • Track the evolution of AI engineering.
    • Serve as a platform for AI engineers to push frontiers.
  • Conference Growth & Design:
    • 3,000 last-minute registrants (late-registration skew quantified with a "Gini coefficient").
    • Doubled tracks from last year to cover "all of AI."
    • Prioritizes responsiveness (vs. academic conferences like NeurIPS) and technical depth (vs. TED-style talks).
    • Content driven by attendee surveys (e.g., computer-using agents, AI & crypto).
    • Actionable Item:
      • Fill out the survey for next year's conference planning.
  • Innovation in AI Engineering (Conference's Role):
    • First conference to feature an MCP (Model Context Protocol) talk accepted by the MCP team.
    • Official chatbot (Sam Julien, Writer) and voice bot (Quinn & John, Daily; Elizabeth Triken, Vapi) using MCP.
  • Speaker's Historical Talks on AI Engineering Evolution:
    • 2023: Three types of AI engineers.
    • 2024: AI engineering becoming multidisciplinary (World's Fair with multiple tracks).
    • 2025 (New York): Evolution and focus on agent engineering.
    • June 2025 (Current): Focus on the "standard model."
  • Current State of AI Engineering:
    • Shift from "low status" GPT wrappers to a field where "all of you are rich."
    • Consistent lesson: "don't overcomplicate things" (Anthropic, Erik Schluntz, Greg Brockman, Amp Code).
    • Still "very early fuel" – "a lot of alpha to mind" for AI engineers.

🔑 The Search for a "Standard Model" in AI Engineering

  • Analogy to Physics:
    • Compares current moment to the 1927 Solvay Conference (Einstein, Curie, etc.).
    • A specific period (1940s-1970s) in physics defined the "Standard Model" that lasted 50+ years.
    • Thesis: This is the time to set out the basic ideas for AI engineering.
  • Existing Standard Models in Traditional Engineering:
    • ETL (Extract, Transform, Load)
    • MVC (Model-View-Controller)
    • CRUD (Create, Read, Update, Delete)
    • MapReduce
  • Challenge with Current AI Patterns:
    • RAG (Retrieval-Augmented Generation) is prevalent but questioned ("RAG is dead," "long context killed RAG," "fine-tuning kills RAG").
    • RAG is "definitely not the full answer."
  • Core Question: What other standard models might emerge to guide thinking in AI engineering?

🔑 Candidate Standard Models in AI Engineering

  • 1. The LLM OS:
    • Earliest standard model, from Karpathy (2023).
    • Updated for 2025: Includes multimodality, standard set of tools, and MCP as default protocol.
  • 2. The LLM SDLC (Software Development Life Cycle):
    • Two versions: one with intersecting tooling concerns.
    • Key Insight (from Ankur Goyal of Braintrust):
      • Early parts (LLMs, monitoring, RAG) are increasingly commodity (free tier).
      • Real money/value comes from "real hard engineering work": evals, security, orchestration.
      • Implication: Pushing AI engineering "from demos into production."
  • 3. Building Effective Agents:
    • "Received wisdom" from Anthropic (Barry, co-author).
    • OpenAI's different definition, pushing the "Swarm" concept (Dominik's Agents SDK).
    • Speaker's Approach (Descriptive, Top-Down Model):
      • Based on words people use to describe agents: intent, control flow, memory, planning, tool use.
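The five descriptive facets listed above can be captured as a simple record. This is a hypothetical sketch only; the field and variable names are invented for illustration and come from no real SDK:

```python
from dataclasses import dataclass, field

# Hypothetical record of the five facets people use to describe agents:
# intent, control flow, memory, planning, and tool use.
@dataclass
class AgentDescription:
    intent: str                                        # what the user asked for
    control_flow: str                                  # e.g. "loop", "dag", "single-shot"
    memory: list[str] = field(default_factory=list)    # prior context carried forward
    plan: list[str] = field(default_factory=list)      # ordered steps to execute
    tools: list[str] = field(default_factory=list)     # callable tools available

# Illustrative instance (not the speaker's actual configuration):
news_bot = AgentDescription(
    intent="summarize today's AI discussions",
    control_flow="single-shot pipeline",
    tools=["scrape_discord", "summarize", "format_email"],
)
```

Framing an "agent" this way makes the later point concrete: two systems can share all five facets and still differ only in how much human input each unit of output requires.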

🔑 Redefining Value: Human Input vs. AI Output

  • Critique of "Agent" Terminology:
    • Speaker's AI News tool is not considered an "agent" by PyTorch lead Soumith Chintala.
    • Assertion: Focus on "delivering value instead of arguable terminology."
  • Mental Model: Human Input vs. Valuable AI Output Ratio:
    • More interesting than debating "workflow vs. agent."
    • Examples of Ratio Evolution:
      • Copilot Era: Debounced input (a few characters) -> autocomplete (roughly 1:1).
      • ChatGPT: One query -> one response (1:1).
      • Reasoning Models: ~1:10 (more AI output per unit of human input).
      • New Agents / Deep Research: Even higher ratio (e.g., NotebookLM by Raiza Martin).
      • Ambient Agents (Zero to One): No human input -> interesting AI output.
    • Conclusion: This ratio is a more useful discussion point.
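The ratio can be made concrete by counting tokens on each side. A minimal sketch, with illustrative token counts that are not from the talk:

```python
def leverage_ratio(human_tokens: int, ai_tokens: int) -> float:
    """Proxy for the talk's human-input : valuable-AI-output ratio."""
    if human_tokens == 0:
        return float("inf")  # ambient agents: output with zero human input
    return ai_tokens / human_tokens

# Illustrative numbers only:
copilot   = leverage_ratio(100, 110)    # autocomplete era: roughly 1:1
reasoning = leverage_ratio(100, 1000)   # reasoning models: ~1:10
ambient   = leverage_ratio(0, 5000)     # ambient agents: zero to one
```

Comparing systems by this number sidesteps the "workflow vs. agent" debate entirely: an autocomplete and an ambient agent sit on the same axis, just at opposite ends.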

🔑 Speaker's Personal Model: SPADE (for AI-Intensive Applications)

  • AI News as a Case Study:
    • "Bunch of scripts in a trench coat."
    • Process repeated three times (Discord, Reddit, Twitter scrapes).
    • Workflow: Scrape -> Plan -> Recursively Summarize -> Format -> Evaluate.
  • Generalizing to AI-Intensive Applications (Thousands of AI Calls):
    • S - Sync: Ingest data.
    • P - Plan: Determine next steps/actions.
    • A - Analyze: Parallel process, reduce many to one.
    • D - Deliver: Present content/output to user.
    • E - Evaluate: Assess performance and improve.
  • Integrated AI Engineering Elements:
    • Process into knowledge graph.
    • Turn into structured outputs.
    • Generate code (e.g., ChatGPT with Canvas, Claude with Artifacts).
  • Speaker's Current Mental Model: This SPADE framework is what he's actively developing.
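The five SPADE stages above can be sketched as a pipeline skeleton. Every stage is a stub so the shape is visible; the function bodies and sample data are this sketch's own inventions, not the AI News implementation:

```python
def sync() -> list[str]:
    """S - Sync: ingest raw data from sources."""
    return ["discord dump", "reddit dump", "twitter dump"]

def plan(items: list[str]) -> list[str]:
    """P - Plan: decide which items are worth processing next."""
    return [item for item in items if item]

def analyze(items: list[str]) -> str:
    """A - Analyze: process in parallel, reduce many items to one."""
    return " | ".join(items)   # stand-in for parallel LLM calls + a reduce step

def deliver(digest: str) -> str:
    """D - Deliver: format the output for the reader."""
    return f"# Daily digest\n{digest}"

def evaluate(output: str) -> bool:
    """E - Evaluate: check the output so the loop can improve."""
    return output.startswith("# Daily digest")

def spade() -> str:
    output = deliver(analyze(plan(sync())))
    assert evaluate(output)
    return output
```

In a real system each stub would fan out to many model calls, which is what makes the application "AI-intensive" (thousands of calls) rather than a single prompt.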
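The "recursively summarize" step from the AI News workflow is a map-reduce applied repeatedly: summarize each chunk, merge summaries in pairs, and repeat until one summary remains. A minimal sketch, where `summarize` is a stand-in for an LLM call (here it just truncates so the sketch runs offline):

```python
def summarize(text: str, limit: int = 200) -> str:
    """Stand-in for an LLM summarization call; replace with a real model."""
    return text[:limit]

def recursive_summarize(chunks: list[str], limit: int = 200) -> str:
    # Map: summarize every chunk independently (parallelizable).
    summaries = [summarize(chunk, limit) for chunk in chunks]
    if len(summaries) == 1:
        return summaries[0]
    # Reduce: merge summaries in pairs, then recurse on the merged pairs.
    merged = [" ".join(summaries[i:i + 2]) for i in range(0, len(summaries), 2)]
    return recursive_summarize(merged, limit)
```

Because the number of chunks roughly halves each round, the recursion terminates, and the final output is always within one summary's length budget.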
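The "turn into structured outputs" element usually means constraining the model to emit JSON and validating it into a typed record. A minimal sketch, with an invented schema (the `Story` fields and sample response are illustrative, not from AI News):

```python
import json
from dataclasses import dataclass

@dataclass
class Story:
    title: str
    url: str
    summary: str

def parse_story(llm_response: str) -> Story:
    """Validate a model's JSON reply into a typed record; raises on bad output."""
    data = json.loads(llm_response)
    return Story(title=data["title"], url=data["url"], summary=data["summary"])

# Illustrative model response:
raw = '{"title": "MCP ships", "url": "https://example.com", "summary": "..."}'
story = parse_story(raw)
```

Failing loudly on malformed output here is what lets the Evaluate stage catch regressions instead of silently delivering garbage.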

🔑 Call to Action

  • Overarching Goal: Think about what the new standard model for AI engineering is.
  • Purpose: What can everyone use to improve their applications?
  • Ultimate Aim: Build products that people want to use.