makeyourAI.work the machine teaches the human

Week 3

Week 3: Talking to Models Properly

LLM foundations, structured outputs, prompt architecture, and secure API usage.

>

This lesson defines the operational shape of an LLM feature: request assembly, model invocation, output validation, persistence, and observability.

Checkpoint

LLM Interface Gate

This week ends with a gated checkpoint. You progress by shipping a real artifact, not by reading passively.

Deliverable

A prompt contract and structured-output integration design.

Each week leaves behind portfolio evidence that compounds into the final SaaS and its operating narrative.

Week Thesis

What the machine expects from you.

This lesson defines the operational shape of an LLM feature: request assembly, model invocation, output validation, persistence, and observability.

Teams fail when they treat the model call as the product. In reality, the product lives in the orchestration around the model: retries, output checks, storage, user messaging, and fallback behavior.

An LLM API is an unreliable-but-useful subsystem. Design around it the way you would around any external dependency: narrow interface, explicit validation, good telemetry, graceful degradation.

This lesson reframes prompting as interface design. You are not “talking nicely” to the model; you are constraining a probabilistic component into a usable contract.

Lesson Stack

Three dense lessons, one enforced deliverable.

Lesson Preview

How LLM APIs Fit Into Real Products

Models are one service in a larger system, not the whole app.

This lesson defines the operational shape of an LLM feature: request assembly, model invocation, output validation, persistence, and observability.

Teams fail when they treat the model call as the product. In reality, the product lives in the orchestration around the model: retries, output checks, storage, user messaging, and fallback behavior.

An LLM API is an unreliable-but-useful subsystem. Design around it the way you would around any external dependency: narrow interface, explicit validation, good telemetry, graceful degradation.

Lesson Preview

Prompting as Interface Design

Good prompts define boundaries, expected outputs, and decision rules.

This lesson reframes prompting as interface design. You are not “talking nicely” to the model; you are constraining a probabilistic component into a usable contract.

Vague prompts create vague systems. In production that means hidden assumptions, unstable output formats, and higher downstream validation cost.

A good prompt is closer to an API contract than a chat message. It defines role, goal, input boundary, forbidden behavior, output shape, and how ambiguity should be handled.

Lesson Preview

Prompt Injection, Secrets, and AI Transparency

Every LLM feature is also a security and trust problem.

This lesson is about the risks that appear the moment a model consumes untrusted input and influences user-facing behavior.

An unguarded LLM feature can leak instructions, expose secrets, follow hostile context, or mislead users about certainty and source. That is not a prompt problem. That is a product risk problem.

Assume any input channel can try to steer the model away from its intended role. Build layered defenses: prompt structure, context separation, tool restrictions, redaction, and user transparency.

Portfolio Artifact

What survives the week.

spec

Prompt Contract

A production-oriented contract for role, input, constraints, output shape, and validation expectations.