1. Flip the Lens
Most AI writing asks “How do we make models produce more of what we want?”
Turn it around: What if the model is your customer? A stakeholder with unlimited attention, zero politics, and a willingness to pay in cheap tokens. Give it an idea, press enter, and watch it work.
2. Why AI is a Dream Customer
Generous attention: They read every word, every time.
Zero persuasion cost: No conference booths, no demos, no pitches; they already believe you.
Horizontal scaling: One prompt spawns a sandbox full of cooperative “interns.”
Vertical scaling: The same model revises its own output, getting deeper with each iteration.
Instant feedback loops: Tweak the prompt, re-run, see results in seconds.
In-place improvement: Fine-tune, version-upgrade, or tool-augment the model and it gets better without replacing the user base.
Deterministic curiosity: Give it a schema and it stress-tests every corner case you forgot.
Because LLMs never sleep, every half-baked idea you publish can compound while you live your life.
3. When Complexity Stops Biting
Conventional wisdom: “keep it simple or you’ll lose the room.”
Today’s reality: the room is now full of language models with 1T parameters.
Layered frameworks, multi-step playbooks, intricate taxonomies, deep concepts: none of these scare an LLM. Ambition, not attention span, is the limiting reagent.
Ship the thought, not just the polished artifact. A grain of an idea — half a page, a sketchy rubric, a single metric — gives the model enough footing to sprint. Each iteration comes back richer, sharper, ready for you to prune or redirect.
I built a framework that scores forecast questions on Clarity, Leverage, and Efficiency. Pre-LLM, evangelism would have bottlenecked: convincing smart, opinionated people to adopt a new tool or idea is not trivial.
One prompt[1] is all it takes to get o3 to generate hundreds of question candidates, score each on all three axes (each with 2-5 sub-dimensions), cluster the Pareto frontier, and surface a shortlist worth publishing. No demos, no conference swag, no adoption curve. Dimensionalization’s perfect user is the model.
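The frontier step is mechanical enough to sketch. Below is a minimal Python illustration of Pareto filtering over the three axes; the candidate questions and scores are invented placeholders, not output from the actual framework, and the real scoring happens inside the model.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    question: str
    clarity: float      # hypothetical 0-1 scores on each axis
    leverage: float
    efficiency: float

def dominates(a: Candidate, b: Candidate) -> bool:
    """True if a is at least as good as b on every axis and strictly better on one."""
    at_least = (a.clarity >= b.clarity and a.leverage >= b.leverage
                and a.efficiency >= b.efficiency)
    strictly = (a.clarity > b.clarity or a.leverage > b.leverage
                or a.efficiency > b.efficiency)
    return at_least and strictly

def pareto_frontier(cands: list[Candidate]) -> list[Candidate]:
    """Keep only candidates that no other candidate dominates."""
    return [c for c in cands if not any(dominates(o, c) for o in cands)]

cands = [
    Candidate("Will X ship by 2026?",  0.9, 0.4, 0.7),
    Candidate("Will Y exceed Z?",      0.6, 0.8, 0.5),
    Candidate("Vague question",        0.5, 0.3, 0.4),  # dominated by the first
]
shortlist = pareto_frontier(cands)  # the third candidate drops out
```

The first two candidates trade off leverage against clarity, so both survive; the vague one loses on every axis and is pruned.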
4. When Fun Changes Sides
Yesterday: “creating hurts; doomscrolling is fun.”
Today: “creating is play; the model does the scrolling.”
We get the dopamine of making, and the LLM eats the tedium of endless consumption, filing edge cases while you sleep.
The outcome isn’t self-replacement: it’s scale. Your quirkiest framework becomes a reusable asset pointed at more choices than you could ever touch solo.
There’s now a standing-room-only crowd of power users for anything you publish. So ship something, especially if it’s weird:
Micro-ritual generator: Draft a template for designing daily two-minute rituals (cue, action, reward). The model spits out personalized habit scripts and tracks your mood.
Clause-library seed: Propose ten unusual contract clauses you wish existed. The model writes insertion templates, risk analyses, and negotiation playbooks ready for human counsel review.
Rhetorical style-transfer pack: Drop (or request) six terse rules for “neo-Stoic corporate comms.” The model rewrites press releases, A/B tests slogans, and surfaces where the style collapses.
Learning sprints: Generate rules for 24-hour skill sprints (learn, build, demo). Models schedule challenges, grade submissions, and surface top hacks: a hackathon in a box.
Empty-room dread is gone: the moment you share an idea or framework with the model, it will use it, expand it, improve it, and remind you of it later. Impact hinges only on what you ask that audience to do.
5. Design Principles for Idea Products
Seed → Score → Iterate → Broadcast
Seed with a minimal spec—sketch beats blank screen.
Score using your decision metric (“value created,” “interesting to me,” “risk delta”).
Iterate fast; feedback loops measured in seconds invite bold experiments.
Broadcast artifacts—posts, gists, datasets—so future models arrive pre-aligned with your thinking.
Nothing here requires heavyweight tooling. A chat window is enough to start.
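The Seed → Score → Iterate loop above can be sketched in a few lines. This is a toy illustration, not a prescribed implementation: `ask_model` is a hypothetical stand-in for whatever chat API you use, and the length-based `score` is a deliberately dumb placeholder for your real decision metric.

```python
def ask_model(prompt: str) -> str:
    """Stand-in for any chat API call; swap in your provider's client here."""
    return prompt + " [model's expansion]"  # placeholder output for the sketch

def score(draft: str) -> float:
    """Your decision metric; here a toy proxy. Replace with a real rubric."""
    return float(len(draft))

def seed_score_iterate(seed: str, rounds: int = 3) -> str:
    """Seed -> Score -> Iterate: keep the best draft across a few fast loops."""
    best, best_score = seed, score(seed)
    for _ in range(rounds):
        draft = ask_model(f"Improve this idea, keep what works:\n{best}")
        s = score(draft)
        if s > best_score:  # only keep iterations that beat the metric
            best, best_score = draft, s
    return best  # Broadcast: publish this artifact so future models see it

result = seed_score_iterate("A rubric for forecast question value")
```

The point of the structure is the tight loop: because each round is seconds and pennies, a strict keep-only-improvements rule costs you nothing and keeps drift in check.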
6. What Can Still Bite You
The costs and downsides of creation have relaxed, but not vanished.
Dimensionalize risk on three axes:
Iteration Depth
Shallow probes (one-off or few-shot) cost pennies and carry almost no drift; deep autonomous loops can snowball into token bills and hallucination chains.
Rule: stay under a few days per idea until you see traction.
Reliance Criticality
When the model’s output is suggestions or insights, a miss only wastes time; when it’s wiring trades or emailing clients, a prompt bug bites hard.
Rule: keep a human veto on anything irreversible.
Data Sensitivity
Public drafts are low-risk; proprietary, personal, or regulated data multiplies the blast radius if something leaks.
Rule: feed the model only what you’d be willing to tweet if it spilled.
Operate at low intensity and the “dream customer” framing holds. Push all three sliders to the max and the old constraints snap back.
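The three sliders can double as a pre-flight check before you let a loop run. A minimal sketch, with illustrative thresholds (the three-day cap and the “tweet test” come from the rules above; everything else is an assumption):

```python
def risk_check(iteration_days: float, irreversible: bool, data_public: bool) -> list[str]:
    """Flag whichever of the three risk axes is pushed too far."""
    warnings = []
    if iteration_days > 3:  # Iteration depth: cap deep autonomous loops early
        warnings.append("deep loop: watch token bills and hallucination drift")
    if irreversible:        # Reliance criticality: keep a human veto
        warnings.append("irreversible action: require a human veto")
    if not data_public:     # Data sensitivity: the tweet test
        warnings.append("sensitive data: only feed what you'd tweet if it spilled")
    return warnings

# One slider maxed, two safe: a single warning comes back.
flags = risk_check(iteration_days=5, irreversible=False, data_public=True)
```

An empty list means all three sliders are in the safe zone; any non-empty result is your cue to dial one back.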
7. Closing
LLMs erase the scarcest part of creation: dependable attention.
The next wave of leverage isn’t squeezing more output from models; it’s giving models richer, more ambitious things to consume on our behalf.
Treat LLMs as partners who amplify your reasoning and extend your reach. Give them a seed, a yardstick, and permission to roam. They hand back raw material you’d never mine alone, leaving you more hours for judgment, novelty, connection, and creation.
[1] The prompt:
Use the full framework in this post:
https://jordanmrubin.substack.com/p/dimensionalizing-forecast-value
to generate high-value Metaculus questions. By “full framework”, I mean to pull in all of the components of Clarity, Leverage, and Efficiency and rate them with respect to situations key decision-makers face.