Discussion about this post

Varun Godbole:

What I really like about this post is that this conceptual rhyming seems very aligned with what LLMs are naturally good at, namely exploring various sorts of associations, rather than with the things LLMs are naturally bad at, such as carefully generating code tokens.

I wonder if we could push this further, though: some sort of UI that collects preference data on which conceptual rhymes you found most salient. These then become few-shot examples in future generations, and each future generation gives you both the rhymes the model thinks are most aligned with you and the ones it thinks are least aligned with you. WDYT?
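
Roughly something like the sketch below, which is purely illustrative: the data structure, prompt wording, and example rhymes are all made up, not anything from the post.

```python
# Illustrative sketch only: build a few-shot prompt from collected
# preference data on which conceptual rhymes the user found salient.
from dataclasses import dataclass


@dataclass
class RhymeFeedback:
    rhyme: str      # a conceptual rhyme the model previously proposed
    salient: bool   # whether the user marked it as salient


def build_prompt(topic: str, feedback: list[RhymeFeedback]) -> str:
    liked = [f.rhyme for f in feedback if f.salient]
    disliked = [f.rhyme for f in feedback if not f.salient]
    lines = [f"Topic: {topic}", "Rhymes the user found salient:"]
    lines += [f"- {r}" for r in liked] or ["- (none yet)"]
    lines += ["Rhymes the user did not find salient:"]
    lines += [f"- {r}" for r in disliked] or ["- (none yet)"]
    lines += [
        "Propose new conceptual rhymes, split into two lists:",
        "1. Rhymes most aligned with the user's taste.",
        "2. Rhymes least aligned with the user's taste.",
    ]
    return "\n".join(lines)


if __name__ == "__main__":
    history = [
        RhymeFeedback("attention as a spotlight", salient=True),
        RhymeFeedback("gradient descent as erosion", salient=False),
    ]
    print(build_prompt("how LLMs explore associations", history))
```

The resulting prompt would be passed to whatever model you already use for generating rhymes; the key idea is just that the UI's thumbs-up/thumbs-down history feeds back in as few-shot context.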

Varun Godbole:

As of this comment (Aug 26 2025), what is your current thinking model of choice for tasks like this?

