7 Comments
Varun Godbole:

What I really like about this post is that this conceptual rhyming seems very aligned with what LLMs are naturally good at: exploring various sorts of associations. It avoids leaning on what LLMs are naturally bad at, namely carefully generating code tokens.

I wonder if we could push this further, though: some sort of UI that collects preference data on which conceptual rhymes you found most salient. These then become few-shot examples in future generations, and each future generation gives you both the rhymes the model thinks are most aligned with you and the ones it thinks are least aligned. WDYT?

Jordan Rubin:

Thanks!

I think that good rhymes are likely to have a few characteristics:

1) real, deep structural similarity

2) familiar domain (to the user)

3) surprising implications

(We could dimensionalize to get more elements of good rhymes if we wanted)

I think that 1 and 3 are likely to improve most with raw model intelligence, and secondarily with a few good few-shot examples (which can be included in rhyme.md).

2 is the only one that really depends on the user’s preferences and history. And my experience is that (at least for ChatGPT) the existing memory features suffice for my use case.

Varun Godbole:

As of this comment (Aug 26 2025), what is your current thinking model of choice for tasks like this?

Jordan Rubin:

I almost exclusively use ChatGPT 5 Thinking. Before that, o3.

Max Heinritz:

I had to coax the free version of ChatGPT into rereading the prompt, but eventually it got going!

https://chatgpt.com/share/689d41fd-a5d4-8012-9801-4cc846e0249f

Jordan Rubin:

Here’s my version of the same!

https://chatgpt.com/s/t_689d4f9280b881919865e15fab779dd9

Jordan Rubin:

Ah, sometimes you need to enable search mode to read external links.
