{
  "entries": [
    {
      "id": "big-suno-explorer-release-coming-this-week",
      "date": "2026-02-27",
      "type": "post",
      "title": "Big Suno Explorer release coming in the next few weeks!",
      "tags": [
        "suno explorer",
        "pre-release",
        "announcement"
      ],
      "readingTime": 1,
      "summary": "Early access instructions for the upcoming Suno Explorer release — smart search, playlist sync, standalone web app, and optional cloud sync.",
      "markdown": "Early access instructions below.\n\nIncluding but not limited to:\n- Greatly enhanced smart search\n- Filtering for sounds/samples\n- Improvements to the lyrics explorer\n- Plus a new prompts explorer\n- Two-way playlist sync and quick playlist creation\n- Fully featured standalone web/mobile experience (still requires extension for indexing)\n- Optional paid tier for cloud sync (free users can still export/import data backups manually)\n\nFor early access to the standalone web/mobile app, export your data backup from the extension and import it at suno-explorer-dev.onrender.com/app\n\nHuge thank you to anyone who has supported me, reviewed the extension, commented on Reddit, or even just used the app. Let's hope Suno favors my efforts and doesn't shut it down!",
      "content": "<p>Early access instructions below.</p>\n<p>Including but not limited to:</p>\n<ul><li>Greatly enhanced smart search</li>\n<li>Filtering for sounds/samples</li>\n<li>Improvements to the lyrics explorer</li>\n<li>Plus a new prompts explorer</li>\n<li>Two-way playlist sync and quick playlist creation</li>\n<li>Fully featured standalone web/mobile experience (still requires extension for indexing)</li>\n<li>Optional paid tier for cloud sync (free users can still export/import data backups manually)</li></ul>\n<p>For early access to the standalone web/mobile app, export your data backup from the extension and import it at suno-explorer-dev.onrender.com/app</p>\n<p>Huge thank you to anyone who has supported me, reviewed the extension, commented on Reddit, or even just used the app. Let's hope Suno favors my efforts and doesn't shut it down!</p>",
      "links": {
        "post": "post.html#big-suno-explorer-release-coming-this-week"
      }
    },
    {
      "id": "enclave-live",
      "date": "2026-02-27",
      "type": "update",
      "title": "A preview of Enclave is live at enclave.to",
      "content": "Privacy-first health tracking suite now deployed. Three apps, one encryption core, zero servers. Your data stays yours.",
      "tags": [
        "enclave",
        "release"
      ],
      "links": {
        "project": "#enclave"
      }
    },
    {
      "id": "redstone-tools-domain",
      "date": "2026-02-27",
      "type": "update",
      "title": "Acquired redstone.tools",
      "content": "Redstone Companion has a home. The domain is too perfect not to grab.",
      "tags": [
        "redstone-companion"
      ],
      "links": {
        "project": "#redstone-companion"
      }
    },
    {
      "id": "llm-music-phrasing",
      "date": "2023-08-15",
      "type": "post",
      "title": "Exploring 11/4 phrasing with ChatGPT",
      "summary": "An early experiment using LLMs for music theory exploration and custom guitar annotations.",
      "tags": [
        "music",
        "ai"
      ],
      "readingTime": 4,
      "markdown": "Odd time signatures have always fascinated me. As a guitarist drawn to progressive metal, I spend a lot of time trying to internalize patterns that don't fit neatly into 4/4. 11/4 is particularly tricky—it's long enough to feel disorienting but short enough that you can't just count it out.\n\n## The Problem with 11\n\nMost odd times subdivide into comfortable chunks. 7/4 becomes 4+3 or 3+4. 5/4 is just 3+2. But 11 is awkward. You could do 4+4+3, or 3+3+3+2, or 6+5—each feels different, and none feel natural without practice.\n\nI wanted to find phrasing patterns that would help me internalize 11/4 on guitar. Specifically, I wanted:\n\n- Multiple subdivision approaches to try\n- Accent patterns that emphasize each subdivision\n- Custom annotations I could use while practicing\n\n## Using ChatGPT as a Collaborator\n\nThis was early 2023, when ChatGPT was still novel. I approached it as a brainstorming partner rather than an authority. The conversation went something like:\n\n> \"I want to practice 11/4 on guitar. Can you suggest different ways to subdivide 11 beats and describe how each would feel rhythmically?\"\n\nWhat I got back was surprisingly useful. Not because the model \"understood\" rhythm in any deep way, but because it could systematically enumerate possibilities and describe them in ways that helped me think.\n\n### The Subdivisions\n\nWe explored several patterns:\n\n- **4+4+3** — The \"almost 12\" feel. Two solid phrases, then a truncated third.\n- **3+3+3+2** — Triplet-ish with a short tail. Feels more circular.\n- **5+6** — Asymmetric halves. The 5 rushes into the longer 6.\n- **6+5** — Opposite feel. The 6 establishes, then the 5 compresses.\n- **4+3+4** — Palindromic. Has a nice symmetry to it.\n\n## The Annotation System\n\nI asked ChatGPT to help me design a notation for marking these patterns on tablature. We ended up with a simple system:\n\n```\n| 1 2 3 4 | 1 2 3 4 | 1 2 3 |   ← 4+4+3\n| 1 2 3 | 1 2 3 | 1 2 3 | 1 2 |   ← 3+3+3+2\n| 1 2 3 4 5 | 1 2 3 4 5 6 |       ← 5+6\n```\n\nSimple, but it made a real difference when practicing. Being able to see the groupings while playing helped my hands internalize what my ears were struggling with.\n\n## What I Learned\n\nThe exercise taught me two things:\n\n**First**, LLMs are useful collaborators for systematic exploration. They won't have creative insights, but they'll patiently enumerate options and help you organize your thinking.\n\n**Second**, the value wasn't in finding the \"right\" subdivision—it was in trying all of them. Each pattern activates different musical instincts. 4+4+3 feels rock-adjacent. 3+3+3+2 feels more jazz or fusion. Knowing multiple approaches means I can match the feel to the context.\n\n---\n\nThis experiment was one of the sparks behind [Co-Composer](/#co-composer). If exploring odd rhythms through conversation was useful, what about a visual tool that let you build and hear polyrhythmic patterns in real-time?",
      "content": "<p>Odd time signatures have always fascinated me. As a guitarist drawn to progressive metal, I spend a lot of time trying to internalize patterns that don't fit neatly into 4/4. 11/4 is particularly tricky&mdash;it's long enough to feel disorienting but short enough that you can't just count it out.</p>\n<h2>The Problem with 11</h2>\n<p>Most odd times subdivide into comfortable chunks. 7/4 becomes 4+3 or 3+4. 5/4 is just 3+2. But 11 is awkward. You could do 4+4+3, or 3+3+3+2, or 6+5&mdash;each feels different, and none feel natural without practice.</p>\n<p>I wanted to find phrasing patterns that would help me internalize 11/4 on guitar. Specifically, I wanted:</p>\n<ul><li>Multiple subdivision approaches to try</li>\n<li>Accent patterns that emphasize each subdivision</li>\n<li>Custom annotations I could use while practicing</li></ul>\n<h2>Using ChatGPT as a Collaborator</h2>\n<p>This was early 2023, when ChatGPT was still novel. I approached it as a brainstorming partner rather than an authority. The conversation went something like:</p>\n<blockquote>\"I want to practice 11/4 on guitar. Can you suggest different ways to subdivide 11 beats and describe how each would feel rhythmically?\"</blockquote>\n<p>What I got back was surprisingly useful. Not because the model \"understood\" rhythm in any deep way, but because it could systematically enumerate possibilities and describe them in ways that helped me think.</p>\n<h3>The Subdivisions</h3>\n<p>We explored several patterns:</p>\n<ul><li><strong>4+4+3</strong> &mdash; The \"almost 12\" feel. Two solid phrases, then a truncated third.</li>\n<li><strong>3+3+3+2</strong> &mdash; Triplet-ish with a short tail. Feels more circular.</li>\n<li><strong>5+6</strong> &mdash; Asymmetric halves. The 5 rushes into the longer 6.</li>\n<li><strong>6+5</strong> &mdash; Opposite feel. The 6 establishes, then the 5 compresses.</li>\n<li><strong>4+3+4</strong> &mdash; Palindromic. Has a nice symmetry to it.</li></ul>\n<h2>The Annotation System</h2>\n<p>I asked ChatGPT to help me design a notation for marking these patterns on tablature. We ended up with a simple system:</p>\n<pre><code>| 1 2 3 4 | 1 2 3 4 | 1 2 3 |   &larr; 4+4+3\n| 1 2 3 | 1 2 3 | 1 2 3 | 1 2 |   &larr; 3+3+3+2\n| 1 2 3 4 5 | 1 2 3 4 5 6 |       &larr; 5+6</code></pre>\n<p>Simple, but it made a real difference when practicing. Being able to see the groupings while playing helped my hands internalize what my ears were struggling with.</p>\n<h2>What I Learned</h2>\n<p>The exercise taught me two things:</p>\n<p><strong>First</strong>, LLMs are useful collaborators for systematic exploration. They won't have creative insights, but they'll patiently enumerate options and help you organize your thinking.</p>\n<p><strong>Second</strong>, the value wasn't in finding the \"right\" subdivision&mdash;it was in trying all of them. Each pattern activates different musical instincts. 4+4+3 feels rock-adjacent. 3+3+3+2 feels more jazz or fusion. Knowing multiple approaches means I can match the feel to the context.</p>\n<hr />\n<p>This experiment was one of the sparks behind <a href=\"/#co-composer\">Co-Composer</a>. If exploring odd rhythms through conversation was useful, what about a visual tool that let you build and hear polyrhythmic patterns in real-time?</p>",
      "links": {
        "post": "post.html#llm-music-phrasing"
      }
    },
    {
      "id": "chatgpt-memory",
      "date": "2023-06-20",
      "updated": "2026-02-27",
      "type": "post",
      "title": "Understanding memory in ChatGPT",
      "summary": "Bridging the AI gap: how context windows and memory limitations shape LLM interactions.",
      "tags": [
        "ai",
        "explainer"
      ],
      "readingTime": 3,
      "markdown": "When people first start using ChatGPT, there's often confusion about what it \"remembers.\" Does it learn from our conversations? Does it know what we talked about yesterday? The answer is more nuanced than yes or no, and understanding it makes you a more effective user.\n\n## The Mental Model Problem\n\nHumans have persistent memory. We remember yesterday's conversations, last week's meetings, childhood experiences. When we talk to ChatGPT, we instinctively expect the same.\n\nBut ChatGPT doesn't have memory in this sense. It has a **context window**—a fixed-size buffer that holds the current conversation. Everything the model \"knows\" about your interaction exists within this window. When it fills up, older content gets pushed out.\n\n## What This Means Practically\n\nIn a long conversation, ChatGPT will eventually \"forget\" what you discussed at the beginning. This isn't a bug—it's a fundamental constraint of the architecture.\n\nSigns you've hit context limits:\n\n- The model contradicts something it said earlier\n- It asks for information you already provided\n- Responses become less coherent with prior context\n- It \"forgets\" established conventions or formats\n\n## Working Within the Constraints\n\nOnce you understand the model, you can work with it rather than against it:\n\n### Strategic Summarization\n\nPeriodically ask the model to summarize the key points of your conversation. Then start a new conversation with that summary as context. You're essentially doing manual memory management.\n\n### Explicit Context Injection\n\nDon't assume the model remembers. If you're continuing a previous line of work, re-state the relevant context at the start. \"We're working on X. The current state is Y. The goal is Z.\"\n\n### Chunked Conversations\n\nBreak large tasks into smaller, self-contained conversations. Each conversation has a focused goal and doesn't rely on accumulated context from hours of prior work.\n\n## The Bigger Picture\n\nUnderstanding context windows isn't just about ChatGPT. It's a window into how these systems work. LLMs don't \"think\" between conversations. They don't \"learn\" from your interactions (unless explicitly fine-tuned). Each conversation starts fresh.\n\nThis has implications for trust, privacy, and how we should design AI-assisted workflows. The model isn't building a profile of you. It's not secretly remembering sensitive information. But it's also not learning your preferences over time.\n\n> **Note (2026):** This was written in 2023. Since then, persistent memory features have been added to ChatGPT. But the core concepts about context windows remain relevant—memory features are built on top of this architecture, not replacements for it.\n\n---\n\nUnderstanding these constraints shaped how I approached [my music theory experiments with ChatGPT](post.html#llm-music-phrasing). Knowing the model wouldn't \"remember\" made me more deliberate about structuring conversations and extracting useful outputs before context was lost.",
      "content": "<p>When people first start using ChatGPT, there's often confusion about what it \"remembers.\" Does it learn from our conversations? Does it know what we talked about yesterday? The answer is more nuanced than yes or no, and understanding it makes you a more effective user.</p>\n<h2>The Mental Model Problem</h2>\n<p>Humans have persistent memory. We remember yesterday's conversations, last week's meetings, childhood experiences. When we talk to ChatGPT, we instinctively expect the same.</p>\n<p>But ChatGPT doesn't have memory in this sense. It has a <strong>context window</strong>&mdash;a fixed-size buffer that holds the current conversation. Everything the model \"knows\" about your interaction exists within this window. When it fills up, older content gets pushed out.</p>\n<h2>What This Means Practically</h2>\n<p>In a long conversation, ChatGPT will eventually \"forget\" what you discussed at the beginning. This isn't a bug&mdash;it's a fundamental constraint of the architecture.</p>\n<p>Signs you've hit context limits:</p>\n<ul><li>The model contradicts something it said earlier</li>\n<li>It asks for information you already provided</li>\n<li>Responses become less coherent with prior context</li>\n<li>It \"forgets\" established conventions or formats</li></ul>\n<h2>Working Within the Constraints</h2>\n<p>Once you understand the model, you can work with it rather than against it:</p>\n<h3>Strategic Summarization</h3>\n<p>Periodically ask the model to summarize the key points of your conversation. Then start a new conversation with that summary as context. You're essentially doing manual memory management.</p>\n<h3>Explicit Context Injection</h3>\n<p>Don't assume the model remembers. If you're continuing a previous line of work, re-state the relevant context at the start. \"We're working on X. The current state is Y. The goal is Z.\"</p>\n<h3>Chunked Conversations</h3>\n<p>Break large tasks into smaller, self-contained conversations. Each conversation has a focused goal and doesn't rely on accumulated context from hours of prior work.</p>\n<h2>The Bigger Picture</h2>\n<p>Understanding context windows isn't just about ChatGPT. It's a window into how these systems work. LLMs don't \"think\" between conversations. They don't \"learn\" from your interactions (unless explicitly fine-tuned). Each conversation starts fresh.</p>\n<p>This has implications for trust, privacy, and how we should design AI-assisted workflows. The model isn't building a profile of you. It's not secretly remembering sensitive information. But it's also not learning your preferences over time.</p>\n<blockquote><strong>Note (2026):</strong> This was written in 2023. Since then, persistent memory features have been added to ChatGPT. But the core concepts about context windows remain relevant&mdash;memory features are built on top of this architecture, not replacements for it.</blockquote>\n<hr />\n<p>Understanding these constraints shaped how I approached <a href=\"post.html#llm-music-phrasing\">my music theory experiments with ChatGPT</a>. Knowing the model wouldn't \"remember\" made me more deliberate about structuring conversations and extracting useful outputs before context was lost.</p>",
      "links": {
        "post": "post.html#chatgpt-memory"
      }
    },
    {
      "id": "publicized-primitive-gaming-com-for-general-use",
      "date": "2026-03-20",
      "type": "update",
      "title": "Publicized primitive-gaming.com for general use",
      "tags": [
        "release",
        "games",
        "workbench"
      ],
      "readingTime": 1,
      "content": "",
      "links": {
        "external": "https://primitive-gaming.com"
      }
    },
    {
      "id": "open-sourced-alcove-and-started-on-a-chrome-extension",
      "date": "2026-03-05",
      "type": "update",
      "title": "Open-sourced alcove and started on a Chrome extension!",
      "tags": [
        "open-source",
        "public",
        "dev"
      ],
      "readingTime": 1,
      "content": "",
      "links": {
        "external": "https://github.com/dizruptr/alcove"
      }
    }
  ]
}