Work Log Continued

I'm doing pretty well at keeping up this habit of documenting progress, and I think it really is useful for self-motivated project management. This was (graphics programming god) Morgan McGuire's intended use for Markdeep. If I go into too much detail it will probably bore most readers, but I'll look into solutions for that at a somewhat lower priority. I was imagining tags as one way to keep that in check: have a tag for "overly detailed work log" posts, and maybe a special mode to filter those out in navigation.

Separately, I also realized that I should probably order post entries in the nav based on each post's creation date in git. That will sometimes collide if I add multiple posts in the same commit, though, so I'll probably still want to employ some reasonable numeric/alpha sort scheme as a tiebreaker... the reason this came up is that this is post 10 in this saga, and it's predictably getting ordered as the 2nd post due to plain lexicographic sorting!
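Since this came up, here's a quick sketch of the tiebreaker part: a comparator using localeCompare's numeric collation, which sorts these filenames the way a human would:

```typescript
// Plain lexicographic sorting puts '10.md' right after '1.deep.md' and before
// '2.deep.md'; numeric collation via localeCompare fixes that.
const names = ['10.md', '1.deep.md', '9.md', '2.deep.md'];
names.sort((a, b) => a.localeCompare(b, undefined, { numeric: true }));
console.log(names); // [ '1.deep.md', '2.deep.md', '9.md', '10.md' ]
```

For the git creation date as the primary sort key, something like `git log --diff-filter=A --format=%at -- <file>` (taking the last line, i.e. the earliest commit that added the file) should yield a timestamp to sort on, with the numeric comparator above as the tiebreaker.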

Speaking of work logging, it's abundantly clear that it has value, but that value can be difficult to access if we always have to do extra manual work to create and maintain the log. This is another area I've been thinking about for some time: beyond shell command history logging and tracking work in a git repo, there is a lot more integration we could do with the state available in a code editor to reconstruct the exact path we took through these tools while working on software. The more data and metadata we capture in a haystack like this, the more detailed insight we can recover from our thought process when we replay it later. So I think it could be beneficial to preserve intermediate code undo history, broken down by git commit as a way to keep its size and complexity in check. I use Neovim now, so this should be pretty doable, and some prior art already exists for parsing Vim undofiles.

So anyway, I was continuing with the approach I started with fs.readdirSync(dir, { recursive: true, withFileTypes: true }) (shown in post 8) and ended up with something that I wasn't feeling great about. It looks like this:

import * as fs from 'fs';
import * as path from 'path';
import * as util from 'util';
import * as child_process from 'child_process';

// categorize blog dirs.

// root blog/ dir contains all blog content

// dirs under here are blog sagas

// loose md files are standalone blog posts.

// navigation and other post specific handling will be done at the post level. Sagas should be viewable and conceptually
// thought of as convenient single page layouts of multiple posts concatenated, not individual posts.

// NOTE I am leaving open for interpretation how to treat nested folders full of markdown files under blog/. It would be
// possible to make saga structure capable of recursion and replicate it in website navigation. However the question is
// more about whether that's genius or stark raving mad.

type File = {
  isFile: boolean;
  path: string;
  parent?: string; // specifies a parent saga that this page (or saga!!?!) may belong to.
  children?: File[]; // specifies children belonging to this saga
};
const dir = 'blog';
const render_pages = {};
const files: File[] = fs.readdirSync(dir, { recursive: true, withFileTypes: true }).map(item => {
  const isFile = item.isFile();
  const ret: File = {
    isFile,
    path: path.join(item.parentPath, item.name),
  };
  if (!isFile) {
    ret.children = [];
  }
  return ret;
});

files.sort((a,b) => a.path.localeCompare(b.path)); // sort order ensures deeper dirs get resolved later for correct logic
console.log('files:', files);

const posts = files.filter(f => f.isFile && isPostFile(f.path));
const specialPages = files.filter(f => !f.isFile && isSpecialPage(f.path));

// establish the dirs that contain posts and record the parents into the posts
const sagas = files.filter(f => !f.isFile).filter(f => {
  const matching_posts = posts.filter(p => p.path.includes(f.path));
  matching_posts.forEach(p => p.parent = f.path);
  return matching_posts.length > 1;
});

// catch the parent sagas of sagas as well where applicable
sagas.forEach(s => {
  sagas.forEach(ss => {
    if (ss !== s && s.path.includes(ss.path)) {
      s.parent = ss.path;
    }
  });
});

const rest = files.filter(f => !posts.includes(f) && !specialPages.includes(f) && !sagas.includes(f));
console.log('posts:', posts);
console.log('specialPages:', specialPages);
console.log('sagas:', sagas);
console.log('rest:', rest);

// specifically a post is a unit of a blog.
function isPostFile(name: string) {
  return /(\.deep)?\.md$/.test(name);
}

function isSpecialPage(path: string) {
  return fs.readdirSync(path).some(e => /^index\.html$/.test(e));
}

// independent posts are ones where no saga in the list was established for its parent
// note parent may still be specified on them
const independent_posts = posts.filter(p => !sagas.some(s => s.path === p.parent));
console.log('independent_posts:', independent_posts);

// just go the other way assigning children lists based on parentness
posts.forEach(p => {
  const parent = sagas.find(s => p.parent === s.path);
  // console.log('pg, parent:', p, parent);
  if (parent) {
    console.assert(!parent.isFile, `parent.isFile`);
    console.assert(parent.children, `thing`);
    parent.children.push(p);
  }
});

// ditto for saga parents, and remove them from the sagas list
const sagasToRemove: File[] = [];
sagas.forEach(s => {
  const parent = sagas.find(ss => s.parent === ss.path);
  if (parent) {
    console.assert(!parent.isFile, `parent.isFile 2`);
    console.assert(parent.children, `thing 2`);
    parent.children.push(s);
    sagasToRemove.push(s);
  }
});

sagasToRemove.forEach(s => {
  sagas.splice(sagas.indexOf(s), 1);
});

const pages = [ ...sagas, ...independent_posts ];

console.log('pages:', util.inspect(pages, {colors: true, depth: 8}));

This still has some serious issues, but the poor design choice of starting from a flat recursive file list is already starting to bite me.
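One concrete example of those issues: the substring check used above to attach posts to sagas (p.path.includes(f.path)) can misfire whenever one directory name is a prefix of another. A hypothetical two-line demonstration:

```typescript
// 'blog/a' is a substring of 'blog/ab/post.md' even though blog/ab is not
// inside blog/a, so includes() reports a bogus parent relationship.
const sagaPath = 'blog/a';
const postPath = 'blog/ab/post.md';

console.log(postPath.includes(sagaPath));         // true (wrong)
console.log(postPath.startsWith(sagaPath + '/')); // false (right)
```

Checking startsWith with a trailing separator avoids the false match.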

I have been using aider with Claude 3.5 Sonnet lately, so I opened this up and asked:

#### i think i overengineered this. you see it calculates for pages a value of pages: [  
####   {  
####     isFile: false,  
####     path: 'blog/blog-engine',  
####     children: [  
####       {  
####         isFile: true,  
####         path: 'blog/blog-engine/1.deep.md',  
####         parent: 'blog/blog-engine'  
####       },  
####       {  
####         isFile: true,  
####         path: 'blog/blog-engine/2.deep.md',  
####         parent: 'blog/blog-engine'  
####       },  
####       {  
####         isFile: true,  
####         path: 'blog/blog-engine/3.deep.md',  
####         parent: 'blog/blog-engine'  
####       },  
####       {  
####         isFile: true,  
####         path: 'blog/blog-engine/4.deep.md',  
####         parent: 'blog/blog-engine'  
####       },  
####       {  
####         isFile: true,  
####         path: 'blog/blog-engine/5.deep.md',  
####         parent: 'blog/blog-engine'  
####       },  
####       {  
####         isFile: true,  
####         path: 'blog/blog-engine/6.deep.md',  
####         parent: 'blog/blog-engine'  
####       },  
####       {  
####         isFile: true,  
####         path: 'blog/blog-engine/7.1.deep.md',  
####         parent: 'blog/blog-engine'  
####       },  
####       {  
####         isFile: true,  
####         path: 'blog/blog-engine/7.md',  
####         parent: 'blog/blog-engine'  
####       },  
####       {  
####         isFile: true,  
####         path: 'blog/blog-engine/8.md',  
####         parent: 'blog/blog-engine'  
####       },  
####       {  
####         isFile: true,  
####         path: 'blog/blog-engine/9.md',  
####         parent: 'blog/blog-engine'  
####       }  
####     ]  
####   },  
####   {  
####     isFile: false,  
####     path: 'blog/test',  
####     children: [  
####       {  
####         isFile: true,  
####         path: 'blog/test/llm-1.draft.md',  
####         parent: 'blog/test'  
####       }  
####     ]  
####   },  
####   {  
####     isFile: true,  
####     path: 'blog/blog-engine/abc/a.md',  
####     parent: 'blog/blog-engine/abc'  
####   },  
####   { isFile: true, path: 'blog/code-as-graphs.md' },  
####   { isFile: true, path: 'blog/sagas-rant.md' },  
####   {  
####     isFile: true,  
####     path: 'blog/test-ubo-tut/a.md',  
####     parent: 'blog/test-ubo-tut'  
####   }  
#### ]  
####   
#### for the input files   
####   
####   ✓ ★ blog  
####     ✓ ★ blog-engine  
####       ★ abc  
####          ★ a.md  
####        1.deep.md  
####        2.deep.md  
####        3.deep.md  
####        4.deep.md  
####        5.deep.md  
####        6.deep.md  
####        7.1.deep.md  
####        7.md  
####        8.md  
####        ✓ 9.md  
####     test  
####        llm-1.draft.md  
####     ★ test-ubo-tut  
####        ★ a.md  
####        index.html  
####      code-as-graphs.md  
####      sagas-rant.md  
####   
#### can you suggest a more simple way to code it up so i can make a structure for navigating my blog
#### taking into account how i want to only establish a "real subdir" if a folder has more than one 
#### child in it? thats essentially the only custom thing here.  

I just copied the text from nvimtree in the terminal and dumped it in there to show it the file hierarchy. Didn't bother explaining what it was...

Here was its response:

First there was some hidden chain of thought,

<thinking>
Let's break down the problem into its core components:
1. We need to read the directory structure.
2. We want to create a navigation structure for the blog.
3. We only want to establish a "real subdir" if a folder has more than one child.
4. We need to differentiate between regular posts and "sagas" (directories with multiple posts).
</thinking>

<reasoning>
A simpler approach could involve:
1. Using a recursive function to read the directory structure.
2. Creating a flat list of all markdown files.
3. Creating a separate list of directories that contain more than one markdown file.
4. Using these lists to construct a simpler navigation structure.

This approach would avoid the need for complex parent-child relationships and reduce the overall complexity of the code.
</reasoning>

<reflection>
This simplified approach should address the core requirements while reducing the complexity of the current implementation. It will be easier to maintain and understand, while still providing the necessary structure for blog navigation.
</reflection>

Output:

Here's a simplified version of the code that should achieve what you're looking for:

import * as fs from 'fs';
import * as path from 'path';

interface BlogPost {
    path: string;
    isFile: boolean;
    children?: BlogPost[];
}

function readBlogStructure(dir: string): BlogPost[] {
    const entries = fs.readdirSync(dir, { withFileTypes: true });
    const posts: BlogPost[] = [];

    for (const entry of entries) {
        const fullPath = path.join(dir, entry.name);
        if (entry.isDirectory()) {
            const children = readBlogStructure(fullPath);
            if (children.length > 1) {
                posts.push({
                    path: fullPath,
                    isFile: false,
                    children: children
                });
            } else {
                posts.push(...children);
            }
        } else if (entry.isFile() && /\.md$/.test(entry.name)) {
            posts.push({
                path: fullPath,
                isFile: true
            });
        }
    }

    return posts;
}

const blogStructure = readBlogStructure('blog');
console.log(JSON.stringify(blogStructure, null, 2));

This code does the following:

  1. It defines a BlogPost interface to represent both files and directories.
  2. The readBlogStructure function recursively reads the directory structure.
  3. If a directory has more than one markdown file (or subdirectory with markdown files), it's treated as a "saga" and included in the structure with its children.
  4. If a directory has only one markdown file (or none), its contents are flattened into the parent structure.
  5. The result is a simplified structure that only includes directories with multiple posts and individual markdown files.

This approach should give you a more straightforward representation of your blog structure, taking into account your requirement of only establishing a "real subdir" if a folder has more than one child.

Tokens: 3.0k sent, 612 received. Cost: $0.02 message, $0.02 session. Use --no-stream for accurate caching costs.

Now I definitely liked where this was going, because the code is... well, let's just say shockingly simple compared to my "solution". It did as good a job as I could expect at interpreting my prompt and input, and it was able to see through my poor design to the low-entropy recursive path I had envisioned. I wanted to see how far it would go, though, so I prompted further:

#### please make the change and print the result like i am doing. dont json stringify, print posts with util inspect.  

Certainly! I'll make the changes as requested and use util.inspect to print the result. Here are the changes:

import * as fs from 'fs';
import * as path from 'path';
import * as util from 'util';

interface BlogPost {
    path: string;
    isFile: boolean;
    children?: BlogPost[];
}

function readBlogStructure(dir: string): BlogPost[] {
    const entries = fs.readdirSync(dir, { withFileTypes: true });
    const posts: BlogPost[] = [];

    for (const entry of entries) {
        const fullPath = path.join(dir, entry.name);
        if (entry.isDirectory()) {
            const children = readBlogStructure(fullPath);
            if (children.length > 1) {
                posts.push({
                    path: fullPath,
                    isFile: false,
                    children: children
                });
            } else {
                posts.push(...children);
            }
        } else if (entry.isFile() && /\.md$/.test(entry.name)) {
            posts.push({
                path: fullPath,
                isFile: true
            });
        }
    }

    return posts;
}

const blogStructure = readBlogStructure('blog');
console.log('Blog Structure:', util.inspect(blogStructure, { colors: true, depth: null }));

Now, to run this updated script and see the results, you can use the following command:

npx tsx scripts/build.ts

This command will execute the TypeScript file directly using tsx, which is a TypeScript execution engine. It will print the blog structure using util.inspect as requested.

Tokens: 5.7k sent, 1.4k received. Cost: $0.04 message, $0.06 session. Use --no-stream for accurate caching costs.
Applied edit to scripts/build.ts
Commit 1b41b79 feat: Simplify blog structure generation

I've still got to put this thing through its paces more, but it was able to capture the behavior I want, and I couldn't be happier to get this refactor, an hour of work at minimum, completed in under a minute.

Here is the log:

[Command was successful]
[Running: esrun ./scripts/build.ts]
Blog Structure: [
  {
    path: 'blog/blog-engine',
    isFile: false,
    children: [
      { path: 'blog/blog-engine/1.deep.md', isFile: true },
      { path: 'blog/blog-engine/10.md', isFile: true },
      { path: 'blog/blog-engine/2.deep.md', isFile: true },
      { path: 'blog/blog-engine/3.deep.md', isFile: true },
      { path: 'blog/blog-engine/4.deep.md', isFile: true },
      { path: 'blog/blog-engine/5.deep.md', isFile: true },
      { path: 'blog/blog-engine/6.deep.md', isFile: true },
      { path: 'blog/blog-engine/7.1.deep.md', isFile: true },
      { path: 'blog/blog-engine/7.md', isFile: true },
      { path: 'blog/blog-engine/8.md', isFile: true },
      { path: 'blog/blog-engine/9.md', isFile: true },
      {
        path: 'blog/blog-engine/abc',
        isFile: false,
        children: [
          { path: 'blog/blog-engine/abc/a.md', isFile: true },
          { path: 'blog/blog-engine/abc/b.md', isFile: true }
        ]
      }
    ]
  },
  { path: 'blog/code-as-graphs.md', isFile: true },
  { path: 'blog/sagas-rant.md', isFile: true },
  { path: 'blog/test/llm-1.draft.md', isFile: true },
  { path: 'blog/test-ubo-tut/a.md', isFile: true }
]

Note: the [Command was successful] and [Running:] lines are inserted by watchexec, which I use for all file-watching needs. In much the same way that file-watching-based live editing speeds up frontend development workflows, it can revolutionize all dev workflows.

Note how when I remove the b.md mock post file, it does what I want:

Blog Structure: [
  {
    path: 'blog/blog-engine',
    isFile: false,
    children: [
      { path: 'blog/blog-engine/1.deep.md', isFile: true },
      { path: 'blog/blog-engine/10.md', isFile: true },
      { path: 'blog/blog-engine/2.deep.md', isFile: true },
      { path: 'blog/blog-engine/3.deep.md', isFile: true },
      { path: 'blog/blog-engine/4.deep.md', isFile: true },
      { path: 'blog/blog-engine/5.deep.md', isFile: true },
      { path: 'blog/blog-engine/6.deep.md', isFile: true },
      { path: 'blog/blog-engine/7.1.deep.md', isFile: true },
      { path: 'blog/blog-engine/7.md', isFile: true },
      { path: 'blog/blog-engine/8.md', isFile: true },
      { path: 'blog/blog-engine/9.md', isFile: true },
      { path: 'blog/blog-engine/abc/a.md', isFile: true }
    ]
  },
  { path: 'blog/code-as-graphs.md', isFile: true },
  { path: 'blog/sagas-rant.md', isFile: true },
  { path: 'blog/test/llm-1.draft.md', isFile: true },
  { path: 'blog/test-ubo-tut/a.md', isFile: true }
]

I have a todo for this now, which is to use some admonitions solution to easily colorize my prompts and the AI's responses in further showcases of this type of workflow.


Thinking about this more, I realized that I could probably use the aider chat history format directly as a way to encode and describe an AI-augmented coding workflow. The main benefit would be for my own ability to review the progress of code changes. It could be cool, but I want to invent something that pushes the state of the art on diff grokking to multiply the value here, so I don't think it makes sense to do it without that.


I did one more thing:

Regular Blog Structure: [
  {
    path: 'blog/blog-engine',
    isFile: false,
    children: [
      { path: 'blog/blog-engine/1.deep.md', isFile: true },
      { path: 'blog/blog-engine/10.md', isFile: true },
      { path: 'blog/blog-engine/2.deep.md', isFile: true },
      { path: 'blog/blog-engine/3.deep.md', isFile: true },
      { path: 'blog/blog-engine/4.deep.md', isFile: true },
      { path: 'blog/blog-engine/5.deep.md', isFile: true },
      { path: 'blog/blog-engine/6.deep.md', isFile: true },
      { path: 'blog/blog-engine/7.1.deep.md', isFile: true },
      { path: 'blog/blog-engine/7.md', isFile: true },
      { path: 'blog/blog-engine/8.md', isFile: true },
      { path: 'blog/blog-engine/9.md', isFile: true },
      {
        path: 'blog/blog-engine/abc',
        isFile: false,
        children: [ { path: 'blog/blog-engine/abc/a.md', isFile: true } ]
      }
    ]
  },
  { path: 'blog/code-as-graphs.md', isFile: true },
  { path: 'blog/sagas-rant.md', isFile: true },
  {
    path: 'blog/test',
    isFile: false,
    children: [ { path: 'blog/test/llm-1.draft.md', isFile: true } ]
  }
]
Special Pages: [ 'blog/test-ubo-tut' ]

Now the special pages are identified, and markdown files contained within them are not shown in the main nav. I might go back on this decision later, but for now I want all files inside special pages to be treated differently, so I can make arbitrary mini project file trees and signal them as such just by putting an index.html inside them.
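The marker rule boils down to a tiny predicate. Here's a hedged sketch (the build script's actual version may differ) that also guards against being handed a non-directory instead of letting readdirSync throw:

```typescript
import * as fs from 'fs';

// A directory is a "special page" if it contains an index.html at its top
// level; everything inside it is then excluded from the main nav.
function isSpecialPage(dir: string): boolean {
  try {
    return fs.readdirSync(dir).includes('index.html');
  } catch {
    return false; // not a directory, or unreadable
  }
}
```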


I made some progress, and made a cool addition to my ts-utils library:

import { fileURLToPath } from 'url';
import { promises as fsp } from 'fs';

/** Use me with *import.meta.url* to determine if the current module is the main module. */
export async function isMain(import_meta_url: string) {
  if (process.argv.length < 2) return false;
  const res = await Promise.all([fsp.realpath(process.argv[1]), fsp.realpath(fileURLToPath(import_meta_url))]);
  return res[0] === res[1];
}

This is now able to avoid failing by properly resolving file paths, so that two different ways to specify a given temp file, under /var/folders/ and /private/var/folders/ (macOS symlinks /var to /private/var), actually resolve to the same path here. It's a hell of an edge case I found during testing.
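One way to sanity-check a helper like this is to rebuild the module URL from process.argv[1] itself, which should always compare equal to the running script, even through symlinks (the function is reproduced here so the snippet stands alone):

```typescript
import { fileURLToPath, pathToFileURL } from 'url';
import { promises as fsp } from 'fs';

/** True when the module identified by import_meta_url is the entry script. */
async function isMain(import_meta_url: string) {
  if (process.argv.length < 2) return false;
  const res = await Promise.all([
    fsp.realpath(process.argv[1]),
    fsp.realpath(fileURLToPath(import_meta_url)),
  ]);
  return res[0] === res[1];
}

// Simulate the main-module case: a URL built from argv[1] must match argv[1]
// itself once both sides are realpath-resolved.
isMain(pathToFileURL(process.argv[1]).href).then(r => console.log(r)); // true
```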

I thought this was the thing I needed to fix my convert.ts, but the issues ran deeper. Right now I'm seeing very divergent behavior between tsx and esrun when directly launching this TypeScript code, so I think it's time to bite the bullet and start transpiling.


Okay... so fixing what was broken in convert.ts was not very involved, and all of a sudden I have full blog post HTML assembled hierarchically, replicating the git repo's directory structure in the output stage/ dir. I am a bit weary of logging my progress in such detail at this point...

Here's what the blog build process looks like now. Note the esbuild transpilation step now in play.

[Running: npm run build && npm run stage]

> [email protected] build
> node ./metascripts/metabuild.js


  build/ansi_up.js               16.2kb
  build/stage.js                  4.4kb
  build/convert_markdeep.js       4.1kb
  build/ansi_up.js.map           37.5kb
  build/stage.js.map              8.8kb
  build/convert_markdeep.js.map   8.0kb

Done in 3ms
Stage code build complete.
Metafile for Stage build saved to: /Users/slu/stevenlu.net/build-analysis/metafile-stage-1728367046218.json
You can use this file with online bundle analyzers for visualization.

> [email protected] stage
> node --enable-source-maps ./build/stage.js

file sagas-rant.draft.md is a draft and excluded from blog structure
file llm-1.draft.md is a draft and excluded from blog structure
Blog Structure: {
  path: 'blog',
  isFile: false,
  children: [
    {
      path: 'blog/blog-engine',
      isFile: false,
      children: [
        { path: 'blog/blog-engine/1.deep.md', isFile: true },
        { path: 'blog/blog-engine/10.md', isFile: true },
        { path: 'blog/blog-engine/11.md', isFile: true },
        { path: 'blog/blog-engine/2.deep.md', isFile: true },
        { path: 'blog/blog-engine/3.deep.md', isFile: true },
        { path: 'blog/blog-engine/4.deep.md', isFile: true },
        { path: 'blog/blog-engine/5.deep.md', isFile: true },
        { path: 'blog/blog-engine/6.deep.md', isFile: true },
        { path: 'blog/blog-engine/7.1.deep.md', isFile: true },
        { path: 'blog/blog-engine/7.md', isFile: true },
        { path: 'blog/blog-engine/8.md', isFile: true },
        { path: 'blog/blog-engine/9.md', isFile: true },
        {
          path: 'blog/blog-engine/abc',
          isFile: false,
          children: [ { path: 'blog/blog-engine/abc/a.md', isFile: true } ]
        }
      ]
    },
    { path: 'blog/code-as-graphs.md', isFile: true }
  ]
}
Special Pages: [ 'blog/test-ubo-tut' ]
iterate node: {
  path: 'blog',
  isFile: false,
  children: [
    { path: 'blog/blog-engine', isFile: false, children: [Array] },
    { path: 'blog/code-as-graphs.md', isFile: true }
  ]
} currentPath: 
targetPath: /Users/slu/stevenlu.net/stage
iterate node: {
  path: 'blog/blog-engine',
  isFile: false,
  children: [
    { path: 'blog/blog-engine/1.deep.md', isFile: true },
    { path: 'blog/blog-engine/10.md', isFile: true },
    { path: 'blog/blog-engine/11.md', isFile: true },
    { path: 'blog/blog-engine/2.deep.md', isFile: true },
    { path: 'blog/blog-engine/3.deep.md', isFile: true },
    { path: 'blog/blog-engine/4.deep.md', isFile: true },
    { path: 'blog/blog-engine/5.deep.md', isFile: true },
    { path: 'blog/blog-engine/6.deep.md', isFile: true },
    { path: 'blog/blog-engine/7.1.deep.md', isFile: true },
    { path: 'blog/blog-engine/7.md', isFile: true },
    { path: 'blog/blog-engine/8.md', isFile: true },
    { path: 'blog/blog-engine/9.md', isFile: true },
    { path: 'blog/blog-engine/abc', isFile: false, children: [Array] }
  ]
} currentPath: blog
targetPath: /Users/slu/stevenlu.net/stage/blog-engine
iterate node: { path: 'blog/blog-engine/1.deep.md', isFile: true } currentPath: blog/blog-engine
targetPath: /Users/slu/stevenlu.net/stage/blog-engine/1.deep.md
iterate node: { path: 'blog/blog-engine/10.md', isFile: true } currentPath: blog/blog-engine
targetPath: /Users/slu/stevenlu.net/stage/blog-engine/10.md
iterate node: { path: 'blog/blog-engine/11.md', isFile: true } currentPath: blog/blog-engine
targetPath: /Users/slu/stevenlu.net/stage/blog-engine/11.md
iterate node: { path: 'blog/blog-engine/2.deep.md', isFile: true } currentPath: blog/blog-engine
targetPath: /Users/slu/stevenlu.net/stage/blog-engine/2.deep.md
iterate node: { path: 'blog/blog-engine/3.deep.md', isFile: true } currentPath: blog/blog-engine
targetPath: /Users/slu/stevenlu.net/stage/blog-engine/3.deep.md
iterate node: { path: 'blog/blog-engine/4.deep.md', isFile: true } currentPath: blog/blog-engine
targetPath: /Users/slu/stevenlu.net/stage/blog-engine/4.deep.md
iterate node: { path: 'blog/blog-engine/5.deep.md', isFile: true } currentPath: blog/blog-engine
targetPath: /Users/slu/stevenlu.net/stage/blog-engine/5.deep.md
iterate node: { path: 'blog/blog-engine/6.deep.md', isFile: true } currentPath: blog/blog-engine
targetPath: /Users/slu/stevenlu.net/stage/blog-engine/6.deep.md
iterate node: { path: 'blog/blog-engine/7.1.deep.md', isFile: true } currentPath: blog/blog-engine
targetPath: /Users/slu/stevenlu.net/stage/blog-engine/7.1.deep.md
iterate node: { path: 'blog/blog-engine/7.md', isFile: true } currentPath: blog/blog-engine
targetPath: /Users/slu/stevenlu.net/stage/blog-engine/7.md
iterate node: { path: 'blog/blog-engine/8.md', isFile: true } currentPath: blog/blog-engine
targetPath: /Users/slu/stevenlu.net/stage/blog-engine/8.md
iterate node: { path: 'blog/blog-engine/9.md', isFile: true } currentPath: blog/blog-engine
targetPath: /Users/slu/stevenlu.net/stage/blog-engine/9.md
iterate node: {
  path: 'blog/blog-engine/abc',
  isFile: false,
  children: [ { path: 'blog/blog-engine/abc/a.md', isFile: true } ]
} currentPath: blog/blog-engine
targetPath: /Users/slu/stevenlu.net/stage/blog-engine/abc
iterate node: { path: 'blog/blog-engine/abc/a.md', isFile: true } currentPath: blog/blog-engine/abc
targetPath: /Users/slu/stevenlu.net/stage/blog-engine/abc/a.md
iterate node: { path: 'blog/code-as-graphs.md', isFile: true } currentPath: blog
targetPath: /Users/slu/stevenlu.net/stage/code-as-graphs.md
'figure [diagram]' is never referenced
'figure [diagram]' is never referenced
'figure [diagram]' is never referenced
All files processed successfully
[Command was successful]

This is one of those situations where ordinarily it'd be a bit hard to show how it looks without resorting to a screenshot, and that's super awkward when the output spans multiple screens.

This gave me another really neat, simple idea that will also elevate my blog engine for software: add a new marked plugin for ANSI escape codes, so I can easily render colored terminal output in code blocks from markdown. (EDIT: I have inserted it above. You can see it works!)

I think ansi_up is the best library to use for this, and I'm not surprised that nothing like it is supported by Marked out of the box, but I'll ask about it in the Marked repo.
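For a sense of what such a plugin's renderer has to do, here's a dependency-free sketch of the core transform; ansi_up implements this far more completely (nested attributes, 256-color and truecolor, HTML escaping), so treat the color table and function name here as illustrative only:

```typescript
// Map a handful of SGR foreground color codes to CSS colors.
const SGR_COLORS: Record<string, string> = {
  '31': 'red', '32': 'green', '33': 'yellow', '34': 'blue',
};

// Convert \x1b[31m ... \x1b[0m sequences into <span style="color:..."> tags.
function ansiToHtml(text: string): string {
  let open = false;
  const out = text.replace(/\x1b\[(\d+)m/g, (_, code: string) => {
    if (code === '0') {            // SGR reset: close any open span
      const close = open ? '</span>' : '';
      open = false;
      return close;
    }
    const color = SGR_COLORS[code];
    if (!color) return '';         // unknown code: strip it
    const prefix = open ? '</span>' : '';
    open = true;
    return `${prefix}<span style="color:${color}">`;
  });
  return out + (open ? '</span>' : ''); // close a dangling span at EOF
}

console.log(ansiToHtml('\x1b[32mPASS\x1b[0m done'));
// <span style="color:green">PASS</span> done
```

The marked side is then just a code-block renderer override that routes blocks tagged with an ansi language hint through a converter like this (or through ansi_up's ansi_to_html) instead of the default escaping.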

Given the way this build log has been going, you just HAVE to know I was going to nerdsnipe myself like this before I'd be able to go live. I'll be pretty happy to go live pushing the envelope in a fun way like this from the get-go.


I have made good progress and was able to get an example from ansi_up working. I'll review later how terrible it is on the CSS side of things, but it can't be that bad. Worst case, I already have an ANSI-to-HTML implementation in one of my projects that I can put toward this.

The real challenge right now is tooling for capturing ANSI-escaped output from the terminal. There is an easy way to do it with tmux via the capture-pane -e command... I don't want that to be the only game in town, but it may well be what I drive this with for now.