I made a further set of large changes to the blog engine today, though mostly in an invisible infrastructure component. I am planning to have a lot more data than I'll want to put into S3; granted, its billing rate of 2.3 cents per GB-month isn't astronomical, but Backblaze B2's rate of 0.6 cents (and its twice-as-large free tier) is significantly better and will let me host a lot more data before it makes me want to clean things out.
The transition is already complete, though it took a lot of little tweaks to get to a good place.
I agonized for what felt like two hours over what I perceived as an issue with moving off S3, where I could configure the bucket for public web serving while still rejecting any request not coming from Cloudflare's own IP ranges.
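For reference, here's roughly the shape of the S3 arrangement I'm giving up: a bucket policy that allows public reads, but only from Cloudflare's published IP ranges. This is a minimal sketch rather than my exact policy; the bucket name is a placeholder, the range list is abbreviated to two entries, and the authoritative list lives at https://www.cloudflare.com/ips/.

# Sketch: public-read S3 bucket policy gated on Cloudflare's source IPs.
# Bucket name is hypothetical and only two of Cloudflare's ranges are shown.
cat > policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "AllowCloudflareReadsOnly",
    "Effect": "Allow",
    "Principal": "*",
    "Action": "s3:GetObject",
    "Resource": "arn:aws:s3:::my-blog-bucket/*",
    "Condition": {
      "IpAddress": {
        "aws:SourceIp": ["173.245.48.0/20", "103.21.244.0/22"]
      }
    }
  }]
}
EOF
aws s3api put-bucket-policy --bucket my-blog-bucket --policy file://policy.json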
The issue: Backblaze's web hosting functionality is more barebones, and one notable piece that basic web servers provide is missing: the index redirect that maps / URIs to index.html. Interestingly, the redirect target must be a whole page, so I opted to just redirect the root URI to stevenlu.net; this means even if you go to www.stevenlu.net it lands at stevenlu.net.

I switched the static content sync mechanism in my "stage" script from aws-cli to rclone. It's a very similar sync call, and I figured it's a good time to switch and get familiar with a tool that can portably target multiple cloud storage backends.
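The new call in the stage script looks something like the following; a sketch only, with the local build directory, remote name, and bucket path as stand-ins for my actual setup.

# Mirror the generated site to B2. sync (unlike copy) also deletes remote files
# that no longer exist locally, and --checksum skips unchanged files.
rclone sync ./public b2-remote:my-blog-bucket --checksum --progress
# Since sync deletes, a --dry-run first is cheap insurance:
rclone sync ./public b2-remote:my-blog-bucket --checksum --dry-run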
I tweaked some CSS to make font sizes more consistent, along with a few other small things.
Forgot to actually report in the earlier posts that I added a light/dark theme toggle to the site, though I did get hyperfixated on some interesting AI coding failure modes in the course of hacking on that earlier. I had aspirations for a more extensive set of theming features, but I mentioned themeability in passing to the AI and it added a neat and clean light/dark switcher along with some good CSS variable architecture to facilitate it, so I wasn't going to say no to that. I think it will also make it fairly straightforward to expand beyond just two modes in the future by doing some fun CSSOM stuff. The toggle already pays off in one place: the giscus comments embed below reads the same localStorage theme key, so the widget matches the site theme when it loads.
(() => {
  // Build the giscus embed script tag as a string so data-theme can be picked
  // at load time from the same localStorage key the site's theme toggle writes.
  const giscus_script_tag_attrs = [
    'src="https://giscus.app/client.js"',
    'data-repo="unphased/Giscus-discussions-stevenlu.net"',
    'data-repo-id="R_kgDOM4lP6w"',
    'data-category="Show and tell"',
    'data-category-id="DIC_kwDOM4lP684Ci4bG"',
    'data-mapping="pathname"',
    'data-strict="1"',
    'data-reactions-enabled="1"',
    'data-emit-metadata="0"',
    'data-input-position="bottom"',
    // quote the value so the generated attribute is well-formed HTML
    'data-theme="' + (localStorage.getItem('theme') || 'light') + '"',
    'data-lang="en"',
    'crossorigin="anonymous"',
    'async'
  ];
  // Join with real newlines so the attributes stay whitespace-separated;
  // splitting '<' + 'script' keeps the page generator from parsing this
  // string as a live script tag.
  const tag = '<' + 'script ' + giscus_script_tag_attrs.join('\n') + '></' + 'script' + '>';
  // document.write injects the tag at the point in the page where this runs
  document.write(tag);
})();
I am planning to explore the use of compression for file storage at rest, but it's not hugely important for blog sites like this, where the bulk of storage will be used by already-compressed image formats. I will explore it in a separate site/app I'm going to host with a similar approach, though.
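One likely starting point: rclone itself ships an experimental compress backend that wraps another remote and stores contents gzip-compressed. Something like this config stanza should be the shape of it, though the names here are placeholders and the backend's experimental status means it's worth checking the current rclone docs first.

# ~/.config/rclone/rclone.conf excerpt: wrap the B2 remote so anything written
# through b2-compressed: lands gzip-compressed. Names are placeholders.
[b2-compressed]
type = compress
remote = b2-remote:my-data-bucket/compressed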
I just found another issue: if I commit and push too quickly after saving a change in this project, the scripts are still running and the sync may end up deleting a bunch of HTML content that is still in the middle of being generated. This is extremely awkward, but it's nothing some sentinel lock files used for mutex logic wouldn't fix...
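The sketch I have in mind for the stage script uses flock(1) on a lock file so the generate-then-sync pipeline and any overlapping invocation serialize instead of racing; the paths and script names below are hypothetical stand-ins.

#!/usr/bin/env bash
# Hold an exclusive lock across the whole generate + sync pipeline so a second
# run (e.g. kicked off by a hasty commit/push) waits instead of racing it.
exec 9>/tmp/blog-stage.lock      # lock file path is a placeholder
flock 9                          # blocks until any in-flight run has finished
./generate-html.sh               # hypothetical stand-in for the real generator
rclone sync ./public b2-remote:my-blog-bucket --checksum
# The lock on fd 9 is released automatically when the script exits.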