{ "type": "entry", "published": "2022-09-24 12:43-0700", "url": "https://gregorlove.com/2022/09/add-img-srcset-parsing/", "category": [ "issue", "microformats" ], "syndication": [ "https://github.com/microformats/php-mf2/issues/242" ], "in-reply-to": [ "https://github.com/microformats/php-mf2/issues" ], "name": "Add img srcset parsing", "content": { "text": "Parse img.srcset per parsing issue #7. MicroMicro now supports this, so once we have 2+ implementations and rough consensus, we can update the parsing specification.", "html": "<p>Parse <code>img.srcset</code> per <a href=\"https://github.com/microformats/microformats2-parsing/issues/7\">parsing issue #7</a>. MicroMicro now <a href=\"https://github.com/jgarber623/micromicro/releases/tag/v3.1.0\">supports</a> this, so once we have 2+ implementations and rough consensus, we can update the parsing specification.</p>" }, "author": { "type": "card", "name": "gRegor Morrill", "url": "https://gregorlove.com/", "photo": "https://gregorlove.com/site/assets/files/3473/profile-2016-med.jpg" }, "post-type": "reply", "_id": "31592150", "_source": "95", "_is_read": true }
{ "type": "entry", "published": "2022-09-20T11:20:52Z", "url": "https://adactio.com/links/19466", "category": [ "webbook", "frontend", "development", "ui", "engineering", "personal", "publishing", "writing", "indieweb" ], "bookmark-of": [ "https://www.toheeb.com/" ], "content": { "text": "Web UI Engineering Book - toheeb.com\n\n\n\nI like the way this work-in-progress is organised\u2014it\u2019s both a book and a personal website that\u2019ll grow over time.", "html": "<h3>\n<a class=\"p-name u-bookmark-of\" href=\"https://www.toheeb.com/\">\nWeb UI Engineering Book - toheeb.com\n</a>\n</h3>\n\n<p>I like the way this work-in-progress is organised\u2014it\u2019s both a book and a personal website that\u2019ll grow over time.</p>" }, "author": { "type": "card", "name": "Jeremy Keith", "url": "https://adactio.com/", "photo": "https://adactio.com/images/photo-150.jpg" }, "post-type": "bookmark", "_id": "31480284", "_source": "2", "_is_read": true }
{ "type": "entry", "published": "2022-09-18T21:18:16+00:00", "url": "https://werd.io/2022/wordpressindieweb-as-the-os-of-the-open-social-web", "category": [ "Technology" ], "bookmark-of": [ "https://www.zylstra.org/blog/2022/09/wordpressindieweb-as-the-os-of-the-open-social-web/" ], "name": "WordPress+IndieWeb as the OS of the Open Social Web", "content": { "text": "Nice indieweb thoughts and presentation. As an aside, I\u2019ve added Hypothesis annotations to my site, inspired by Ton\u2019s site. #Technology", "html": "<p>Nice indieweb thoughts and presentation. As an aside, I\u2019ve added Hypothesis annotations to my site, inspired by Ton\u2019s site. <a href=\"https://werd.io/tag/Technology\" class=\"p-category\">#Technology</a></p>" }, "author": { "type": "card", "name": "Ben Werdm\u00fcller", "url": "https://werd.io/profile/benwerd", "photo": "https://werd.io/file/5d388c5fb16ea14aac640912/thumb.jpg" }, "post-type": "bookmark", "_id": "31449836", "_source": "191", "_is_read": true }
{ "type": "entry", "author": { "name": "Manton Reece", "url": "https://www.manton.org/", "photo": "https://micro.blog/manton/avatar.jpg" }, "url": "https://www.manton.org/2022/09/16/i-like-these.html", "content": { "html": "<p><a href=\"https://www.zylstra.org/blog/2022/09/wordpressindieweb-as-the-os-of-the-open-social-web/\">I like these slides</a> from a talk by <a href=\"https://micro.blog/ton\">@ton</a> about enabling more IndieWeb features in WordPress\u2026 Includes Micro.blog screenshot too!</p>", "text": "I like these slides from a talk by @ton about enabling more IndieWeb features in WordPress\u2026 Includes Micro.blog screenshot too!" }, "published": "2022-09-16T08:23:18-05:00", "post-type": "note", "_id": "31411655", "_source": "12", "_is_read": true }
{ "type": "entry", "published": "2022-09-12T09:03:39Z", "url": "https://adactio.com/links/19440", "category": [ "indieweb", "personal", "publishing", "writing", "sharing", "blogging", "blogs" ], "bookmark-of": [ "https://www.robinrendle.com/notes/take-care-of-your-blog-/" ], "content": { "text": "Take Care of Your Blog\n\n\n\n\n Blog!\n \n Blog your heart! Blog about something you\u2019ve learned, blog about something you\u2019re interested in.\n\n\nExcellent advice from Robin:\n\n\n There are no rules to blogging except this one: always self-host your website because your URL, your own private domain, is the most valuable thing you can own. Your career will thank you for it later and no-one can take it away.", "html": "<h3>\n<a class=\"p-name u-bookmark-of\" href=\"https://www.robinrendle.com/notes/take-care-of-your-blog-/\">\nTake Care of Your Blog\n</a>\n</h3>\n\n<blockquote>\n <p>Blog!</p>\n \n <p>Blog your heart! Blog about something you\u2019ve learned, blog about something you\u2019re interested in.</p>\n</blockquote>\n\n<p>Excellent advice from Robin:</p>\n\n<blockquote>\n <p>There are no rules to blogging except this one: always self-host your website because your URL, your own private domain, is the most valuable thing you can own. Your career will thank you for it later and no-one can take it away.</p>\n</blockquote>" }, "author": { "type": "card", "name": "Jeremy Keith", "url": "https://adactio.com/", "photo": "https://adactio.com/images/photo-150.jpg" }, "post-type": "bookmark", "_id": "31333408", "_source": "2", "_is_read": true }
{ "type": "entry", "published": "2022-09-07 16:22-0700", "url": "https://gregorlove.com/2022/09/this-is-one-way-of-several/", "category": [ "indieweb", "microformats" ], "in-reply-to": [ "https://jamesg.blog/2022/09/07/authorship-homepage/" ], "content": { "text": "This is one way of several based on the authorship spec (https://indieweb.org/authorship-spec#Algorithm). The p-author h-card inside the h-feed might make the most sense in your case, but you could also have a u-author property inside each h-entry that links to a page that has your author h-card. I do this with my posts, with an invisible link to my homepage.", "html": "<p>This is one way of several based on the authorship spec (<a href=\"https://indieweb.org/authorship-spec#Algorithm\">https://indieweb.org/authorship-spec#Algorithm</a>). The <code>p-author h-card</code> inside the <code>h-feed</code> might make the most sense in your case, but you could also have a <code>u-author</code> property inside each <code>h-entry</code> that links to a page that has your author <code>h-card</code>. I do this with my posts, with an invisible link to my homepage.</p>" }, "author": { "type": "card", "name": "gRegor Morrill", "url": "https://gregorlove.com/", "photo": "https://gregorlove.com/site/assets/files/3473/profile-2016-med.jpg" }, "post-type": "reply", "_id": "31267938", "_source": "95", "_is_read": true }
{ "type": "entry", "author": { "name": "Manton Reece", "url": "https://www.manton.org/", "photo": "https://micro.blog/manton/avatar.jpg" }, "url": "https://www.manton.org/2022/09/04/watching-the-locationsvenues.html", "content": { "html": "<p>Watching the <a href=\"https://archive.org/details/locations-venues-indie-web-camp-berlin-2022\">locations/venues session video</a> from IndieWebCamp Berlin. I wasn\u2019t able to participate in this one, even remotely, but glad to see IndieWeb folks were able to get together.</p>", "text": "Watching the locations/venues session video from IndieWebCamp Berlin. I wasn\u2019t able to participate in this one, even remotely, but glad to see IndieWeb folks were able to get together." }, "published": "2022-09-04T09:45:32-05:00", "post-type": "note", "_id": "31206916", "_source": "12", "_is_read": true }
{ "type": "entry", "published": "2022-08-31 19:12-0700", "url": "https://gregorlove.com/2022/08/indieweb-rocks-issues/", "category": [ "indieweb" ], "in-reply-to": [ "https://ragt.ag/code/indieweb.rocks" ], "content": { "text": "A couple issues I noticed on https://indieweb.rocks/gregorlove.com:\n\nUninterpreted h-card properties: nickname\n\n\tUninterpreted h-entry properties: post-type article\n\nOn the second one I\u2019m not sure what it\u2019s finding. My posts don\u2019t have microformats properties \"post-type\" or \"article.\" If it doesn\u2019t understand how to process articles, the error message could be clarified so it doesn\u2019t sound like an issue in the published post.", "html": "<p>A couple issues I noticed on <a href=\"https://indieweb.rocks/gregorlove.com\">https://indieweb.rocks/gregorlove.com</a>:</p>\n\n<ol><li>Uninterpreted h-card properties: <code>nickname</code>\n</li>\n\t<li>Uninterpreted h-entry properties: <code>post-type article</code>\n</li>\n</ol><p>On the second one I\u2019m not sure what it\u2019s finding. My posts don\u2019t have microformats properties \"post-type\" or \"article.\" If it doesn\u2019t understand how to process articles, the error message could be clarified so it doesn\u2019t sound like an issue in the published post.</p>" }, "author": { "type": "card", "name": "gRegor Morrill", "url": "https://gregorlove.com/", "photo": "https://gregorlove.com/site/assets/files/3473/profile-2016-med.jpg" }, "post-type": "reply", "_id": "31160364", "_source": "95", "_is_read": true }
{ "type": "entry", "published": "2022-08-30T17:21:34+0000", "url": "https://seblog.nl/2022/08/30/1/storing-posts-with-git-internals", "category": [ "indieweb", "meta" ], "name": "Storing posts by juggling with Git internals", "content": { "text": "I have been wanting to rework the core of this website for a couple of years now, but since the current setup still works, and since I have many other things to do, and finally since I am very picky about how I want it to work, I have never really finished this part at all. This makes me stuck at the same version of this site, both visually and behind the scenes.\nNow that I am in between jobs I wanted to work on it a bit more, but I still do not have time enough to fully finish it. I guess it all comes down to a few choices I have to make regarding trade-offs. In order to make better decisions, I wanted to document my current storage and the one I have been working on. After I wrote it all out I think I am deciding not to use it, but it was a nice exploration so I will share it anyway.\nThe description heavily leans on some knowledge about Git, which is software for versioning your code, or in this case, plain text files. I will try to explain a bit along the way but it is useful to have some familiarity with it already.\ntl;dr: I did fancy with Git but might not pursue.\nHow it is currently done\nAt the time of writing, my posts are stored in a plain text format with a lot of folders. It is derived from the format which the Kirby CMS expects: folders for pages with text files within it, of which the name of the text file dictates the template that is being used to render the page. In my case, it is always entry.txt.\nI have one folder per year, one folder per day of the year and one folder per post of the day. In that last folder is the entry.txt and some other files related to the post, like pictures, but also metadata like received and sent Webmentions.\nAn example of the tree view is below. 
It shows two entries on two days in one year. Note that days and years also have their own .txt file that is actually almost empty and pretty much useless in this setup, but still required for Kirby to work properly. The first day of the year my site is broken because it does not automatically create the required year.txt (or did I fix that finally?).\n./content\n\u2514\u2500\u2500 2022\n\u00a0\u00a0\u00a0 \u251c\u2500\u2500 001\n\u00a0\u00a0\u00a0 \u2502\u00a0\u00a0 \u251c\u2500\u2500 1\n\u00a0\u00a0\u00a0 \u2502\u00a0\u00a0 \u2502\u00a0\u00a0 \u2514\u2500\u2500 entry.txt\n\u00a0\u00a0\u00a0 \u2502\u00a0\u00a0 \u2502\u00a0\u00a0 \u251c\u2500\u2500 .webmentions\n\u00a0\u00a0\u00a0 \u2502\u00a0\u00a0 \u2502\u00a0\u00a0 \u2502\u00a0\u00a0 \u251c\u2500\u2500 1641117941-f6bc3209f3f33f0cb8e4d92e5d46b5090b53aa11.json\n\u00a0\u00a0\u00a0 \u2502\u00a0\u00a0 \u2502\u00a0\u00a0 \u2502\u00a0\u00a0 \u2514\u2500\u2500 pings.json\n\u00a0\u00a0\u00a0 \u2502\u00a0\u00a0 \u2514\u2500\u2500 day.txt\n\u00a0\u00a0\u00a0 \u251c\u2500\u2500 002\n\u00a0\u00a0\u00a0 \u2502\u00a0\u00a0 \u251c\u2500\u2500 1\n\u00a0\u00a0\u00a0 \u2502\u00a0\u00a0 \u2502\u00a0\u00a0 \u251c\u2500\u2500 some_image.jpg\n\u00a0\u00a0\u00a0 \u2502\u00a0\u00a0 \u2502\u00a0\u00a0 \u2514\u2500\u2500 entry.txt\n\u00a0\u00a0\u00a0 \u2502\u00a0\u00a0 \u2514\u2500\u2500 day.txt\n\u00a0\u00a0\u00a0 \u2514\u2500\u2500 year.txt\nAlso note that there is a hidden .webmentions folder which contains a pings.json for all the sent webmentions and a JSON file with timestamp and content hash in the name for every received webmention. Not in the diagram but also present are some other folders for pages like ./login/login.txt (because that is how Kirby works) and ./isbn/9780349411903/book.txt (for books).\nAll these files are stored in a Git repository, which I manually update every so often (more bimonthly than weekly, sadly) via SSH to my server. 
I give it a very generic commit name (\u2019sync\u2019 or so) and push to a private repo on Github, which takes a while because the commits and the repo contain all those images and all those folders.\nWhat is wrong with this\nThe main point of wanting to move off of this structure by Kirby is that it requires those placeholder pages in my content folder. I have no need for a ./login/login.txt: the login page is just a feature of the software and should be handled by that part of the code. But at least that file contains some text for that page: the files for year.txt and day.txt are completely useless.\nAnother point is that I want to make the Git commits automatically with every Micropub request: Git provides me with a history, but only if I actually commit the files once I changed them. Also, if I do not push the changes to Github, I have no backup of recent posts.\nThe metadata of the received and sent Webmentions are now also available in the repo. This is nice, as it stores the information right next to the post it belongs to, but on the other hand it feels kind of polluting: these Webmentions contain content by others, whereas the rest of the content is by me. There is some other external content hidden in the entry.txt file but I\u2019ll get to that later.\nThe last point is that the full size images are stored in the repo and every book and article about Git says that you should not use it to store big files in it. Doing a git status takes a while and also the pushes are much slower than any other Git repository I work with.\nGit history: the Git Object Model\nBefore I go further into the avenue I am taking to solve the problem, I need to explain a bit about the Git Object Model, also known as \u2018how Git works under the hood\u2019. For a more thorough explanation, see this chapter in the Git Book.\nAs you\u2019ll learn from that chapter, every object is represented as a file, referenced by the SHA1 hash of its contents. 
And there are three (no, four) types of objects:\n\nblobs, which are the contents of files tracked by Git (and thus also the versions of those files)\n\ntrees, which are listings of filenames with references to blobs or other trees. These trees together create the file structure of a version.\n\ncommits, which are versions. A commit contains a reference to the root tree of the files you are tracking, a parent commit (the previous version) and a message and some metadata.\n\ntags are not mentioned by the chapter, but do exist: these look like commits, but create a way to store a message with a tag (making annotated tags, I\u2019ll explain plain tags soon).\nNote that Git does not store diffs, it always stores the full contents of every version of the file, albeit zlib compressed and sometimes even packed in a single file, but let\u2019s not get into that right now.\nGit\u2019s tags and branches are just files and folders (they can have / in their names) which contain the hashes (names) of the specific commits they point to. The tags can also point to a tag object, which will then contain a message about the tag (which makes them \u2018annotated tags\u2019).\nThis all brings me to the final point about my storage: for every new post, Git has to create a lot of files. First, it needs to add a blob for the entry.txt, possibly also a blob for the image and blobs for other metadata. Then it needs to create a tree for the entry folder, listing entry.txt and if present the filenames of the images and metadata files. Then it creates a new tree for the day, with all the existing entries plus the newly created one. Then it creates a new tree for the year, to point to this new version (tree) of the day. Then it creates a new tree for the root, with this new version of the year in it. And finally it also needs to create a commit object to point to that new root tree. Every update requires all these new trees. 
The trees are cheap, but it feels wasteful.\nAlso note that a version of a file always relies on the version of all other files. This is what you want for code (code is designed to work with other code), but it does not feel like the right model for posts (I might come back on this tho).\nAnd there is also the question of identifiers: currently, my posts are identified as year, day of year, number (2022/242/1), but especially that number can only be found in the name of the folder and thus in the tree, not in the blob. I have not yet found a good solution for this, but maybe I am seeing too many problems.\nThe new setup\nTo get rid of some of the trees, I tried to apply my knowledge of the Git Object Model to store my posts in another way. To do this, I used the commands suggested by the chapter in the Git Book in a script that looped over all my files, to store them in a new blank repo to try things out.\nFor each year, for each day, for each post, I would find the entry.txt and put the contents in a Git blob with git hash-object -w ./content/2022/242/1/entry.txt. The resulting hash I used in the command git update-index --add --cacheinfo 100644 $hash entry.txt to stage the file for a new tree. I would do that too for all images and related files, and then I would run git write-tree to write the tree and get the hash for it and git commit-tree $hash -m \"commit\" to create a commit based on it (with a bad message indeed). With that last hash I would run git update-ref refs/heads/2022/242/1 $hash to create a branch for that commit. (I contemplate adding an annotated tag in between, for storing some metadata like \u2018published at\u2019 date.)\nThis would result in a Git repository with over 10,000 branches (I have many posts) neatly organised in folders per year and day. When you check out one of these branches, just the files of that post will appear in the root of your repo: there are no folders. 
When you check out another branch, other files will appear. This is not how Git usually works, but it decouples all posts from one another.\nMultiple types of pages\nThe posts I describe above all follow the year-day-number pattern because they are posts: they are sequential entries tied to a date. There are other objects I track, though, that are not date-specific. One example is topical wiki-style pages: these pages may receive edits over time, but their topic is not tied to a date. (I don\u2019t have these yet.)\nAnother example is the books that I track to base my \u2018read\u2019 posts off. I haven\u2019t posted them in a while, but I would like to expand this book collection to also include other types of objects to reference, such as movies, games or locations. These objects also have no date to them attached, at least not a date meaningful to my posts.\nI could generate UUIDs for these objects and pages, and store branches for those commits in the same way Git does store its objects internally, with a folder per first two characters of the hash (or UUID) and a filename of the rest:\n./refs/heads\n\u251c\u2500\u2500 0a\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 8342d2-d6f1-4363-a287-a32948d04eaa\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 edcb13-433c-48d2-b683-a407c3a88f57\n\u2514\u2500\u2500 3d\n\u00a0\u00a0\u00a0 \u251c\u2500\u2500 243a27-114e-4eee-9bd8-2a51b01939e6\n\u00a0\u00a0\u00a0 \u251c\u2500\u2500 25965b-2da5-422d-abce-f3337fa97fc4\n\u00a0\u00a0\u00a0 \u2514\u2500\u2500 611b59-499a-48a0-b931-afe06192e778\nI could even reference the same post/object with multiple identifiers this way. Maybe I want to give every book a UUID, but also reference it by its ISBN. The downside to that, however, is that I need to update both branches to point to the same commit once I make an update to the book-page.\nDrawbacks of the approach\nThe multiple identifiers are probably not feasible, but there are some other drawbacks too. 
My main concern is that it is much harder to know whether or not you pushed all the changes: one would have to loop over all 10,000+ branches and perform a push or check. In this loop you would probably have to check out the branch as well. It is of course better to just push right after you make a change, but my point is that the \u2018just for sure\u2019 push is a lot of work.\nAnother drawback is actually the counter to what I initially was seeking: wiki-style pages might actually reference each other, and thus their version may depend on a version of another page. In this case, you would want the history to capture all the pages, just as the normal Git workings do. My problem was with the date-specific posts, but once you are mixing date-specific and wiki-style pages, you might be better off with the all-file history.\nOne problem this whole setup still does not solve is that of large files. The git status command is much faster for it does not have to check all the blobs in the repo to get an answer, but the files are still in the repo, taking up space. And there do exist other solutions for big files in git, such as Git LFS, the Large File Storage extension.\nAlso, I am still not 100% sure it is a good idea to store metadata in the Git commits and tags. When we already store the identifier in the tree objects, I thought I could also add the \u2018published at\u2019 date into the commit. Information about the author is already present, and as my site supports private posts, it also seemed like a reasonable location to store lists of people who can view the post. But again, maybe that should be stored in another way, and not be so deeply integrated with Git.\nConclusion\nIt was very helpful to write this all out, for by doing so I made up my mind: this is just all a bit too complicated and way too much deeply coupled to Git internals. 
I would be throwing out the \u2018just plain text files\u2019 principle, because I would store a lot of data in Git\u2019s objects, which are actually not plain text, since they are compressed with a certain algorithm.\nMy favourite Git GUI, Fork, is able to work with the monstrous repository my script produced, but many of the features are now strange and unusable, because the repo is so strangely set up. I would have to create my own software to maintain the integrity of the repo and that could lead to bugs and thus faulty data and maybe even data loss.\nI still think there are some nice properties to the system I describe above, but I won\u2019t be using it. But I learned a few new things about Git internals along the way, and I hope you did too.", "html": "<p>I have been wanting to rework the core of this website for a couple of years now, but since the current setup still works, and since I have many other things to do, and finally since I am very picky about how I <em>want</em> it to work, I have never really finished this part at all. This makes me stuck at the same version of this site, both visually and behind the scenes.</p>\n<p>Now that I am in between jobs I wanted to work on it a bit more, but I still do not have time enough to fully finish it. I guess it all comes down to a few choices I have to make regarding trade-offs. In order to make better decisions, I wanted to document my current storage and the one I have been working on. After I wrote it all out I think I am deciding not to use it, but it was a nice exploration so I will share it anyway.</p>\n<p>The description heavily leans on some knowledge about Git, which is software for versioning your code, or in this case, plain text files. 
I will try to explain a bit along the way but it is useful to have some familiarity with it already.</p>\n<p><em>tl;dr: I did fancy with Git but might not pursue.</em></p>\n<h2>How it is currently done</h2>\n<p>At the time of writing, my posts are stored in a plain text format with a lot of folders. It is derived from the format which the <a href=\"https://getkirby.com\">Kirby CMS</a> expects: folders for pages with text files within it, of which the name of the text file dictates the template that is being used to render the page. In my case, it is always <code>entry.txt</code>.</p>\n<p>I have one folder per year, one folder per day of the year and one folder per post of the day. In that last folder is the <code>entry.txt</code> and some other files related to the post, like pictures, but also metadata like received and sent <a href=\"https://indieweb.org/Webmention\">Webmentions</a>.</p>\n<p>An example of the <code>tree</code> view is below. It shows two entries on two days in one year. Note that days and years also have their own <code>.txt</code> file that is actually almost empty and pretty much useless in this setup, but still required for Kirby to work properly. 
The first day of the year my site is broken because it does not automatically create the required <code>year.txt</code> (or did I fix that finally?).</p>\n<pre><code>./content\n\u2514\u2500\u2500 2022\n\u00a0\u00a0\u00a0 \u251c\u2500\u2500 001\n\u00a0\u00a0\u00a0 \u2502\u00a0\u00a0 \u251c\u2500\u2500 1\n\u00a0\u00a0\u00a0 \u2502\u00a0\u00a0 \u2502\u00a0\u00a0 \u2514\u2500\u2500 entry.txt\n\u00a0\u00a0\u00a0 \u2502\u00a0\u00a0 \u2502\u00a0\u00a0 \u251c\u2500\u2500 .webmentions\n\u00a0\u00a0\u00a0 \u2502\u00a0\u00a0 \u2502\u00a0\u00a0 \u2502\u00a0\u00a0 \u251c\u2500\u2500 1641117941-f6bc3209f3f33f0cb8e4d92e5d46b5090b53aa11.json\n\u00a0\u00a0\u00a0 \u2502\u00a0\u00a0 \u2502\u00a0\u00a0 \u2502\u00a0\u00a0 \u2514\u2500\u2500 pings.json\n\u00a0\u00a0\u00a0 \u2502\u00a0\u00a0 \u2514\u2500\u2500 day.txt\n\u00a0\u00a0\u00a0 \u251c\u2500\u2500 002\n\u00a0\u00a0\u00a0 \u2502\u00a0\u00a0 \u251c\u2500\u2500 1\n\u00a0\u00a0\u00a0 \u2502\u00a0\u00a0 \u2502\u00a0\u00a0 \u251c\u2500\u2500 some_image.jpg\n\u00a0\u00a0\u00a0 \u2502\u00a0\u00a0 \u2502\u00a0\u00a0 \u2514\u2500\u2500 entry.txt\n\u00a0\u00a0\u00a0 \u2502\u00a0\u00a0 \u2514\u2500\u2500 day.txt\n\u00a0\u00a0\u00a0 \u2514\u2500\u2500 year.txt</code></pre>\n<p>Also note that there is a hidden <code>.webmentions</code> folder which contains a <code>pings.json</code> for all the sent webmentions and a JSON file with timestamp and content hash in the name for every received webmention. Not in the diagram but also present are some other folders for pages like <code>./login/login.txt</code> (because that is how Kirby works) and <code>./isbn/9780349411903/book.txt</code> (for <a href=\"https://seblog.nl/isbn/9780349411903\">books</a>).</p>\n<p>All these files are stored in a Git repository, which I <em>manually</em> update every so often (more bimonthly than weekly, sadly) via SSH to my server. 
I give it a very generic commit name (\u2019sync\u2019 or so) and push to a private repo on Github, which takes a while because the commits and the repo contain all those images and all those folders.</p>\n<h2>What is wrong with this</h2>\n<p>The main point of wanting to move off of this structure by Kirby is that it requires those placeholder pages in my content folder. I have no need for a <code>./login/login.txt</code>: the login page is just a feature of the software and should be handled by that part of the code. But at least that file contains some text for that page: the files for <code>year.txt</code> and <code>day.txt</code> are completely useless.</p>\n<p>Another point is that I want to make the Git commits automatically with every <a href=\"https://indieweb.org/Micropub\">Micropub</a> request: Git provides me with a history, but only if I actually commit the files once I changed them. Also, if I do not push the changes to Github, I have no backup of recent posts.</p>\n<p>The metadata of the received and sent Webmentions are now also available in the repo. This is nice, as it stores the information right next to the post it belongs to, but on the other hand it feels kind of polluting: these Webmentions contain content by <em>others</em>, whereas the rest of the content is by me. There is some other external content hidden in the <code>entry.txt</code> file but I\u2019ll get to that later.</p>\n<p>The last point is that the full size images are stored in the repo and every book and article about Git says that you should not use it to store big files in it. Doing a <code>git status</code> takes a while and also the pushes are much slower than any other Git repository I work with.</p>\n<h2>Git history: the Git Object Model</h2>\n<p>Before I go further into the avenue I am taking to solve the problem, I need to explain a bit about the Git Object Model, also known as \u2018how Git works under the hood\u2019. 
For a more thorough explanation, see <a href=\"https://git-scm.com/book/en/v2/Git-Internals-Git-Objects\">this chapter</a> in the Git Book.</p>\n<p>As you\u2019ll learn from that chapter, every object is represented as a file, referenced by the SHA1 hash of its contents. And there are three (no, four) types of objects:</p>\n<ul><li>\n<strong>blobs</strong>, which are the contents of files tracked by Git (and thus also the versions of those files)</li>\n<li>\n<strong>trees</strong>, which are listings of filenames with references to blobs or other trees. These trees together create the file structure of a version.</li>\n<li>\n<strong>commits</strong>, which are versions. A commit contains a reference to the root tree of the files you are tracking, a parent commit (the previous version) and a message and some metadata.</li>\n<li>\n<strong>tags</strong> are not mentioned by the chapter, but do exist: these look like commits, but create a way to store a message with a tag (making annotated tags, I\u2019ll explain plain tags soon).</li>\n</ul><p>Note that Git does not store diffs, it always stores the full contents of every version of the file, albeit zlib compressed and sometimes even packed in a single file, but let\u2019s not get into that right now.</p>\n<p>Git\u2019s tags and branches are just files and folders (they can have <code>/</code> in their names) which contain the hashes (names) of the specific commits they point to. The tags can also point to a tag object, which will then contain a message about the tag (which makes them \u2018annotated tags\u2019).</p>\n<p>This all brings me to the final point about my storage: for every new post, Git has to create a lot of files. First, it needs to add a blob for the <code>entry.txt</code>, possibly also a blob for the image and blobs for other metadata. Then it needs to create a tree for the entry folder, listing <code>entry.txt</code> and if present the filenames of the images and metadata files. 
Then it creates a new tree for the day, with all the existing entries plus the newly created one. Then it creates a new tree for the year, to point to this new version (tree) of the day. Then it creates a new tree for the root, with this new version of the year in it. And finally it also needs to create a commit object to point to that new root tree. Every update requires all these new trees. The trees are cheap, but it feels wasteful.</p>\n<p>Also note that a version of a file always relies on the version of all other files. This is what you want for code (code is designed to work with other code), but it does not feel like the right model for posts (I might come back to this, though).</p>\n<p>And there is also the question of identifiers: currently, my posts are identified as year, day of year, number (<code>2022/242/1</code>), but especially that number can only be found in the name of the folder and thus in the tree, not in the blob. I have not yet found a good solution for this, but maybe I am seeing too many problems.</p>\n<h2>The new setup</h2>\n<p>To get rid of some of the trees, I tried to apply my knowledge of the Git Object Model to store my posts in another way. To do this, I used the commands suggested by the chapter in the Git Book in a script that looped over all my files, to store them in a new blank repo to try things out.</p>\n<p>For each year, for each day, for each post, I would find the <code>entry.txt</code> and put the contents in a Git blob with <code>git hash-object -w ./content/2022/242/1/entry.txt</code>. The resulting hash I used in the command <code>git update-index --add --cacheinfo 100644 $hash entry.txt</code> to stage the file for a new tree. I would do that too for all images and related files, and then I would run <code>git write-tree</code> to write the tree and get the hash for it and <code>git commit-tree $hash -m \"commit\"</code> to create a commit based on it (with a bad message indeed). 
With that last hash I would run <code>git update-ref refs/heads/2022/242/1 $hash</code> to create a branch for that commit. (I\u2019m contemplating adding an annotated tag in between, for storing some metadata like the \u2018published at\u2019 date.)</p>\n<p>This would result in a Git repository with over 10,000 branches (I have many posts) neatly organised in folders per year and day. When you check out one of these branches, just the files of that post appear in the root of your repo: there are no folders. When you check out another branch, other files will appear. This is not how Git usually works, but it decouples all posts from one another.</p>\n<h2>Multiple types of pages</h2>\n<p>The posts I describe above all follow the year-day-number pattern because they are posts: they are sequential entries tied to a date. There are other objects I track, though, that are not date-specific. One example is topical wiki-style pages: these pages may receive edits over time, but their topic is not tied to a date. (I don\u2019t have these yet.)</p>\n<p>Another example is the books that I track to base my \u2018read\u2019 posts off. I haven\u2019t posted them in a while, but I would like to expand this book collection to also include other types of objects to reference, such as movies, games or locations. 
These objects also have no date attached to them, at least not a date meaningful to my posts.</p>\n<p>I could generate UUIDs for these objects and pages, and store branches for those commits in the same way Git stores its own objects internally, with a folder for the first two characters of the hash (or UUID) and a filename of the rest:</p>\n<pre><code>./refs/heads\n\u251c\u2500\u2500 0a\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 8342d2-d6f1-4363-a287-a32948d04eaa\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 edcb13-433c-48d2-b683-a407c3a88f57\n\u2514\u2500\u2500 3d\n\u00a0\u00a0\u00a0 \u251c\u2500\u2500 243a27-114e-4eee-9bd8-2a51b01939e6\n\u00a0\u00a0\u00a0 \u251c\u2500\u2500 25965b-2da5-422d-abce-f3337fa97fc4\n\u00a0\u00a0\u00a0 \u2514\u2500\u2500 611b59-499a-48a0-b931-afe06192e778</code></pre>\n<p>I could even reference the same post/object with multiple identifiers this way. Maybe I want to give every book a UUID, but also reference it by its ISBN. The downside to that, however, is that I need to update both branches to point to the same commit once I make an update to the book page.</p>\n<h2>Drawbacks of the approach</h2>\n<p>The multiple identifiers are probably not feasible, but there are some other drawbacks too. My main concern is that it is much harder to know whether or not you pushed all the changes: one would have to loop over all 10,000+ branches and perform a push or check. In this loop you would probably have to check out the branch as well. It is of course better to just push right after you make a change, but my point is that the \u2018just for sure\u2019 push is a lot of work.</p>\n<p>Another drawback actually runs counter to what I was initially seeking: wiki-style pages might actually reference each other, and thus their version may depend on a version of another page. In this case, you would want the history to capture all the pages, just as the normal Git workings do. 
My problem was with the date-specific posts, but once you are mixing date-specific and wiki-style pages, you might be better off with the all-file history.</p>\n<p>One problem this whole setup still does not solve is that of large files. The <code>git status</code> command is much faster, for it does not have to check all the blobs in the repo to get an answer, but the files are still in the repo, taking up space. And there do exist other solutions for big files in Git, such as <a href=\"https://git-lfs.github.com/\">Git LFS</a>, the Large File Storage extension.</p>\n<p>Also, I am still not 100% sure it is a good idea to store metadata in the Git commits and tags. Since we already store the identifier in the tree objects, I thought I could also add the \u2018published at\u2019 date to the commit. Information about the author is already present, and as my site supports private posts, it also seemed like a reasonable location to store lists of people who can view the post. But again, maybe that should be stored in another way, and not be so deeply integrated with Git.</p>\n<h2>Conclusion</h2>\n<p>It was very helpful to write this all out, for by doing so I made up my mind: this is just all a bit too complicated and way too deeply coupled to Git internals. I would be throwing out the \u2018just plain text files\u2019 principle, because I would store a lot of data in Git\u2019s objects, which are actually not plain text, since they are compressed with a certain algorithm.</p>\n<p>My favourite Git GUI <a href=\"https://git-fork.com/\">Fork</a> is able to work with the monstrous repository my script produced, but many of the features are now strange and unusable, because the repo is so strangely set up. I would have to create my own software to maintain the integrity of the repo, and that could lead to bugs and thus faulty data and maybe even data loss.</p>\n<p>I still think there are some nice properties to the system I describe above, but I won\u2019t be using it. 
But I learned a few new things about Git internals along the way, and I hope you did too.</p>" }, "author": { "type": "card", "name": "Sebastiaan Andeweg", "url": "https://seblog.nl/", "photo": "https://seblog.nl/photo.jpg" }, "post-type": "article", "_id": "31122609", "_source": "1366", "_is_read": true }
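The per-post plumbing workflow Sebastiaan describes above (blob, then tree, then commit, then a branch named after the post) can be sketched as a short shell script. This is only an illustration under assumptions: the temporary repo, the identity config, and the post text are hypothetical stand-ins, and a real script would loop over all posts.

```shell
#!/bin/sh
# Sketch of the branch-per-post plumbing workflow from the article.
# The temp repo, identity, and post contents here are hypothetical stand-ins.
set -e
REPO=$(mktemp -d)
git init -q "$REPO"
cd "$REPO"
git config user.name "example"            # commit-tree needs an identity
git config user.email "example@example.com"

# Store the post's content as a blob; git prints the object's hash.
mkdir -p content/2022/242/1
echo 'Hello from post 2022/242/1' > content/2022/242/1/entry.txt
blob=$(git hash-object -w content/2022/242/1/entry.txt)

# Stage it under a flat name, write the tree, and wrap that tree in a commit.
git update-index --add --cacheinfo 100644 "$blob" entry.txt
tree=$(git write-tree)
commit=$(git commit-tree "$tree" -m 'sync')

# Point a branch named after the post at that commit: no parent, no folders.
git update-ref refs/heads/2022/242/1 "$commit"
git for-each-ref refs/heads               # shows the new 2022/242/1 branch
```

Checking out `refs/heads/2022/242/1` would then put just `entry.txt` at the root of the working tree, exactly the decoupled-posts behaviour the article describes.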
Can you feel the energy?
{ "type": "entry", "published": "2022-08-30T07:52:54Z", "url": "https://adactio.com/links/19401", "category": [ "html", "energy", "markup", "creativity", "writing", "sharing", "indieweb", "frontend", "development", "complexity", "simplicity" ], "bookmark-of": [ "https://html.energy/" ], "content": { "text": "html energy\n\n\n\nCan you feel the energy?", "html": "<h3>\n<a class=\"p-name u-bookmark-of\" href=\"https://html.energy/\">\nhtml energy\n</a>\n</h3>\n\n<p>Can you feel the energy?</p>" }, "author": { "type": "card", "name": "Jeremy Keith", "url": "https://adactio.com/", "photo": "https://adactio.com/images/photo-150.jpg" }, "post-type": "bookmark", "_id": "31110137", "_source": "2", "_is_read": true }
{ "type": "entry", "published": "2022-08-23T03:31:07+00:00", "url": "https://werd.io/2022/farewell-house", "name": "Farewell, house", "content": { "text": "Well, we sold the house. A new family gets to enjoy the space, and the incredible surrounds. It\u2019s the start of a new chapter for us, too.I was there over the weekend, and the memories were overwhelming: the four walls of my parents\u2019 former bedroom held newly-staged furniture for show, but I could hear the laughter, remember talking to my mother at the end of the day, could hear her feeding tube apparatus rolling across the floor. So much happened there. It\u2019s sad to see it go, but the memories stay with us. All we\u2019re really leaving behind is wood, stone, and plaster.Throughout the sale, our agent Florence Sheffer was wonderful. She held our hands through the whole process, and was as fun to work with as she was knowledgable and connected. She consistently went above and beyond to help us. I\u2019d recommend her to anyone who wants to buy or sell a home in Santa Rosa and the surrounding area.I\u2019m not sure what I\u2019ll end up doing with the indieweb website that I made for the house. Probably I\u2019ll just let the domain expire. Here it is, archived for posterity on the Internet Archive.", "html": "<p><img src=\"https://werd.io/file/630449e5482c4b7bd97e0532/thumb.jpg\" alt=\"\" width=\"1024\" height=\"683\" /></p><p>Well, we sold <a href=\"https://werd.io/2022/my-indieweb-real-estate-website-part-two\">the house</a>. A new family gets to enjoy the space, and the <a href=\"https://5405spainave.com/our-experiences.html\">incredible surrounds</a>. 
It\u2019s the start of a new chapter for us, too.</p><p>I was there over the weekend, and the memories were overwhelming: the four walls of my parents\u2019 former bedroom held newly-staged furniture for show, but I could hear the laughter, remember talking to my mother at the end of the day, could hear her feeding tube apparatus rolling across the floor. So much happened there. It\u2019s sad to see it go, but the memories stay with us. All we\u2019re really leaving behind is wood, stone, and plaster.</p><p>Throughout the sale, our agent <a href=\"https://www.coldwellbankerhomes.com/ca/santa-rosa/agent/florence-sheffer/aid_37212/\">Florence Sheffer</a> was wonderful. She held our hands through the whole process, and was as fun to work with as she was knowledgable and connected. She consistently went above and beyond to help us. I\u2019d recommend her to anyone who wants to buy or sell a home in Santa Rosa and the surrounding area.</p><p>I\u2019m not sure what I\u2019ll end up doing with <a href=\"https://5405spainave.com\">the indieweb website that I made for the house</a>. Probably I\u2019ll just let the domain expire. <a href=\"https://web.archive.org/web/20220000000000*/5405spainave.com\">Here it is, archived for posterity on the Internet Archive.</a></p>" }, "author": { "type": "card", "name": "Ben Werdm\u00fcller", "url": "https://werd.io/profile/benwerd", "photo": "https://werd.io/file/5d388c5fb16ea14aac640912/thumb.jpg" }, "post-type": "article", "_id": "30985833", "_source": "191", "_is_read": true }
I really like this experiment that Jim is conducting on his own site. I might try to replicate it sometime!
{ "type": "entry", "published": "2022-08-22T10:08:45Z", "url": "https://adactio.com/links/19387", "category": [ "links", "linking", "hyperlinks", "hypertext", "well-known", "indieweb", "personal", "publishing" ], "bookmark-of": [ "https://blog.jim-nielsen.com/2022/well-known-links-resource/" ], "content": { "text": "A Well-Known Links Resource - Jim Nielsen\u2019s Blog\n\n\n\nI really like this experiment that Jim is conducting on his own site. I might try to replicate it sometime!", "html": "<h3>\n<a class=\"p-name u-bookmark-of\" href=\"https://blog.jim-nielsen.com/2022/well-known-links-resource/\">\nA Well-Known Links Resource - Jim Nielsen\u2019s Blog\n</a>\n</h3>\n\n<p>I really like this experiment that Jim is conducting on his own site. I might try to replicate it sometime!</p>" }, "author": { "type": "card", "name": "Jeremy Keith", "url": "https://adactio.com/", "photo": "https://adactio.com/images/photo-150.jpg" }, "post-type": "bookmark", "_id": "30971520", "_source": "2", "_is_read": true }
{ "type": "entry", "published": "2022-08-21T16:25:37+00:00", "url": "https://werd.io/2022/a-home-on-the-web-revisited", "name": "A home on the web, revisited", "content": { "text": "I\u2019ve been thinking a lot about redesigning my website, or even moving platforms. That\u2019s a bit of an emotional decision, because my website runs on Known, a codebase I mostly wrote myself, and started while I was taking care of my mother post-lung-transplant. It\u2019s the reason I\u2019m connected to the indieweb community, and the Matter community, and a lot of people I care deeply about. All those things are separable from this codebase now, but it got me there, and I\u2019m hugely grateful for that.The design is looking a little long in the tooth: I can make tweaks, and would commit them upstream into the open source project for other people to use, but I think there\u2019s something to be said for starting again completely, knowing what I know now.If I had unlimited time and energy - which, sadly isn\u2019t my situation; time and energy are both in very short supply right now - I\u2019d rebuild Known in something like Node, with a cleaner codebase. For now, I think I\u2019ll live with it, and clean what I can.Incidentally, I also cleaned up my public Obsidian site at werd.cloud. I intend to do more with non-linear, unbloggy writing there.", "html": "<p>I\u2019ve been thinking a lot about redesigning my website, or even moving platforms. That\u2019s a bit of an emotional decision, because my website runs on <a href=\"https://withknown.com\">Known</a>, a codebase I mostly wrote myself, and started while I was taking care of my mother post-lung-transplant. It\u2019s the reason I\u2019m connected to the indieweb community, and the Matter community, and a lot of people I care deeply about. 
All those things are separable from this codebase now, but it got me there, and I\u2019m hugely grateful for that.</p><p>The design is looking a little long in the tooth: I can make tweaks, and would commit them upstream into the open source project for other people to use, but I think there\u2019s something to be said for starting again completely, knowing what I know now.</p><p>If I had unlimited time and energy - which, sadly isn\u2019t my situation; time and energy are both in very short supply right now - I\u2019d rebuild Known in something like Node, with a cleaner codebase. For now, I think I\u2019ll live with it, and clean what I can.</p><p>Incidentally, I also cleaned up my public Obsidian site at <a href=\"https://werd.cloud\">werd.cloud</a>. I intend to do more with non-linear, unbloggy writing there.</p>" }, "author": { "type": "card", "name": "Ben Werdm\u00fcller", "url": "https://werd.io/profile/benwerd", "photo": "https://werd.io/file/5d388c5fb16ea14aac640912/thumb.jpg" }, "post-type": "article", "_id": "30961484", "_source": "191", "_is_read": true }
{ "type": "entry", "published": "2022-08-16T14:26:24Z", "url": "https://adactio.com/journal/19370", "category": [ "nocode", "democratisation", "indieweb", "blockprotocol", "frontend", "development", "tools", "personal", "publishing", "spring83", "protocols", "squarespace", "wix", "wordpress", "bubble", "webflow", "carrd", "hotglue", "neocities", "creativity", "professionalism" ], "syndication": [ "https://adactio.medium.com/b357297437ef" ], "name": "No code", "content": { "text": "When I wrote about democratising dev, I made brief mention of the growing \u201cno code\u201d movement:\n\n\n Personally, I would love it if the process of making websites could be democratised more. I\u2019ve often said that my nightmare scenario for the World Wide Web would be for its fate to lie in the hands of an elite priesthood of programmers with computer science degrees. So I\u2019m all in favour of no-code tools \u2026in theory.\n\n\nBut I didn\u2019t describe what no-code is, as I understand it.\n\nI\u2019m taking the term at face value to mean a mechanism for creating a website\u2014preferably on a domain you control\u2014without having to write anything in HTML, CSS, JavaScript, or any back-end programming language.\n\nBy that definition, something like WordPress.com (as opposed to WordPress itself) is a no-code tool:\n\n\n Create any kind of website. 
No code, no manuals, no limits.\n\n\nI\u2019d also put Squarespace in the same category:\n\n\n Start with a flexible template, then customize to fit your style and professional needs with our website builder.\n\n\nAnd its competitor, Wix:\n\n\n Discover the platform that gives you the freedom to create, design, manage and develop your web presence exactly the way you want.\n\n\nWebflow provides the same kind of service, but with a heavy emphasis on marketing websites:\n\n\n Your website should be a marketing asset, not an engineering challenge.\n\n\nBubble is trying to cover a broader base:\n\n\n Bubble lets you create interactive, multi-user apps for desktop and mobile web browsers, including all the features you need to build a site like Facebook or Airbnb.\n\n\nWhereas Carrd opts for a minimalist one-page approach:\n\n\n Simple, free, fully responsive one-page sites for pretty much anything.\n\n\nAll of those tools emphasise that you don\u2019t need to know how to code in order to have a professional-looking website. But there\u2019s a parallel universe of more niche no-code tools where the emphasis is on creativity and self-expression instead of slickness and professionalism.\n\nneocities.org:\n\n\n Create your own free website. Unlimited creativity, zero ads.\n\n\nmmm.page:\n\n\n Make a website in 5 minutes. Messy encouraged.\n\n\nhotglue.me:\n\n\n unique tool for web publishing & internet samizdat\n\n\nI\u2019m kind of fascinated by these two different approaches: professional vs. expressionist.\n\nI\u2019ve seen people grapple with this question when they decide to have their own website. Should it be a showcase of your achievements, almost like a portfolio? Or should it be a glorious mess of imagery and poetry to reflect your creativity? Could it be both? (Is that even doable? 
Or desirable?)\n\nRobin Sloan recently published his ideas\u2014and specs\u2014for a new internet protocol called Spring \u201983:\n\n\n Spring \u201983 is a protocol for the transmission and display of something I am calling a \u201cboard\u201d, which is an HTML fragment, limited to 2217 bytes, unable to execute JavaScript or load external resources, but otherwise unrestricted. Boards invite publishers to use all the richness of modern HTML and CSS. Plain text and blue links are also enthusiastically supported.\n\n\nIt\u2019s not a no-code tool (you need to publish in HTML), although someone could easily provide a no-code tool to sit on top of the protocol. Conceptually though, it feels like it\u2019s in a similar space to the chaotic good of neocities.org, mmm.page, and hotglue.me with maybe a bit of tilde.town thrown in.\n\nIt feels like something might be in the air. With Spring \u201983, the Block protocol, and other experiments, people are creating some interesting small pieces that could potentially be loosely joined. No code required.", "html": "<p>When I wrote about <a href=\"https://adactio.com/journal/19356\">democratising dev</a>, I made brief mention of the growing \u201cno code\u201d movement:</p>\n\n<blockquote>\n <p>Personally, I would love it if the process of making websites could be democratised more. I\u2019ve often said that my nightmare scenario for the World Wide Web would be for its fate to lie in the hands of an elite priesthood of programmers with computer science degrees. 
So I\u2019m all in favour of no-code tools \u2026in theory.</p>\n</blockquote>\n\n<p>But I didn\u2019t describe what no-code is, as I understand it.</p>\n\n<p>I\u2019m taking the term at face value to mean a mechanism for creating a website\u2014preferably on a domain you control\u2014without having to write anything in HTML, CSS, JavaScript, or any back-end programming language.</p>\n\n<p>By that definition, something like <a href=\"https://wordpress.com/\">WordPress.com</a> (as opposed to WordPress itself) is a no-code tool:</p>\n\n<blockquote>\n <p>Create any kind of website. No code, no manuals, no limits.</p>\n</blockquote>\n\n<p>I\u2019d also put <a href=\"https://www.squarespace.com/\">Squarespace</a> in the same category:</p>\n\n<blockquote>\n <p>Start with a flexible template, then customize to fit your style and professional needs with our website builder.</p>\n</blockquote>\n\n<p>And its competitor, <a href=\"https://www.wix.com/\">Wix</a>:</p>\n\n<blockquote>\n <p>Discover the platform that gives you the freedom to create, design, manage and develop your web presence exactly the way you want.</p>\n</blockquote>\n\n<p><a href=\"https://webflow.com/\">Webflow</a> provides the same kind of service, but with a heavy emphasis on marketing websites:</p>\n\n<blockquote>\n <p>Your website should be a marketing asset, not an engineering challenge.</p>\n</blockquote>\n\n<p><a href=\"https://bubble.io/\">Bubble</a> is trying to cover a broader base:</p>\n\n<blockquote>\n <p>Bubble lets you create interactive, multi-user apps for desktop and mobile web browsers, including all the features you need to build a site like Facebook or Airbnb.</p>\n</blockquote>\n\n<p>Whereas <a href=\"https://carrd.co/\">Carrd</a> opts for a minimalist one-page approach:</p>\n\n<blockquote>\n <p>Simple, free, fully responsive one-page sites for pretty much anything.</p>\n</blockquote>\n\n<p>All of those tools emphasise that you don\u2019t need to know how to code in order to have a 
professional-looking website. But there\u2019s a parallel universe of more niche no-code tools where the emphasis is on creativity and self-expression instead of slickness and professionalism.</p>\n\n<p><a href=\"https://neocities.org/\">neocities.org</a>:</p>\n\n<blockquote>\n <p>Create your own free website. Unlimited creativity, zero ads.</p>\n</blockquote>\n\n<p><a href=\"https://mmm.page/\">mmm.page</a>:</p>\n\n<blockquote>\n <p>Make a website in 5 minutes. Messy encouraged.</p>\n</blockquote>\n\n<p><a href=\"https://hotglue.me/\">hotglue.me</a>:</p>\n\n<blockquote>\n <p>unique tool for web publishing & internet samizdat</p>\n</blockquote>\n\n<p>I\u2019m kind of fascinated by these two different approaches: professional vs. expressionist.</p>\n\n<p>I\u2019ve seen people grapple with this question when they decide to have their own website. Should it be a showcase of your achievements, almost like a portfolio? Or should it be a glorious mess of imagery and poetry to reflect your creativity? Could it be both? (Is that even doable? Or desirable?)</p>\n\n<p>Robin Sloan recently published his ideas\u2014and specs\u2014for a new internet protocol called <a href=\"https://www.robinsloan.com/lab/specifying-spring-83/\">Spring \u201983</a>:</p>\n\n<blockquote>\n <p>Spring \u201983 is a protocol for the transmission and display of something I am calling a \u201cboard\u201d, which is an HTML fragment, limited to 2217 bytes, unable to execute JavaScript or load external resources, but otherwise unrestricted. Boards invite publishers to use all the richness of modern HTML and CSS. Plain text and blue links are also enthusiastically supported.</p>\n</blockquote>\n\n<p>It\u2019s not a no-code tool (you need to publish in HTML), although someone could easily provide a no-code tool to sit on top of the protocol. 
Conceptually though, it feels like it\u2019s in a similar space to the chaotic good of <a href=\"https://neocities.org/\">neocities.org</a>, <a href=\"https://mmm.page/\">mmm.page</a>, and <a href=\"https://hotglue.me/\">hotglue.me</a> with maybe a bit of <a href=\"https://tilde.town/\">tilde.town</a> thrown in.</p>\n\n<p>It feels like something might be in the air. With <a href=\"https://www.robinsloan.com/lab/specifying-spring-83/\">Spring \u201983</a>, <a href=\"https://blockprotocol.org/\">the Block protocol</a>, and other experiments, people are creating some interesting small pieces that could potentially be <a href=\"https://www.smallpieces.com/\">loosely joined</a>. No code required.</p>" }, "author": { "type": "card", "name": "Jeremy Keith", "url": "https://adactio.com/", "photo": "https://adactio.com/images/photo-150.jpg" }, "post-type": "note", "_id": "30874653", "_source": "2", "_is_read": true }
{ "type": "entry", "published": "2022-08-10T16:11:54Z", "url": "https://adactio.com/journal/19356", "category": [ "democratisation", "frontend", "development", "indieweb", "hosting", "blockprotocol", "patterns", "nocode", "publishing", "talks", "presentations" ], "syndication": [ "https://adactio.medium.com/db9ceab2156e" ], "name": "Democratising dev", "content": { "text": "I met up with a supersmart programmer friend of mine a little while back. He was describing some work he was doing with React. He was joining up React components. There wasn\u2019t really any problem-solving or debugging\u2014the individual components had already been thoroughly tested. He said it felt more like construction than programming.\n\nMy immediate thought was \u201cthat should be automated.\u201d\n\nOr at the very least, there should be some way for just about anyone to join those pieces together rather than it requiring a supersmart programmer\u2019s time. After all, isn\u2019t that the promise of design systems and components\u2014freeing us up to tackle the meaty problems instead of spending time on the plumbing?\n\nI thought about that conversation when I was listening to Laurie\u2019s excellent talk in Berlin last month.\n\nChatting to Laurie before the talk, he was very nervous about the conclusion that he had reached and was going to share: that the time is right for web development to be automated. He figured it would be an unpopular message. Heck, even he didn\u2019t like it.\n\nBut I reminded him that it\u2019s as old as the web itself. I\u2019ve seen videos from very early World Wide Web conferences where Tim Berners-Lee was railing against the idea that anyone would write HTML by hand. The whole point of his WorldWideWeb app was that anyone could create and edit web pages as easily as word processing documents. 
It\u2019s almost an accident of history that HTML happened to be just easy enough\u2014but also just powerful enough\u2014for many people to learn and use.\n\nAnyway, I thoroughly enjoyed Laurie\u2019s talk. (Except for a weird bit where he dunks on people moaning about \u201cthe fundamentals\u201d. I think it\u2019s supposed to be punching up, but I\u2019m not sure that\u2019s how it came across. As Chris points out, fundamentals matter \u2026at least when it comes to concepts like accessibility and performance. I think Laurie was trying to dunk on people moaning about fundamental technologies like languages and frameworks. Perhaps the message got muddled in the delivery.)\n\nI guess Laurie was kind of talking about this whole \u201cno code\u201d thing that\u2019s quite hot right now. Personally, I would love it if the process of making websites could be democratised more. I\u2019ve often said that my nightmare scenario for the World Wide Web would be for its fate to lie in the hands of an elite priesthood of programmers with computer science degrees. So I\u2019m all in favour of no-code tools \u2026in theory.\n\nThe problem is that unless they work 100%, and always produce good accessible performant code, then they\u2019re going to be another example of the law of leaky abstractions. If a no-code tool can get someone 90% of the way to what they want, that seems pretty good. But if that person then has to spend an inordinate amount of time on the remaining 10%, then all the good work of the no-code tool is somewhat wasted.\n\nFunnily enough, the person who coined that law, Joel Spolsky, spoke right after Laurie in Berlin. The two talks made for a good double bill.\n\n(I would link to Joel\u2019s talk but for some reason the conference is marking the YouTube videos as unlisted. 
If you manage to track down a URL for the video of Joel\u2019s talk, let me know and I\u2019ll update this post.)\n\nIn a way, Joel was making the same point as Laurie: why is it still so hard to do something on the web that feels like it should be easily repeatable?\n\nHe used the example of putting an event online. Right now, the most convenient way to do it is to use a third-party centralised silo like Facebook. It works, but now the business model of Facebook comes along for the ride. Your event is now something to be tracked and monetised by advertisers.\n\nYou could try doing it yourself, but this is where you\u2019ll run into the frustrations shared by Joel and Laurie. It\u2019s still too damn hard and complicated (even though we\u2019ve had years and years of putting events online). Despite what web developers tell themselves, making stuff for the web shouldn\u2019t be that complicated. As Trys put it:\n\n\n We kid ourselves into thinking we\u2019re building groundbreakingly complex systems that require bleeding-edge tools, but in reality, much of what we build is a way to render two things: a list, and a single item. Here are some users, here is a user. Here are your contacts, here are your messages with that contact. There ain\u2019t much more to it than that.\n\n\nAnd yet here we are. You can either have the convenience of putting something on a silo like Facebook, or you can have the freedom of doing it yourself, indie web style. But you can\u2019t have both it seems.\n\nThis is a criticism often levelled at the indie web. The barrier to entry to having your own website is too high. It\u2019s a valid criticism. To have your own website, you need to have some working knowledge of web hosting and at least some web technologies (like HTML).\n\nDon\u2019t get me wrong. I love having my own website. Like, I really love it. But I\u2019m also well aware that it doesn\u2019t scale. 
It\u2019s unreasonable to expect someone to learn new skills just to make a web page about, say, an event they want to publicise.\n\nThat\u2019s kind of the backstory to the project that Joel wanted to talk about: the block protocol. (Note: it has absolutely nothing to do with blockchain\u2014it\u2019s just an unfortunate naming collision.)\n\nThe idea behind the project is to create a kind of crowdsourced pattern library\u2014user interfaces for creating common structures like events, photos, tables, and lists. These patterns already exist in today\u2019s silos and content management systems, but everyone is reinventing the wheel independently. The goal of this project is to make these patterns interoperable, and therefore portable.\n\nAt first I thought that would be a classic /927 situation, but I\u2019m pleased to see that the focus of the project is not on formats (we\u2019ve been there and done that with microformats, RDF, schema.org, yada yada). The patterns might end up being web components or they might not. But the focus is on the interface. I think that\u2019s a good approach.\n\nThat approach chimes nicely with one of the principles of the indie web:\n\n\n UX and design is more important than protocols, formats, data models, schema etc. We focus on UX first, and then as we figure that out we build/develop/subset the absolutely simplest, easiest, and most minimal protocols and formats sufficient to support that UX, and nothing more. AKA UX before plumbing.\n\n\nThat said, I don\u2019t think this project is a cure-all. Interoperable (portable) chunks of structured content would be great, but that\u2019s just one part of the challenge of scaling the indie web. You also need to have somewhere to put those blocks.\n\nConvenience isn\u2019t the only thing you get from using a silo like Facebook, Twitter, Instagram, or Medium. 
You also get \u201cfree\u201d hosting \u2026until you don\u2019t (see GeoCities, MySpace, and many, many more).\n\nWouldn\u2019t it be great if everyone had a place on the web that they could truly call their own? Today you need to have an unnecessary degree of technical understanding to publish something at a URL you control.\n\nI\u2019d love to see that challenge getting tackled.", "html": "<p>I met up with a supersmart programmer friend of mine a little while back. He was describing some work he was doing with React. He was joining up React components. There wasn\u2019t really any problem-solving or debugging\u2014the individual components had already been thoroughly tested. He said it felt more like construction than programming.</p>\n\n<p>My immediate thought was \u201cthat should be automated.\u201d</p>\n\n<p>Or at the very least, there should be some way for just about anyone to join those pieces together rather than it requiring a supersmart programmer\u2019s time. After all, isn\u2019t that the promise of design systems and components\u2014freeing us up to tackle the meaty problems instead of spending time on the plumbing?</p>\n\n<p>I thought about that conversation when I was listening to <a href=\"https://www.youtube.com/watch?v=hWjT_OOBdOc\">Laurie\u2019s excellent talk in Berlin last month</a>.</p>\n\n<p>Chatting to Laurie before the talk, he was very nervous about the conclusion that he had reached and was going to share: that the time is right for web development to be automated. He figured it would be an unpopular message. Heck, even <em>he</em> didn\u2019t like it.</p>\n\n<p>But I reminded him that it\u2019s as old as the web itself. I\u2019ve seen videos from very early World Wide Web conferences where Tim Berners-Lee was railing against the idea that anyone would write HTML by hand. 
The whole point of <a href=\"https://worldwideweb30.com/\">his WorldWideWeb app</a> was that anyone could create and edit web pages as easily as word processing documents. It\u2019s almost an accident of history that HTML happened to be just easy enough\u2014but also just powerful enough\u2014for many people to learn and use.</p>\n\n<p>Anyway, I thoroughly enjoyed Laurie\u2019s talk. (Except for a weird bit where he dunks on people moaning about \u201cthe fundamentals\u201d. I think it\u2019s supposed to be punching up, but I\u2019m not sure that\u2019s how it came across. As Chris points out, <a href=\"https://gomakethings.com/fundamentals-matter/\">fundamentals matter</a> \u2026at least when it comes to <em>concepts</em> like accessibility and performance. I think Laurie was trying to dunk on people moaning about fundamental <em>technologies</em> like languages and frameworks. Perhaps the message got muddled in the delivery.)</p>\n\n<p>I guess Laurie was kind of talking about this whole \u201cno code\u201d thing that\u2019s quite hot right now. Personally, I would love it if the process of making websites could be democratised more. I\u2019ve often said that my nightmare scenario for the World Wide Web would be for its fate to lie in the hands of an elite priesthood of programmers with computer science degrees. So I\u2019m all in favour of no-code tools \u2026in theory.</p>\n\n<p>The problem is that unless they work 100%, and always produce good accessible performant code, then they\u2019re going to be another example of <a href=\"https://www.joelonsoftware.com/2002/11/11/the-law-of-leaky-abstractions/\">the law of leaky abstractions</a>. If a no-code tool can get someone 90% of the way to what they want, that seems pretty good. 
But if that person then has to spend an inordinate amount of time on the remaining 10% then all the good work of the no-code tool is somewhat wasted.</p>\n\n<p>Funnily enough, the person who coined that law, Joel Spolsky, spoke right after Laurie in Berlin. The two talks made for a good double bill.</p>\n\n<p>(I would link to Joel\u2019s talk but for some reason the conference is marking the YouTube videos as unlisted. If you manage to track down a URL for the video of Joel\u2019s talk, let me know and I\u2019ll update this post.)</p>\n\n<p>In a way, Joel was making the same point as Laurie: why is it still so hard to do something on the web that feels like it should be easily repeatable?</p>\n\n<p>He used the example of putting an event online. Right now, the most convenient way to do it is to use a third-party centralised silo like Facebook. It works, but now the business model of Facebook comes along for the ride. Your event is now something to be tracked and monetised by advertisers.</p>\n\n<p>You could try doing it yourself, but this is where you\u2019ll run into the frustrations shared by Joel and Laurie. It\u2019s still too damn hard and complicated (even though we\u2019ve had years and years of putting events online). Despite what web developers tell themselves, making stuff for the web shouldn\u2019t be that complicated. <a href=\"https://www.trysmudford.com/blog/city-life/\">As Trys put it</a>:</p>\n\n<blockquote>\n <p>We kid ourselves into thinking we\u2019re building groundbreakingly complex systems that require bleeding-edge tools, but in reality, much of what we build is a way to render two things: a list, and a single item. Here are some users, here is a user. Here are your contacts, here are your messages with that contact. There ain\u2019t much more to it than that.</p>\n</blockquote>\n\n<p>And yet here we are. 
You can either have the convenience of putting something on a silo like Facebook, or you can have the freedom of doing it yourself, <a href=\"https://indieweb.org/\">indie web</a> style. But you can\u2019t have both it seems.</p>\n\n<p>This is a criticism often levelled at <a href=\"https://indieweb.org/\">the indie web</a>. The barrier to entry to having your own website is too high. It\u2019s a valid criticism. To have your own website, you need to have some working knowledge of web hosting and at least some web technologies (like HTML).</p>\n\n<p>Don\u2019t get me wrong. I love having my own website. Like, I <em>really</em> love it. But I\u2019m also well aware that it doesn\u2019t scale. It\u2019s unreasonable to expect someone to learn new skills just to make a web page about, say, an event they want to publicise.</p>\n\n<p>That\u2019s kind of the backstory to the project that Joel wanted to talk about: <a href=\"https://blockprotocol.org/\">the block protocol</a>. (Note: it has absolutely nothing to do with block<em>chain</em>\u2014it\u2019s just an unfortunate naming collision.)</p>\n\n<p>The idea behind the project is to create a kind of crowdsourced pattern library\u2014user interfaces for creating common structures like events, photos, tables, and lists. These patterns already exist in today\u2019s silos and content management systems, but everyone is reinventing the wheel independently. The goal of this project is to make these patterns interoperable, and therefore portable.</p>\n\n<p>At first I thought that would be <a href=\"https://xkcd.com/927/\">a classic <code>/927</code> situation</a>, but I\u2019m pleased to see that the focus of the project is <em>not</em> on formats (we\u2019ve been there and done that with microformats, RDF, schema.org, yada yada). The patterns might end up being web components or they might not. But the focus is on the <em>interface</em>. 
I think that\u2019s a good approach.</p>\n\n<p>That approach chimes nicely with one of <a href=\"https://indieweb.org/principles\">the principles of the indie web</a>:</p>\n\n<blockquote>\n <p>UX and design is more important than protocols, formats, data models, schema etc. We focus on UX first, and then as we figure that out we build/develop/subset the absolutely simplest, easiest, and most minimal protocols and formats sufficient to support that UX, and nothing more. AKA UX before plumbing.</p>\n</blockquote>\n\n<p>That said, I don\u2019t think this project is a cure-all. Interoperable (portable) chunks of structured content would be great, but that\u2019s just one part of the challenge of scaling the indie web. You also need to have somewhere to put those blocks.</p>\n\n<p>Convenience isn\u2019t the only thing you get from using a silo like Facebook, Twitter, Instagram, or Medium. You also get \u201cfree\u201d hosting \u2026until you don\u2019t (see GeoCities, MySpace, and <a href=\"https://indieweb.org/site-deaths\">many, many more</a>).</p>\n\n<p>Wouldn\u2019t it be great if everyone had a place on the web that they could truly call their own? Today you need to have an unnecessary degree of technical understanding to publish something at a URL you control.</p>\n\n<p>I\u2019d love to see that challenge getting tackled.</p>" }, "author": { "type": "card", "name": "Jeremy Keith", "url": "https://adactio.com/", "photo": "https://adactio.com/images/photo-150.jpg" }, "post-type": "article", "_id": "30772785", "_source": "2", "_is_read": true }
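Trys's point quoted in the article above, that much of what we build is a way to render "a list, and a single item", can be made concrete with a toy sketch. This is purely illustrative (the event data, field names, and functions here are invented; they are not part of the block protocol or any post above): the single-item view is written once, and the list view is just a loop over it.

```python
# Toy illustration of "a list, and a single item".
# All names and data here are invented for the example.

def render_item(event):
    """Render one event as a minimal HTML fragment."""
    return f"<article><h2>{event['name']}</h2><p>{event['date']}</p></article>"

def render_list(events):
    """Render a collection by reusing the single-item view."""
    return "<ul>" + "".join(f"<li>{render_item(e)}</li>" for e in events) + "</ul>"

events = [
    {"name": "Homebrew Website Club", "date": "2022-10-05"},
    {"name": "Indie Web Camp", "date": "2022-09-24"},
]
print(render_list(events))
```

A crowdsourced pattern library in the block-protocol sense would, roughly speaking, standardise the interface of pieces like `render_item` so they could be swapped between sites.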
{ "type": "entry", "published": "2022-08-02T20:38:43+00:00", "url": "https://werd.io/2022/comments-are-hard", "name": "Comments are hard", "content": { "text": "Building a comments system is really hard. I tried to build one for Known, which powers my website, but found that spammers circumvented it surprisingly easily. You can flag spam using Akismet (which was built for WordPress but works across platforms), but this process tends to require you to pre-screen comments and make them public after the fact. That\u2019s a fair amount of work and a fair amount of unnecessary friction for building community.If you have a blog - you do have a blog, don\u2019t you? - you can post a response to one of my posts and send a webmention. But not everybody has their own website, and the barrier to entry for sending webmentions is pretty high.So I\u2019ve been looking for something else.Fred Wilson gave up on comments and asks people to discuss on Twitter. That works pretty well, but I\u2019m not really into forcing people to use a particular service. That\u2019s also why I\u2019m not particularly into using Disqus embeds, which also unnecessarily track you across sites. Finally, I was using Cactus Comments, which is based on the decentralized Matrix network for a while, but it occasionally seemed to break in ways that were disconcerting for site visitors. (It\u2019s still a very cool project.)I love comments, and I guess that means I\u2019m writing my own system again. To do so means getting into an arms race with spammers, which I\u2019m not very excited about, but I don\u2019t see an alternative that I\u2019m completely happy about.Do you run a blog with comments? How do you deal with these issues? I\u2019d love to learn from you.", "html": "<p>Building a comments system is really hard. I tried to build one for <a href=\"https://withknown.com\">Known</a>, which powers my website, but found that spammers circumvented it surprisingly easily. 
You can flag spam using <a href=\"https://akismet.com/\">Akismet</a> (which was built for WordPress but works across platforms), but this process tends to require you to pre-screen comments and make them public after the fact. That\u2019s a fair amount of work and a fair amount of unnecessary friction for building community.</p><p>If you have a blog - you <em>do</em> have a blog, don\u2019t you? - you can post a response to one of my posts and send a <a href=\"https://indieweb.org/Webmention\">webmention</a>. But not everybody has their own website, and the barrier to entry for sending webmentions <a href=\"https://jamesmead.org/blog/2020-10-13-sending-webmentions-from-a-static-website\">is pretty high</a>.</p><p>So I\u2019ve been looking for something else.</p><p><a href=\"https://avc.com/\">Fred Wilson gave up on comments and asks people to discuss on Twitter</a>. That works pretty well, but I\u2019m not really into forcing people to use a particular service. That\u2019s also why I\u2019m not particularly into using <a href=\"https://disqus.com/\">Disqus</a> embeds, which also <a href=\"https://fatfrogmedia.com/delete-disqus-comments-wordpress/\">unnecessarily track you across sites</a>. Finally, I was using <a href=\"https://cactus.chat/\">Cactus Comments</a>, which is based on the decentralized <a href=\"https://matrix.org/\">Matrix network</a> for a while, but it occasionally seemed to break in ways that were disconcerting for site visitors. (It\u2019s still a very cool project.)</p><p>I love comments, and I guess that means I\u2019m writing my own system again. To do so means getting into an arms race with spammers, which I\u2019m not very excited about, but I don\u2019t see an alternative that I\u2019m completely happy about.</p><p>Do you run a blog with comments? How do you deal with these issues? 
I\u2019d love to learn from you.</p>" }, "author": { "type": "card", "name": "Ben Werdm\u00fcller", "url": "https://werd.io/profile/benwerd", "photo": "https://werd.io/file/5d388c5fb16ea14aac640912/thumb.jpg" }, "post-type": "article", "_id": "30628125", "_source": "191", "_is_read": true }
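Ben notes above that the barrier to entry for sending webmentions is high. One reason is that before a sender can notify a site, it must discover that site's webmention endpoint. Below is a minimal, hedged sketch of the HTML half of that discovery step (a conforming sender must also check the HTTP `Link` header and handle more edge cases; the function names here are invented for illustration):

```python
# Sketch of Webmention endpoint discovery from a page's HTML.
# Assumption: we already fetched the page body; HTTP Link headers not handled.
from html.parser import HTMLParser
from urllib.parse import urljoin

class EndpointFinder(HTMLParser):
    """Find the first <link> or <a> whose rel list contains 'webmention'."""
    def __init__(self):
        super().__init__()
        self.endpoint = None

    def handle_starttag(self, tag, attrs):
        if self.endpoint is not None or tag not in ("link", "a"):
            return
        attrs = dict(attrs)
        rels = (attrs.get("rel") or "").split()
        if "webmention" in rels and "href" in attrs:
            self.endpoint = attrs["href"]

def discover_endpoint(page_url, html):
    """Return the absolute webmention endpoint URL, or None if absent."""
    finder = EndpointFinder()
    finder.feed(html)
    if finder.endpoint is None:
        return None
    # Relative hrefs are resolved against the page URL.
    return urljoin(page_url, finder.endpoint)
```

Once the endpoint is known, sending the webmention itself is a single form-encoded POST with `source` and `target` parameters, which is where a receiving blog would pick up the mention.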
I have days where I can write a well researched blog post in a few hours. And I have days where I don’t feel like writing. Or I want to add one more thing but don’t know how to speak my mind. So this is a reminder to myself: just hit publish.
{ "type": "entry", "published": "2022-08-02T15:20:05Z", "url": "https://adactio.com/links/19343", "category": [ "indieweb", "personal", "publishing", "writing", "sharing", "blogging", "blogs" ], "bookmark-of": [ "https://marcoheine.com/blog/just-hit-publish/" ], "content": { "text": "Just hit publish | Marco Heine - Freelance Web Developer\n\n\n\n\n I have days where I can write a well researched blog post in a few hours. And I have days where I don\u2019t feel like writing. Or I want to add one more thing but don\u2019t know how to speak my mind. So this is a reminder to myself: just hit publish.", "html": "<h3>\n<a class=\"p-name u-bookmark-of\" href=\"https://marcoheine.com/blog/just-hit-publish/\">\nJust hit publish | Marco Heine - Freelance Web Developer\n</a>\n</h3>\n\n<blockquote>\n <p>I have days where I can write a well researched blog post in a few hours. And I have days where I don\u2019t feel like writing. Or I want to add one more thing but don\u2019t know how to speak my mind. So this is a reminder to myself: <strong>just hit publish</strong>.</p>\n</blockquote>" }, "author": { "type": "card", "name": "Jeremy Keith", "url": "https://adactio.com/", "photo": "https://adactio.com/images/photo-150.jpg" }, "post-type": "bookmark", "_id": "30623598", "_source": "2", "_is_read": true }
{ "type": "entry", "published": "2022-08-01T18:07:32+00:00", "url": "https://werd.io/2022/building-an-inclusive-independent-open-newsroom", "name": "Building an inclusive, independent, open newsroom", "content": { "text": "I didn\u2019t make a big announcement about it, but for the last few months I\u2019ve been working as the CTO at The 19th, a nonprofit newsroom that reports on stories at the intersection of gender, politics, and policy.It was a necessary move for me: I needed stronger work/life balance for my own health, and I also wanted to feel like I was helping in the midst of a very tumultuous social and political climate. It was also a move back into the core ideas my career has been built on.The 19th was launched in January 2020 by veterans of the Texas Tribune and ProPublica who understood the need to report stories from a more diverse perspective than is normally offered by an industry still dominated by white men. I\u2019ve been following it from the beginning as a prominent subscription in my RSS reader, and was deeply impressed by the detailed, empathetic, unsensational reporting.The 19th\u2019s technical platform is largely based on self-hosted WordPress, with some interesting theme modifications that allow for visualizations and in-page interactivity. (Did I immediately add simple microformats support to articles as soon as I arrived? Yes, I did.) Importantly for me, the team cares about the same privacy issues I do: particularly in an environment where abortion-related surveillance is becoming a safety issue, dealing with audience data intentionally is crucial.Openness is core to what The 19th is. Its financial backers are published in full, so you know exactly who is bankrolling the non-profit. Since the beginning, the newsroom has also made its content available via a Creative Commons license that allows anyone else to republish it for free. Those partners have included the Guardian, USA Today, Teen Vogue, PBS NewsHour, Ms. 
Magazine, RawStory, and many more. It could be you, too, if you wanted to: you can find the full HTML source to republish on every article page. Because The 19th\u2019s newsroom is more diverse, every republished article furthers its mission of improving representation in the news media overall.It\u2019s an obvious extension to this strategy to make our technology available as well, via a permissive open source license. That\u2019s my ambition: to package up some of our supporting tools and make them available in a way that other newsrooms can take advantage of. If they have the technical capability to collaborate on building them, great; if not, they can still pick up the technology and use them. Open source itself has a giant diversity problem, and if we can apply an equity lens to building our technical community in the same way we build our journalistic ecosystem, perhaps we can be a part of the solution there, too.I\u2019ve long been a member of the indieweb community, which encourages everyone to own and control their own website and domain. Both technically and ideologically, the overlaps with news are obvious: every newsroom must own its relationship with its audience in order to build trust, understand their needs, and above all to build community. Trends on the web have been in the opposite direction for most of the last decade: social media platforms like Facebook seek to intermediate and monetize that relationship, stripping newsrooms of resources and undermining the ability of voters to receive information in the process. Building an independent website for representative news content and community, and then helping others to do the same, is an important mission.Right now it\u2019s a very small team: Abby Blachman and me. I\u2019m looking for a third member of the technology team to help with everything I\u2019ve discussed.And so far, it\u2019s been joyful. Abby is amazing; everyone is. 
I\u2019ve never been part of an organization - least of all a remote team - that understands the need for a supportive culture so clearly. As an organization, it continues to listen and evolve. The people team - led by Jayo Miko Macasaquit - has put procedures and benefits in place that I haven\u2019t seen in organizations ten times the size. To build representative, empathetic news, you first need to build a representative, empathetic organization, and that\u2019s what\u2019s happening here. I hope they do more to tell their story and share what they\u2019re doing, because it\u2019s genuinely phenomenal.I can\u2019t believe my luck; it\u2019s a real privilege to be on this team. I want to be a good ambassador: although I knew about the journalism, which should always be front and center, I wasn\u2019t as familiar with the organization\u2019s ecosystem and openness chops before I joined. It was the nicest of surprises, and I want to tell you more about it. We don\u2019t have an internal blog right now, so from time to time I\u2019ll discuss what we\u2019ve been working on over here.I\u2019m also working on building some tools of my own to support my management process; the first is all about building a consistent culture of transparent feedback. More on that when I\u2019m ready.In the meantime, if you have any questions, I\u2019d love to answer them. 
And if you happen to be interested in our technology position, you should definitely apply.", "html": "<p><a href=\"https://19thnews.org\"><img src=\"https://werd.io/file/62e8178716d1b4005910d4b2/thumb.png\" alt=\"\" width=\"1024\" height=\"462\" /></a></p><p>I didn\u2019t make a big announcement about it, but for the last few months I\u2019ve been working as the CTO at <a href=\"https://19thnews.org\">The 19th</a>, a nonprofit newsroom that reports on stories at the intersection of gender, politics, and policy.</p><p>It was a necessary move for me: I needed stronger work/life balance for my own health, and I also wanted to feel like I was helping in the midst of a very tumultuous social and political climate. It was also a move back into the core ideas my career has been built on.</p><p><a href=\"https://www.washingtonpost.com/lifestyle/new-media-outlet-covering-the-intersection-of-women-and-politics-launches-as-2020-election-kicks-off/2020/01/25/34c2a2ac-3ee9-11ea-baca-eb7ace0a3455_story.html\">The 19th was launched in January 2020</a> by veterans of the Texas Tribune and ProPublica who understood the need to report stories from a more diverse perspective than is normally offered by an industry <a href=\"https://womensmediacenter.com/news-features/why-white-male-dominance-of-news-media-is-so-persistent\">still dominated by white men</a>. I\u2019ve been following it from the beginning as a prominent subscription in my RSS reader, and was deeply impressed by the detailed, empathetic, unsensational reporting.</p><p>The 19th\u2019s technical platform is largely based on self-hosted WordPress, with some interesting theme modifications that allow for visualizations and in-page interactivity. (Did I immediately add <a href=\"https://indieweb.org/microformats\">simple microformats support</a> to articles as soon as I arrived? Yes, I did.) 
Importantly for me, the team cares about the same privacy issues I do: particularly in an environment where <a href=\"https://www.pbs.org/newshour/economy/why-some-fear-that-big-tech-data-could-become-a-tool-for-abortion-surveillance\">abortion-related surveillance is becoming a safety issue</a>, dealing with audience data intentionally is crucial.</p><p>Openness is core to what The 19th is. Its financial backers <a href=\"https://19thnews.org/membership/\">are published in full</a>, so you know exactly who is bankrolling the non-profit. Since the beginning, the newsroom has also made its content available <a href=\"https://19thnews.org/republishing-guidelines/\">via a Creative Commons license</a> that allows anyone else to republish it for free. Those partners have included the Guardian, USA Today, Teen Vogue, PBS NewsHour, Ms. Magazine, RawStory, and many more. It could be you, too, if you wanted to: you can find the full HTML source to republish on every article page. Because The 19th\u2019s newsroom <a href=\"https://19thnews.org/team/\">is more diverse</a>, every republished article furthers its mission of improving representation in the news media overall.</p><p>It\u2019s an obvious extension to this strategy to make our <em>technology</em> available as well, via a permissive open source license. That\u2019s my ambition: to package up some of our supporting tools and make them available in a way that other newsrooms can take advantage of. If they have the technical capability to collaborate on building them, great; if not, they can still pick up the technology and use them. 
<a href=\"https://en.wikipedia.org/wiki/Diversity_in_open-source_software\">Open source itself has a giant diversity problem</a>, and if we can apply an equity lens to building our technical community in the same way we build our journalistic ecosystem, perhaps we can be a part of the solution there, too.</p><p>I\u2019ve long been a member of the <a href=\"https://indieweb.org\">indieweb</a> community, which encourages everyone to own and control their own website and domain. Both technically and ideologically, the overlaps with news are obvious: every newsroom must own its relationship with its audience in order to build trust, understand their needs, and above all to build community. Trends on the web have been in the opposite direction for most of the last decade: social media platforms like Facebook seek to intermediate and <em>monetize</em> that relationship, stripping newsrooms of resources and undermining the ability of voters to receive information in the process. Building an independent website for representative news content and community, and then helping others to do the same, is an important mission.</p><p>Right now it\u2019s a very small team: <a href=\"https://twitter.com/abbyblachman\">Abby Blachman</a> and me. I\u2019m looking for <a href=\"https://19thnews.org/19th-news-web-applications-engineer-job-posting/\">a third member of the technology team</a> to help with everything I\u2019ve discussed.</p><p>And so far, it\u2019s been <em>joyful</em>. Abby is amazing; everyone is. I\u2019ve never been part of an organization - least of all a remote team - that understands the need for a supportive culture so clearly. As an organization, it <a href=\"https://www.businessinsider.com/how-the-19th-case-study-survived-pandemic-2021-2\">continues to listen and evolve</a>. 
The people team - led by <a href=\"https://twitter.com/jayomiko\">Jayo Miko Macasaquit</a> - has put procedures and benefits in place that I haven\u2019t seen in organizations ten times the size. To build representative, empathetic news, you first need to build a representative, empathetic organization, and that\u2019s what\u2019s happening here. I hope they do more to tell their story and share what they\u2019re doing, because it\u2019s genuinely phenomenal.</p><p>I can\u2019t believe my luck; it\u2019s a real privilege to be on this team. I want to be a good ambassador: although I knew about the journalism, which should always be front and center, I wasn\u2019t as familiar with the organization\u2019s ecosystem and openness chops before I joined. It was the nicest of surprises, and I want to tell you more about it. We don\u2019t have an internal blog right now, so from time to time I\u2019ll discuss what we\u2019ve been working on over here.</p><p>I\u2019m also working on building some tools of my own to support my management process; the first is all about building a consistent culture of transparent feedback. More on that when I\u2019m ready.</p><p>In the meantime, if you have any questions, I\u2019d love to answer them. <a href=\"https://19thnews.org/19th-news-web-applications-engineer-job-posting/\">And if you happen to be interested in our technology position, you should definitely apply.</a></p>" }, "author": { "type": "card", "name": "Ben Werdm\u00fcller", "url": "https://werd.io/profile/benwerd", "photo": "https://werd.io/file/5d388c5fb16ea14aac640912/thumb.jpg" }, "post-type": "article", "_id": "30606285", "_source": "191", "_is_read": true }
{ "type": "entry", "published": "2022-08-01T00:12:20+00:00", "url": "https://werd.io/2022/the-quest-for-a-memex", "category": [ "Technology" ], "bookmark-of": [ "https://www.kevinmarks.com/memex.html" ], "name": "The Quest for a Memex", "content": { "text": "\u201cThis made me think about making a new view of a post, where the inbound and outbound links are shown in the margins of the page, and the flow is more dynamic. The inbound links can be found with Webmention, which is already here, but scanning the outbound links and making previews for them is a separate task. It seems related though - if a webmention tool can provide a preview for inbound links, why not for outbound ones too?\u201d #Technology\n [Link]", "html": "<p>\u201cThis made me think about making a new view of a post, where the inbound and outbound links are shown in the margins of the page, and the flow is more dynamic. The inbound links can be found with Webmention, which is already here, but scanning the outbound links and making previews for them is a separate task. It seems related though - if a webmention tool can provide a preview for inbound links, why not for outbound ones too?\u201d <a href=\"https://werd.io/tag/Technology\" class=\"p-category\">#Technology</a></p>\n <p>[<a href=\"https://www.kevinmarks.com/memex.html\">Link</a>]</p>" }, "author": { "type": "card", "name": "Ben Werdm\u00fcller", "url": "https://werd.io/profile/benwerd", "photo": "https://werd.io/file/5d388c5fb16ea14aac640912/thumb.jpg" }, "post-type": "bookmark", "_id": "30592587", "_source": "191", "_is_read": true }
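The outbound-link scanning Kevin Marks describes in the entry above could start with something as small as the sketch below: collect every link in a post's HTML that points off-site, so previews could later be generated for them. This is an illustrative sketch, not code from his memex post (the function names are invented, and fetching or rendering the previews is not shown):

```python
# Sketch: gather a post's outbound links as the first step toward
# generating margin previews for them. Names invented for illustration.
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse

class LinkCollector(HTMLParser):
    """Collect absolute URLs of <a> links pointing to other hosts."""
    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.outbound = []

    def handle_starttag(self, tag, attrs):
        if tag != "a":
            return
        href = dict(attrs).get("href")
        if not href:
            return
        absolute = urljoin(self.base_url, href)
        # Treat a link as outbound when its host differs from the page's.
        if urlparse(absolute).netloc != urlparse(self.base_url).netloc:
            self.outbound.append(absolute)

def outbound_links(base_url, html):
    collector = LinkCollector(base_url)
    collector.feed(html)
    return collector.outbound
```

Each collected URL would then be fetched and summarised (title, image, excerpt) to build the preview, much as webmention tools already do for inbound links.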
{ "type": "entry", "published": "2022-07-31T14:32:34-0400", "url": "https://martymcgui.re/2022/07/31/switching-costs-for-an-indieauth-server/", "category": [ "site-update", "IndieAuth" ], "name": "Switching costs for an IndieAuth server", "content": { "text": "One of the things I love about building with IndieWeb building blocks is that (sometimes through more work than anticipated) you can swap out pieces of your site without (much) disruption because the seams between building blocks are well specified.\n\nSo, this is me documenting how I replaced my IndieAuth setup to stop leaning on Aaron\u2019s IndieAuth.com (which has been on the verge of retiring any day now for some years).\n\nPlease excuse this long and rambling post. Feel free to skip around!\n\n\n\nWhat is IndieAuth?\n\nAt a high-level, IndieAuth is a way to sign in using your website as an identity.\n\nWithout digging too deeply into the plumbing, you start by updating your website\u2019s homepage with some extra header info that says \u201cmy IndieAuth service is over there\u201d. From there, you can sign into services that support IndieAuth (like the IndieWeb wiki, the social feed reader service Aperture, and more). And you can use your IndieAuth server to protect your own services, such as a Micropub server that can create new posts on your site.\n\nWhy switch?\n\nI\u2019ve been using indieauth.com as my IndieAuth setup since late 2016 because it was easy to set up, because it uses something called RelMeAuth to let me sign in using services I already trust (like GitHub).\n\nHowever, indieauth.com has been growing stale as the IndieAuth spec has evolved. indieauth.com\u2019s maintainer has been discussing replacing it since at least 2017.\n\nThe inciting incident for my switch was looking at OwnCast - a self-hostable video streaming service with attached chatroom. 
OwnCast\u2019s chat allows using IndieAuth to sign in, which sounded great to me, but OwnCast\u2019s implementation wasn\u2019t expecting indieauth.com\u2019s old-style response format.\n\nWhy set up my own?\n\nThere are a bunch of IndieAuth server implementations listed on the IndieWeb wiki. However: the simplest of them (selfauth + mintoken) are now out of date with the spec and haven\u2019t been replaced yet. Others tend to be built into other CMSes like WordPress. A couple of standalone servers exist but are in languages I am not comfortable working in (hello Rust and Go) or have deployment requirements I wasn\u2019t thrilled about supporting (hello Rails).\n\nI found Taproot/IndieAuth on this page and that looked promising - a PHP library intended to be deployed within a fairly standard PHP web app style (\u201cany PSR-7 compatible app\u201d).\n\nI knew this would be some work but it sounded promising and so I began the week-ish long process of actually writing and deploying that \u201cPSR-7 compatible app\u201d built on taproot/indieauth.\n\ntl;dr say hello to Belding\n\nBelding is an \u201cPSR-7 compatible\u201d PHP web app that provides a standalone IndieAuth endpoint for a single user with a simple password form for authentication.\n\nI would love to go into the process and pitfalls of putting it together, but instead I\u2019ll link to the README where you can learn more about how it works, how to use it, its limitations, etc.\n\nSwitching costs for an IndieAuth server\n\n1. Tell the World\n\nFirst up, you\u2019ll need to update the headers on your site. I switched my authorization_endpoint and token_endpoint to my new server from indieauth.com. 
Since I\u2019m updating to support the latest spec, I also added the indieauth-metadata header (which should eventually replace the other two).\n\nNow that your site is advertising the new IndieAuth server, you will likely experience logouts or weird access denied responses everywhere that your site has been used with IndieAuth.\n\n2. Tell your own services\n\nI needed to configure my own \u201crelying apps\u201d so they know to talk to the new server when checking that a request is allowed. This list thankfully wasn\u2019t too long.\n\nMy Micropub server\nMy Micropub media server\nBeyond the effort of getting my server working as an indieauth.com replacement, I also took steps to try and support the latest in the IndieAuth spec. That meant updating these micropub servers to use the new \u201ctoken introspection\u201d feature which has some tighter security requirements.\n\n(Note: I initially made the same change for my self-hosted copy of Aperture, but found it would be too many changes for me to take on at the moment. Instead, I updated my IndieAuth server to allow the older and less secure token verification method used by Aperture.)\n\n3. 
Sign in to all the things again \\o|\n\nOnce all my relying apps were talking to the new IndieAuth server, it was time to re-sign in to all the things:\n\nThe IndieWeb wiki\n\nMonocle social reader client\n\nQuill Micropub posting client\nOwnYourSwarm\niOS apps\n\nmicro.blog\nIndigenous\n\nManually issue new IndieAuth tokens for automation that uses them:\n\nMy personal YouTube manager\n\nMy command line tool for media uploads\niOS shortcuts like the one I use to post Caturday.\n\nTakeaways\n\nThere are a lot of improvements I\u2019d like to make to Belding, but in general I am happy that it seems to work and, outside of the time to develop the server itself, my website and the tools I use to manage it were only broken for about a day.\n\nI think it\u2019d also be really nice to wrap up Belding a bit so it\u2019s easy to configure and deploy on free-and-cheap platforms like fly.io. I believe it should be easier for folks to spin up and control their own IndieWeb building blocks where possible!\n\nIt\u2019s also become clear to me that there are some user- and developer-experience holes around setting up relying apps. The auth requirements for token introspection, for example, mean you need a way to manage access for each \u201cbackend\u201d you have that relies on IndieAuth to protect itself!\n\nLong story short (too late), I am finally able to sign into OwnCast server chat using my domain. 
\ud83d\ude02\ud83d\ude05", "html": "<p>One of the things I love about building with <a href=\"https://indieweb.org/Category:building-blocks\">IndieWeb building blocks</a> is that (sometimes through more work than anticipated) you can swap out pieces of your site without (much) disruption because the seams between building blocks are well specified.</p>\n\n<p>So, this is me documenting how I replaced my <a href=\"https://indieauth.spec.indieweb.org/\">IndieAuth</a> setup to stop leaning on <a href=\"https://aaronparecki.com/\">Aaron\u2019s</a> <a href=\"https://indieauth.com/\">IndieAuth.com</a> (which has been on the verge of retiring any day now for some years).</p>\n\n<p>Please excuse this long and rambling post. Feel free to skip around!</p>\n\n<h2>What is IndieAuth?</h2>\n\n<p>At a high level, IndieAuth is a way to sign in using your website as an identity.</p>\n\n<p>Without digging too deeply into the plumbing, you start by updating your website\u2019s homepage with some extra header info that says \u201cmy IndieAuth service is over there\u201d. From there, you can sign into services that support IndieAuth (like the <a href=\"https://indieauth.org/\">IndieWeb wiki</a>, the social feed reader service <a href=\"https://aperture.p3k.io/\">Aperture</a>, and more). And you can use your IndieAuth server to protect your own services, such as a <a href=\"https://indieweb.org/Micropub\">Micropub server</a> that can create new posts on your site.</p>\n\n<h2>Why switch?</h2>\n\n<p>I\u2019ve been using indieauth.com as my IndieAuth setup since late 2016 because it was easy to set up and because it uses something called <a href=\"https://indieweb.org/RelMeAuth\">RelMeAuth</a> to let me sign in using services I already trust (like GitHub).</p>\n\n<p>However, indieauth.com has been growing stale as the IndieAuth spec has evolved. 
indieauth.com\u2019s maintainer has been <a href=\"https://chat.indieweb.org/dev/2017-12-17#t1513485617181300\">discussing replacing it since at least 2017</a>.</p>\n\n<p>The inciting incident for my switch was looking at <a href=\"https://owncast.online/\">OwnCast</a> - a self-hostable video streaming service with attached chatroom. OwnCast\u2019s chat allows using IndieAuth to sign in, which sounded great to me, but OwnCast\u2019s implementation wasn\u2019t expecting indieauth.com\u2019s old-style response format.</p>\n\n<h3>Why set up my own?</h3>\n\n<p>There are <a href=\"https://indieweb.org/IndieAuth#Server_Implementations\">a bunch of IndieAuth server implementations listed on the IndieWeb wiki</a>. However: the simplest of them (selfauth + mintoken) are now out of date with the spec and haven\u2019t been replaced yet. Others tend to be built into other CMSes like WordPress. A couple of standalone servers exist but are in languages I am not comfortable working in (hello Rust and Go) or have deployment requirements I wasn\u2019t thrilled about supporting (hello Rails).</p>\n\n<p>I found <a href=\"https://github.com/taproot/indieauth\">Taproot/IndieAuth</a> on this page and that looked promising - a PHP library intended to be deployed within a fairly standard PHP web app style (\u201cany PSR-7 compatible app\u201d).</p>\n\n<p>I knew this would be some work, but it sounded promising, and so I began the week-ish long process of actually writing and deploying that \u201cPSR-7 compatible app\u201d built on taproot/indieauth.</p>\n\n<h2>tl;dr say hello to Belding</h2>\n\n<p><a href=\"https://git.schmarty.net/schmarty/belding\">Belding</a> is a \u201cPSR-7 compatible\u201d PHP web app that provides a standalone IndieAuth endpoint for a single user with a simple password form for authentication.</p>\n\n<p>I would love to go into the process and pitfalls of putting it together, but instead I\u2019ll link to the <a 
href=\"https://git.schmarty.net/schmarty/belding#user-content-belding\">README</a> where you can learn more about how it works, how to use it, its limitations, etc.</p>\n\n<h2>Switching costs for an IndieAuth server</h2>\n\n<h3>1. Tell the World</h3>\n\n<p>First up, you\u2019ll need to update the headers on your site. I switched my <code>authorization_endpoint</code> and <code>token_endpoint</code> to my new server from indieauth.com. Since I\u2019m updating to support the latest spec, I also added the <code>indieauth-metadata</code> header (which should eventually replace the other two).</p>\n\n<p>Now that your site is advertising the new IndieAuth server, you will likely experience logouts or weird access denied reponses everywhere that your site has been used with IndieAuth.</p>\n\n<h3>2. Tell your own services</h3>\n\n<p>I needed to configure my own \u201crelying apps\u201d so they know to talk to the new server when checking that a request is allowed. This list thankfully wasn\u2019t too long.</p>\n\n<ul><li><a href=\"https://github.com/martymcguire/micropub-1\">My Micropub server</a></li>\n<li><a href=\"https://github.com/martymcguire/spano\">My Micropub media server</a></li>\n</ul><p>Beyond the effort of getting my server working as an indieauth.com replacement, I also took steps to try and support the latest in the IndieAuth spec. That meant updating these micropub servers to use the new \u201ctoken introspection\u201d feature which has some tighter security requirements.</p>\n\n<p>(<em><strong>Note:</strong> I initially made the same change for my self-hosted copy of Aperture, but found it would be too many changes for me to take on at the moment. Instead, I updated by IndieAuth server to allow the older and less secure token verification method used by Aperture.</em>)</p>\n\n<h3>3. 
Sign in to all the things again \\o|</h3>\n\n<p>Once all my relying apps were talking to the new IndieAuth server, it was time to re-sign in to all the things:</p>\n\n<ul><li><a href=\"https://indieweb.org/\">The IndieWeb wiki</a></li>\n<li>\n<a href=\"https://indieweb.org/Monocle\">Monocle</a> social reader client</li>\n<li>\n<a href=\"https://indieweb.org/Quill\">Quill</a> Micropub posting client</li>\n<li><a href=\"https://indieweb.org/OwnYourSwarm\">OwnYourSwarm</a></li>\n<li>iOS apps\n\n<ul><li><a href=\"https://indieweb.org/Micro.blog\">micro.blog</a></li>\n<li><a href=\"https://indieweb.org/Indigenous_for_iOS\">Indigenous</a></li>\n</ul></li>\n<li>Manually issue new IndieAuth tokens for automation that uses them:\n\n<ul><li>My <a href=\"https://martymcgui.re/2020/10/03/unsubscribing-from-youtubes-recommender/\">personal YouTube manager</a>\n</li>\n<li>My command line tool for media uploads</li>\n<li>iOS shortcuts like the one I use to post <a href=\"https://martymcgui.re/tag/caturday/\">Caturday</a>.</li>\n</ul></li>\n</ul><h2>Takeaways</h2>\n\n<p>There are <a href=\"https://git.schmarty.net/schmarty/belding#user-content-possible-future-work\">a lot of improvements I\u2019d like to make to Belding</a>, but in general I am happy that it seems to work and, outside of the time to develop the server itself, my website and the tools I use to manage it were only broken for about a day.</p>\n\n<p>I think it\u2019d also be really nice to wrap up Belding a bit so it\u2019s easy to configure and deploy on free-and-cheap platforms like <a href=\"https://fly.io/\">fly.io</a>. I believe it should be easier for folks to spin up and control their own IndieWeb building blocks where possible!</p>\n\n<p>It\u2019s also become clear to me that there are some user- and developer-experience holes around setting up relying apps. 
The auth requirements for token introspection, for example, mean you need a way to manage access for each \u201cbackend\u201d you have that relies on IndieAuth to protect itself!</p>\n\n<p>Long story short (too late), I am finally able to sign into OwnCast server chat using my domain. \ud83d\ude02\ud83d\ude05</p>" }, "author": { "type": "card", "name": "Marty McGuire", "url": "https://martymcgui.re/", "photo": "https://martymcgui.re/images/logo.jpg" }, "post-type": "note", "_id": "30589793", "_source": "175", "_is_read": true }
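The "Tell the World" step in the post above boils down to a few `<link>` elements on your homepage. A minimal sketch, assuming placeholder `example.com` endpoint URLs (the real paths depend on where your IndieAuth server is deployed); the `rel` values are the ones the IndieAuth spec defines:

```html
<!-- Hypothetical URLs: substitute wherever your IndieAuth server actually lives. -->
<!-- Current-spec discovery: points clients at a metadata document listing all endpoints. -->
<link rel="indieauth-metadata" href="https://example.com/indieauth/metadata">
<!-- Older-style endpoints, still advertised for clients that predate metadata discovery. -->
<link rel="authorization_endpoint" href="https://example.com/indieauth/authorize">
<link rel="token_endpoint" href="https://example.com/indieauth/token">
```

Swapping servers is then just pointing these `href`s somewhere new, which is why the post's "relying apps" and sign-ins break immediately once the change goes live.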