A quick write-up by https://zerokspot.com about using their self-hostable Webmention service.
{
"type": "entry",
"published": "2020-06-15T11:34:00.00000-07:00",
"url": "https://v2.jacky.wtf/post/b6928a39-e17f-453d-a5b7-2a326b5a71a6",
"bookmark-of": [
"https://zerokspot.com/weblog/2020/06/14/setting-up-webmentiond/"
],
"content": {
"text": "A quick write up by https://zerokspot.com about using their self hostable Webmention service.",
"html": "<p>A quick write up by <a href=\"https://zerokspot.com\">https://zerokspot.com</a> about using their self hostable Webmention service.</p>"
},
"author": {
"type": "card",
"name": "",
"url": "https://v2.jacky.wtf",
"photo": null
},
"post-type": "bookmark",
"refs": {
"https://zerokspot.com/weblog/2020/06/14/setting-up-webmentiond/": {
"type": "entry",
"url": "https://zerokspot.com/weblog/2020/06/14/setting-up-webmentiond/",
"content": {
"text": "Since Yarmo asked this morning about how to use webmentiond behind a proxy I noticed that I had completely forgotten to provide a proper getting-started guide. I\u2019m not yet sure how I\u2019ll organise documentation for the project in the long run so I\u2019ll just give you a quick tutorial here using my own setup as example \ud83d\ude42\nGoal\nThe goal of this guide is that you have a webmentiond instance running on your server (in this example yoursite.com), can log into the management interface, and other people can discover your webmention endpoint on your website.\nEnvironment/requirements\nIn my own setup I use Caddy 2 as proxy server but you can use pretty much anything there. The only really hard requirements of webmentiond are that you have Docker running on your server and that your server can connect to an SMTP server (I really like the service offered by Postmark) in order for webmentiond to send out login/authentication tokens via e-mail.\nIn our case, webmentiond should be made available on https://yoursite.com/webmentions/ and I can log into its admin interface through the fictional email address login@yoursite.com.\nStep 1: Setting up webmentiond as service\nSince I use systemd to handle pretty much all services on my services, let\u2019s also use it for webmentiond. The service will be run as the user webmentiond and store all its data into /var/lib/webmentiond belonging to that user:\n$ adduser --home /var/lib/webmentiond webmentiond\n\n# Get the UID of the newly created user:\n$ id webmentiond\n\nNext, I\u2019d suggest pulling the zerok/webmentiond:latest image in order to make sure that Docker is set up properly:\n$ docker pull zerok/webmentiond:latest\n\nFinally, you have to create a service definition (i.e. 
/etc/systemd/system/webmentiond.service with the following content:\n[Unit]\nDescription=Webmentiond\nAfter=network-online.target\nStartLimitInterval=0\n\n[Service]\nExecStart=/usr/bin/docker run --rm \\\n -e \"MAIL_USER=...\" \\\n -e \"MAIL_PASSWORD=...\" \\\n -e \"MAIL_HOST=...\" \\\n -e \"MAIL_PORT=...\" \\\n -e \"MAIL_FROM=no-reply@yoursite.com\" \\\n -v /var/lib/webmentiond:/data \\\n -p 35080:8080 \\\n -u UID_OF_WEBMENTIOND_USER \\\n zerok/webmentiond:latest \\\n --addr 0.0.0.0:8080 \\\n --allowed-target-domains yoursite.com \\\n --auth-jwt-secret SOME_RANDOM_SECRET_STRING \\\n --auth-admin-emails login@yoursite.com \\\n --public-url https://yoursite.com/webmentions\nRestart=always\nRestartSec=5\n\n\n[Install]\nWantedBy=multi-user.target\n\nOnce that file is in place, start the service:\n$ systemctl daemon-reload\n$ systemctl enable webmentiond\n$ systemctl start webmentiond\n\nNow check if the service was able to start up:\n$ journalctl -f -u webmentiond\nJun 14 09:50:09 ubuntu-512mb-fra1-01 systemd[1]: Started Webmentiond.\nJun 14 09:50:10 ubuntu-512mb-fra1-01 docker[70940]: 9:50AM INF UI path served from /var/lib/webmentiond/frontend\nJun 14 09:50:10 ubuntu-512mb-fra1-01 docker[70940]: 9:50AM INF Listening on 0.0.0.0:8080...\n\nIf you see something else, please make sure that you\u2019ve replaced all those placeholders in the service file \ud83d\ude42\nStep 2: Update reverse proxy config\nIn order to make webmentiond available through https://yoursite.com/webmentions I\u2019ve added the following lines to the host configuration in my Caddyfile:\n# Prevent people from grabbing the exposed Prometheus\n# metrics:\nrespond /webmentions/metrics 404\n\n# Forward /webmentions/*:\nroute /webmentions/* {\n uri strip_prefix /webmentions\n reverse_proxy localhost:35080\n}\n\nNow the UI should be available through https://yoursite.com/webmentions/ui/:\nStep 3: Try to log in\nNow that you have the UI available, try to log in using the email you set in the service 
definition (in this case login@yoursite.com). You should receive a login token within the next minute or so that you can redeem on the authentication page linked to from the login page. If you didn\u2019t receive a mail, make sure your email settings are correct and that the mail wasn\u2019t flagged as spam or something like that.\nStep 4: Link to the /receive/ endpoint\nIn order for folks to be able to actually send you mentions, they have to know where to send them. The workflow goes something like this:\nAnother blog post with the URL https://a.com/post mentions https://yoursite.com/post.\nThe server at a.com (or another service altogether) checks https://yoursite.com/post looking for a link-element in the markup that looks like this: <link rel=\"webmention\" href=\"https://yoursite.com/webmentions/receive\"> .\nIt finds it, it will send a simple HTTP request to it indicating that https://a.com/post mentioned https://yoursite.com/post.\nIn our case, let\u2019s make sure that we have a working receive endpoint:\n$ curl -i https://yoursite.com/webmentions/receive\nHTTP/2 405\n[...]\n\nLooking good \ud83d\ude42\nNow you have to add the following line to your blog\u2019s head-section:\n<link rel=\"webmention\" href=\"https://yoursite.com/webmentions/receive\">\n\nWith this done, people should be able to send you mentions \ud83d\ude42 One thing, though: Any mention that is sent to the receive-endpoint is first checked for validity (i.e. that the source of the mention really actually links to its target) and only then does it show up in the UI. Once it\u2019s there, you have to explicitly approve a mention before it can be shown on your website. 
This is there in order to prevent people abusing your blog as link-heaven.\nStep 5: Display mentions\nWebmentiond also comes with a little widget that you can embed in your website for rendering mentions:\n<div class=\"webmentions webmentions-container\"\n data-endpoint=\"https://yoursite.com/webmentions\"\n data-target=\"https://yoursite.com/url/to/post\"></div>\n<script src=\"https://yoursite.com/webmentions/ui/dist/widget.js\"></script>\n\nThis should be it. I\u2019m pretty sure that I\u2019ve forgotten a thing or two or that something is completely unintelligible so please let me know \ud83d\ude05",
"html": "<p>Since <a href=\"https://fosstodon.org/@yarmo/104341371114206112\">Yarmo asked this morning</a> about how to use <a href=\"https://github.com/zerok/webmentiond\">webmentiond</a> behind a proxy I noticed that I had completely forgotten to provide a proper getting-started guide. I\u2019m not yet sure how I\u2019ll organise documentation for the project in the long run so I\u2019ll just give you a quick tutorial here using my own setup as example \ud83d\ude42</p>\n<h2>Goal</h2>\n<p>The goal of this guide is that you have a webmentiond instance running on your server (in this example <code>yoursite.com</code>), can log into the management interface, and other people can discover your webmention endpoint on your website.</p>\n<h2>Environment/requirements</h2>\n<p>In my own setup I use <a href=\"https://caddyserver.com/\">Caddy 2</a> as proxy server but you can use pretty much anything there. The only really hard requirements of webmentiond are that you have <strong>Docker</strong> running on your server and that your server can connect to an <strong>SMTP server</strong> (I really like the service offered by <a href=\"https://postmarkapp.com/\">Postmark</a>) in order for webmentiond to send out login/authentication tokens via e-mail.</p>\n<p>In our case, webmentiond should be made available on <code>https://yoursite.com/webmentions/</code> and I can log into its admin interface through the fictional email address <code>login@yoursite.com</code>.</p>\n<h2>Step 1: Setting up webmentiond as service</h2>\n<p>Since I use systemd to handle pretty much all services on my services, let\u2019s also use it for webmentiond. 
The service will be run as the user <code>webmentiond</code> and store all its data into <code>/var/lib/webmentiond</code> belonging to that user:</p>\n<pre><code>$ adduser --home /var/lib/webmentiond webmentiond\n\n# Get the UID of the newly created user:\n$ id webmentiond\n</code></pre>\n<p>Next, I\u2019d suggest pulling the <code>zerok/webmentiond:latest</code> image in order to make sure that Docker is set up properly:</p>\n<pre><code>$ docker pull zerok/webmentiond:latest\n</code></pre>\n<p>Finally, you have to create a service definition (i.e. <code>/etc/systemd/system/webmentiond.service</code> with the following content:</p>\n<pre><code>[Unit]\nDescription=Webmentiond\nAfter=network-online.target\nStartLimitInterval=0\n\n[Service]\nExecStart=/usr/bin/docker run --rm \\\n -e \"MAIL_USER=...\" \\\n -e \"MAIL_PASSWORD=...\" \\\n -e \"MAIL_HOST=...\" \\\n -e \"MAIL_PORT=...\" \\\n -e \"MAIL_FROM=no-reply@yoursite.com\" \\\n -v /var/lib/webmentiond:/data \\\n -p 35080:8080 \\\n -u UID_OF_WEBMENTIOND_USER \\\n zerok/webmentiond:latest \\\n --addr 0.0.0.0:8080 \\\n --allowed-target-domains yoursite.com \\\n --auth-jwt-secret SOME_RANDOM_SECRET_STRING \\\n --auth-admin-emails login@yoursite.com \\\n --public-url https://yoursite.com/webmentions\nRestart=always\nRestartSec=5\n\n\n[Install]\nWantedBy=multi-user.target\n</code></pre>\n<p>Once that file is in place, start the service:</p>\n<pre><code>$ systemctl daemon-reload\n$ systemctl enable webmentiond\n$ systemctl start webmentiond\n</code></pre>\n<p>Now check if the service was able to start up:</p>\n<pre><code>$ journalctl -f -u webmentiond\nJun 14 09:50:09 ubuntu-512mb-fra1-01 systemd[1]: Started Webmentiond.\nJun 14 09:50:10 ubuntu-512mb-fra1-01 docker[70940]: 9:50AM INF UI path served from /var/lib/webmentiond/frontend\nJun 14 09:50:10 ubuntu-512mb-fra1-01 docker[70940]: 9:50AM INF Listening on 0.0.0.0:8080...\n</code></pre>\n<p>If you see something else, please make sure that you\u2019ve replaced all those 
placeholders in the service file \ud83d\ude42</p>\n<h2>Step 2: Update reverse proxy config</h2>\n<p>In order to make webmentiond available through <code>https://yoursite.com/webmentions</code> I\u2019ve added the following lines to the host configuration in my Caddyfile:</p>\n<pre><code># Prevent people from grabbing the exposed Prometheus\n# metrics:\nrespond /webmentions/metrics 404\n\n# Forward /webmentions/*:\nroute /webmentions/* {\n uri strip_prefix /webmentions\n reverse_proxy localhost:35080\n}\n</code></pre>\n<p>Now the UI should be available through <code>https://yoursite.com/webmentions/ui/</code>:</p>\n<img src=\"https://zerokspot.com/media/2020/Screenshot%202020-06-14%20at%2012.05.28.png\" alt=\"Screenshot%202020-06-14%20at%2012.05.28.png\" /><h2>Step 3: Try to log in</h2>\n<p>Now that you have the UI available, try to log in using the email you set in the service definition (in this case <code>login@yoursite.com</code>). You should receive a login token within the next minute or so that you can redeem on the authentication page linked to from the login page. If you didn\u2019t receive a mail, make sure your email settings are correct and that the mail wasn\u2019t flagged as spam or something like that.</p>\n<h2>Step 4: Link to the /receive/ endpoint</h2>\n<p>In order for folks to be able to actually send you mentions, they have to know where to send them. 
The workflow goes something like this:</p>\n<ol><li>Another blog post with the URL <code>https://a.com/post</code> mentions <code>https://yoursite.com/post</code>.</li>\n<li>The server at a.com (or another service altogether) checks <code>https://yoursite.com/post</code> looking for a link-element in the markup that looks like this: <code><link rel=\"webmention\" href=\"https://yoursite.com/webmentions/receive\"></code> .</li>\n<li>It finds it, it will send a simple HTTP request to it indicating that <code>https://a.com/post</code> mentioned <code>https://yoursite.com/post</code>.</li>\n</ol><p>In our case, let\u2019s make sure that we have a working receive endpoint:</p>\n<pre><code>$ curl -i https://yoursite.com/webmentions/receive\nHTTP/2 405\n[...]\n</code></pre>\n<p>Looking good \ud83d\ude42</p>\n<p>Now you have to add the following line to your blog\u2019s head-section:</p>\n<pre><code><link rel=\"webmention\" href=\"https://yoursite.com/webmentions/receive\">\n</code></pre>\n<p>With this done, people should be able to send you mentions \ud83d\ude42 One thing, though: Any mention that is sent to the receive-endpoint is first checked for validity (i.e. that the source of the mention really actually links to its target) and only then does it show up in the UI. Once it\u2019s there, you have to explicitly <em>approve</em> a mention before it can be shown on your website. This is there in order to prevent people abusing your blog as link-heaven.</p>\n<h2>Step 5: Display mentions</h2>\n<p>Webmentiond also comes with a little widget that you can embed in your website for rendering mentions:</p>\n<pre><code><div class=\"webmentions webmentions-container\"\n data-endpoint=\"https://yoursite.com/webmentions\"\n data-target=\"https://yoursite.com/url/to/post\"></div>\n<script src=\"https://yoursite.com/webmentions/ui/dist/widget.js\"></script>\n</code></pre>\n<p>This <em>should</em> be it. 
I\u2019m pretty sure that I\u2019ve forgotten a thing or two or that something is completely unintelligible so please let me know \ud83d\ude05</p>"
},
"author": {
"type": "card",
"name": "Horst Gutmann",
"url": "https://zerokspot.com/",
"photo": null
},
"post-type": "note"
}
},
"_id": "12450978",
"_source": "1886",
"_is_read": true
}
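The endpoint discovery and sending flow described in Step 4 of the guide above (find the link rel="webmention" element in the target page, then POST the source and target URLs to it) can be sketched roughly as follows. This is a minimal illustration, not the guide's own code: the helper names are hypothetical, the HTML matching is deliberately crude, and no actual network request is made.

```python
import re
from urllib.parse import urlencode, urljoin

def discover_webmention_endpoint(html, base_url):
    """Rough sketch: find the first <link rel="webmention"> href in a page.
    A real client would use a proper HTML parser and also check the
    HTTP Link header, per the Webmention spec."""
    match = re.search(r'<link[^>]*rel="webmention"[^>]*href="([^"]*)"', html)
    if not match:
        return None
    # The endpoint may be relative, so resolve it against the page URL.
    return urljoin(base_url, match.group(1))

def build_webmention_body(source, target):
    """Form-encoded body for the POST to the discovered endpoint."""
    return urlencode({"source": source, "target": target})

# Mirroring the guide's example markup:
page = '<head><link rel="webmention" href="/webmentions/receive"></head>'
endpoint = discover_webmention_endpoint(page, "https://yoursite.com/post")
# endpoint -> "https://yoursite.com/webmentions/receive"
body = build_webmention_body("https://a.com/post", "https://yoursite.com/post")
```

A real sender would then POST that form-encoded body to the endpoint, which is why the guide's curl check against /webmentions/receive returning HTTP 405 for a GET is a good sign.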
{
"type": "entry",
"author": {
"name": "Manton Reece",
"url": "https://www.manton.org/",
"photo": "https://micro.blog/manton/avatar.jpg"
},
"url": "https://www.manton.org/2020/06/15/embedding-microblog-posts.html",
"name": "Embedding microblog posts with Quotebacks",
"content": {
"html": "<p>For a long time I\u2019ve wanted to add quoting tools to Micro.blog, so that it\u2019s even easier to embed text from other blog posts and add your own thoughts. Markdown block quotes are fairly easy, but do require a little more copy/paste work and some editing.</p>\n\n<p>So I was really interested in the recent launch of <a href=\"https://quotebacks.net/\">Quotebacks</a>, from Tom Critchlow and Toby Shorin. We\u2019ve needed a kind of \u201cembed microblog post\u201d feature in Micro.blog, similar to the embedding that Twitter and Facebook have. Quotebacks are exactly that, but they work for anything on the web.</p>\n\n<p>I\u2019d like to run with Quotebacks and see where it leads us. For now, I\u2019ve added \u201cEmbed\u201d links on the Micro.blog Favorites page on the web. This is an experiment. It will likely change, either rolling out in some form to all the platforms, or based on feedback maybe we\u2019ll go in a different direction.</p>\n\n<p>I\u2019ve also <a href=\"https://github.com/microdotblog/quotebacks\">forked the Quotebacks repository</a> and tweaked the JavaScript with a couple changes:</p>\n\n<ul><li>Instead of routing the favicons through Google\u2019s cache, Micro.blog\u2019s version just uses the profile photos on your account directly with a new <code>data-avatar</code> attribute.</li>\n<li>Because copied microblog posts always have a profile photo, it is displayed larger with rounded corners.</li>\n</ul><p>How does this look? 
I\u2019m embedding a microblog post below using this feature:</p>\n\n<p></p><blockquote cite=\"https://www.manton.org/2020/06/12/weve-posted-core.html\"><p>We\u2019ve posted <a href=\"https://coreint.org/2020/06/episode-424-the-worst-transition/\">Core Int 424</a>, talking with <a href=\"https://micro.blog/danielpunkass\">@danielpunkass</a> about the ARM rumor, how it compares to previous transitions, WWDC, and more.</p>\nManton Reece<a href=\"https://www.manton.org/2020/06/12/weve-posted-core.html\">https://www.manton.org/2020/06/12/weve-posted-core.html</a></blockquote><p>I\u2019ve kept the \u201cEmbed\u201d links isolated to the Favorites page so we can try a few things without disrupting the rest of your Micro.blog workflow. There are other questions to answer, such as how this should integrate with sending Webmentions, but I think having something like this to play with is a good first step.</p>",
"text": "For a long time I\u2019ve wanted to add quoting tools to Micro.blog, so that it\u2019s even easier to embed text from other blog posts and add your own thoughts. Markdown block quotes are fairly easy, but do require a little more copy/paste work and some editing.\n\nSo I was really interested in the recent launch of Quotebacks, from Tom Critchlow and Toby Shorin. We\u2019ve needed a kind of \u201cembed microblog post\u201d feature in Micro.blog, similar to the embedding that Twitter and Facebook have. Quotebacks are exactly that, but they work for anything on the web.\n\nI\u2019d like to run with Quotebacks and see where it leads us. For now, I\u2019ve added \u201cEmbed\u201d links on the Micro.blog Favorites page on the web. This is an experiment. It will likely change, either rolling out in some form to all the platforms, or based on feedback maybe we\u2019ll go in a different direction.\n\nI\u2019ve also forked the Quotebacks repository and tweaked the JavaScript with a couple changes:\n\nInstead of routing the favicons through Google\u2019s cache, Micro.blog\u2019s version just uses the profile photos on your account directly with a new data-avatar attribute.\nBecause copied microblog posts always have a profile photo, it is displayed larger with rounded corners.\nHow does this look? I\u2019m embedding a microblog post below using this feature:\n\nWe\u2019ve posted Core Int 424, talking with @danielpunkass about the ARM rumor, how it compares to previous transitions, WWDC, and more.\nManton Reecehttps://www.manton.org/2020/06/12/weve-posted-core.htmlI\u2019ve kept the \u201cEmbed\u201d links isolated to the Favorites page so we can try a few things without disrupting the rest of your Micro.blog workflow. There are other questions to answer, such as how this should integrate with sending Webmentions, but I think having something like this to play with is a good first step."
},
"published": "2020-06-15T11:07:16-05:00",
"category": [
"Essays",
"Podcasts"
],
"post-type": "article",
"_id": "12448440",
"_source": "12",
"_is_read": true
}
There are many reasons to delete your Facebook account, so let's start with the assumption you've already made the decision. Here are a few things to know before you press the big "Delete" button.
{
"type": "entry",
"published": "2020-06-14T15:09:50-07:00",
"summary": "There are many reasons to delete your Facebook account, so let's start with the assumption you've already made the decision. Here are a few things to know before you press the big \"Delete\" button.",
"url": "https://aaronparecki.com/2020/06/14/14/how-to-leave-facebook",
"category": [
"facebook",
"indieweb"
],
"name": "How to Leave Facebook",
"author": {
"type": "card",
"name": "Aaron Parecki",
"url": "https://aaronparecki.com/",
"photo": "https://aperture-media.p3k.io/aaronparecki.com/41061f9de825966faa22e9c42830e1d4a614a321213b4575b9488aa93f89817a.jpg"
},
"post-type": "article",
"_id": "12429054",
"_source": "16",
"_is_read": true
}
I’ve used a variety of approaches over the years, from manual to semi-automatic. Here are some different things I’ve done:
Initially I would publish a note, then use the interactive Bridgy Publish form from my account page. Your account page is https://brid.gy/twitter/isellsoap. Paste the URL of your note there, choose whether you want your original link appended to the tweet, then preview it. If it looks good, publish it. I would then copy the tweet’s URL and add it to my original note as a syndication link. See below on this note for an example of that syndication link.
After I did that for a while and it was working smoothly, I started to automate it more. Bridgy Publish lets you send a webmention to trigger the publish. I set up a custom bit of PHP code that would let me click a button to send off that webmention for the note I wanted to publish. Sending a webmention is a pretty simple POST request, so I used the WireHTTP class for that. When publishing to Twitter, the successful Bridgy response includes the Twitter API data for the tweet. I wrote some more code that processes that response to get the tweet’s URL and updates the syndication link on the note.
Note that all of this is separate from the Webmention plugin itself. The code for my semi-automatic publishing isn’t part of a plugin and isn’t very polished code, so I haven’t released any of it. If I can find a way to make it more user-friendly, I might release it, or at least write a tutorial with more guidance.
https://php.microformats.io is a useful tool to debug the microformats in your posts, by the way. Here’s the parsed result of this very note. The in-reply-to property is what Bridgy Publish uses to post a reply tweet. The syndication property is one way Bridgy maps your original post to the Twitter copy for sending responses back to you — particularly if you don’t include your original post link in the tweet.
{
"type": "entry",
"published": "2020-06-13 16:39-0700",
"summary": "I\u2019ve used a variety of approaches over the years, from manual to semi-automatic. Here\u2019s some different things I\u2019ve done:",
"url": "https://gregorlove.com/2020/06/ive-used-a-variety/",
"syndication": [
"https://twitter.com/gRegorLove/status/1271952103502741506"
],
"in-reply-to": [
"https://twitter.com/isellsoap/status/1271425693671399424"
],
"content": {
"text": "I\u2019ve used a variety of approaches over the years, from manual to semi-automatic. Here\u2019s some different things I\u2019ve done:\n\nInitially I would publish a note, then use the interactive Bridgy Publish form from my account page. Your account page is https://brid.gy/twitter/isellsoap. Paste the URL of your note there, choose the options whether you want your original link appended to the tweet, then preview it. If it looks good, publish it. I then would copy the tweet\u2019s URL and add it on my original note as a syndication link. See below on this note for an example of that syndication link.\n\nAfter I did that for a while and it was working smoothly, I started to automate it more. Bridgy Publish lets you send a webmention to trigger the publish. I set up a custom bit of PHP code that would let me click a button to send off that webmention for the note I wanted to publish. Sending a webmention is a pretty simple POST request, so I used the WireHTTP class for that. When publishing to Twitter, the successful Bridgy response includes the Twitter API data for the tweet. I wrote some more code that processes that response to get the tweet\u2019s URL and updates the syndication link on the note.\n\nNote that all of this is separate from the Webmention plugin itself. The code for my semi-automatic publishing isn\u2019t part of a plugin and isn\u2019t very polished code, so I haven\u2019t released any of it. If I can find a way to make it more user-friendly, I might release it, or at least write a tutorial with more guidance.\n\nhttps://php.microformats.io is a useful tool to debug the microformats in your posts, by the way. Here\u2019s the parsed result of this very note. The in-reply-to property is what Bridgy Publish uses to post a reply tweet. The syndication property is one way Bridgy maps your original post to the Twitter copy for sending responses back to you \u2014 particularly if you don\u2019t include your original post link in the tweet.",
"html": "<p class=\"p-summary\">I\u2019ve used a variety of approaches over the years, from manual to semi-automatic. Here\u2019s some different things I\u2019ve done:</p>\n\n<p>Initially I would publish a note, then use the interactive Bridgy Publish form from my <a href=\"https://brid.gy/twitter/gRegorLove\">account page</a>. Your account page is <a href=\"https://brid.gy/twitter/isellsoap\">https://brid.gy/twitter/isellsoap</a>. Paste the URL of your note there, choose the options whether you want your original link appended to the tweet, then preview it. If it looks good, publish it. I then would copy the tweet\u2019s URL and add it on my original note as a <a href=\"https://brid.gy/about#link\">syndication link</a>. See below on this note for an example of that syndication link.</p>\n\n<p>After I did that for a while and it was working smoothly, I started to automate it more. Bridgy Publish lets you <a href=\"https://brid.gy/about#webmentions\">send a webmention</a> to trigger the publish. I set up a custom bit of PHP code that would let me click a button to send off that webmention for the note I wanted to publish. Sending a webmention is a pretty simple POST request, so I used the WireHTTP class for that. When publishing to Twitter, the successful Bridgy response includes the Twitter API data for the tweet. I wrote some more code that processes that response to get the tweet\u2019s URL and updates the syndication link on the note.</p>\n\n<p>Note that all of this is separate from the Webmention plugin itself. The code for my semi-automatic publishing isn\u2019t part of a plugin and isn\u2019t very polished code, so I haven\u2019t released any of it. If I can find a way to make it more user-friendly, I might release it, or at least write a tutorial with more guidance.</p>\n\n<p><a href=\"https://php.microformats.io/\">https://php.microformats.io</a> is a useful tool to debug the microformats in your posts, by the way. 
Here\u2019s the <a href=\"https://php.microformats.io/?url=https://gregorlove.com/2020/06/ive-used-a-variety/\">parsed result of this very note</a>. The <code>in-reply-to</code> property is what Bridgy Publish uses to post a reply tweet. The <code>syndication</code> property is one way Bridgy maps your original post to the Twitter copy for sending responses back to you \u2014 particularly if you don\u2019t include your original post link in the tweet.</p>"
},
"post-type": "reply",
"refs": {
"https://twitter.com/isellsoap/status/1271425693671399424": {
"type": "entry",
"url": "https://twitter.com/isellsoap/status/1271425693671399424",
"name": "https://twitter.com/isellsoap/status/1271425693671399424",
"post-type": "article"
}
},
"_id": "12409718",
"_source": "95",
"_is_read": true
}
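The response-processing step described in the note above (read the tweet's URL out of the successful Bridgy Publish response and record it as a syndication link) could look roughly like this. It assumes the response body is JSON with a top-level "url" field for the created tweet; the function name and details are illustrative, not taken from the note's actual PHP code.

```python
import json

def add_syndication_from_bridgy(post, response_body):
    """Given a post record and a Bridgy Publish response body, append the
    tweet URL as a syndication link. Assumes the response JSON carries a
    top-level "url" field (an assumption, not confirmed by the note)."""
    data = json.loads(response_body)
    tweet_url = data.get("url")
    if tweet_url:
        post.setdefault("syndication", []).append(tweet_url)
    return post

post = {"url": "https://gregorlove.com/2020/06/ive-used-a-variety/"}
response = '{"url": "https://twitter.com/gRegorLove/status/1271952103502741506"}'
post = add_syndication_from_bridgy(post, response)
# post["syndication"] now holds the tweet URL
```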
This looks like a nifty tool for blogs:
Quotebacks is a tool that makes it easy to grab snippets of text from around the web and convert them into embeddable blockquote web components.
{
"type": "entry",
"published": "2020-06-13T12:24:25Z",
"url": "https://adactio.com/links/17003",
"category": [
"quotebacks",
"blockquotes",
"citations",
"quoting",
"blogs",
"blogging",
"citing",
"publishing",
"indieweb"
],
"bookmark-of": [
"https://quotebacks.net/"
],
"content": {
"text": "Quotebacks\n\n\n\nThis looks like a nifty tool for blogs:\n\n\n Quotebacks is a tool that makes it easy to grab snippets of text from around the web and convert them into embeddable blockquote web components.",
"html": "<h3>\n<a class=\"p-name u-bookmark-of\" href=\"https://quotebacks.net/\">\nQuotebacks\n</a>\n</h3>\n\n<p>This looks like a nifty tool for blogs:</p>\n\n<blockquote>\n <p>Quotebacks is a tool that makes it easy to grab snippets of text from around the web and convert them into embeddable blockquote web components.</p>\n</blockquote>"
},
"author": {
"type": "card",
"name": "Jeremy Keith",
"url": "https://adactio.com/",
"photo": "https://adactio.com/images/photo-150.jpg"
},
"post-type": "bookmark",
"_id": "12398245",
"_source": "2",
"_is_read": true
}
I am proposing a session for IndieWebCamp West Coast: “Keeping Track of Books and Reading Progress.”
I would like to discuss the use-cases and experiences of using our websites to:
- track books we want to read
- categorize (or “shelve”) books
- track reading progress
Most of my personal experience has been around tracking books I want to read. It is probably more accurate to classify those as want posts instead of read posts. I’d like to discuss the differences between these three types of posts and what they look like on our sites. Regarding categorizing books, we should also discuss Library JSON.
This session is broader than indiebookclub but will likely have an impact on it. indiebookclub creates posts with a status of to-read, reading, or finished. The first is probably a want post and the others seem to be reading progress posts.
{
"type": "entry",
"published": "2020-06-12 17:40-0700",
"url": "https://gregorlove.com/2020/06/i-am-proposing-a-session/",
"content": {
"text": "I am proposing a session for IndieWebCamp West Coast: \u201cKeeping Track of Books and Reading Progress.\u201d\n\nI would like to discuss the use-cases and experiences of using our websites to:\n\ntrack books we want to read\n\tcategorize (or \u201cshelve\u201d) books\n\ttrack reading progress\nMost of my personal experience has been around tracking books I want to read. It is probably more accurate to classify those as want posts instead of read posts. I\u2019d like to discuss the differences between these three types of posts and what they look like on our sites. Regarding categorizing books, we should also discuss Library JSON.\n\nThis session is broader than indiebookclub but will likely have an impact on it. indiebookclub creates posts with a status of to-read, reading, or finished. The first is probably a want post and the others seem to be reading progress posts.",
"html": "<p>I am proposing a session for <a href=\"https://indieweb.org/2020/West\">IndieWebCamp West Coast</a>: \u201cKeeping Track of Books and Reading Progress.\u201d</p>\n\n<p>I would like to discuss the use-cases and experiences of using our websites to:</p>\n\n<ol><li>track books we want to read</li>\n\t<li>categorize (or \u201cshelve\u201d) books</li>\n\t<li>track reading progress</li>\n</ol><p>Most of my personal experience has been around tracking books I want to read. It is probably more accurate to classify those as <b><a href=\"https://indieweb.org/want\">want</a></b> posts instead of <b><a href=\"https://indieweb.org/read\">read</a></b> posts. I\u2019d like to discuss the differences between these three types of posts and what they look like on our sites. Regarding categorizing books, we should also discuss <a href=\"https://tomcritchlow.com/2020/04/15/library-json/\">Library JSON</a>.</p>\n\n<p>This session is broader than <a href=\"https://indiebookclub.biz/\">indiebookclub</a> but will likely have an impact on it. indiebookclub creates posts with a status of to-read, reading, or finished. The first is probably a <i>want</i> post and the others seem to be <i>reading progress</i> posts.</p>"
},
"post-type": "note",
"_id": "12390210",
"_source": "95",
"_is_read": true
}
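The status-to-post-type mapping tentatively proposed in the session note above (to-read as a want post, reading and finished as reading-progress posts) could be expressed as a small lookup. The category names mirror the note's own tentative classification, not any established vocabulary.

```python
# Tentative mapping from indiebookclub's status values to the post
# types discussed in the session proposal above.
STATUS_TO_POST_TYPE = {
    "to-read": "want",
    "reading": "reading progress",
    "finished": "reading progress",
}

def classify_book_post(status):
    """Return the proposed post type for an indiebookclub status."""
    return STATUS_TO_POST_TYPE.get(status, "unknown")
```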
I noticed I wasn’t seeing your feed in Monocle. It looks like your jsonfeed doesn’t validate: https://validator.jsonfeed.org/?url=https%3A%2F%2Fpine.blog%2Fu%2Fsonicrocketman%2Ffeed.json
I subscribed to your microformats feed and that’s working smoothly!
{
"type": "entry",
"published": "2020-06-12 16:36-0700",
"url": "https://gregorlove.com/2020/06/i-noticed-i-wasnt-seeing/",
"in-reply-to": [
"https://pine.blog/u/sonicrocketman"
],
"content": {
"text": "I noticed I wasn\u2019t seeing your feed in Monocle. It looks like your jsonfeed doesn\u2019t validate: https://validator.jsonfeed.org/?url=https%3A%2F%2Fpine.blog%2Fu%2Fsonicrocketman%2Ffeed.json\n\nI subscribed to your microformats feed and that\u2019s working smoothly!",
"html": "<p>I noticed I wasn\u2019t seeing your feed in Monocle. It looks like your jsonfeed doesn\u2019t validate: <a href=\"https://validator.jsonfeed.org/?url=https%3A%2F%2Fpine.blog%2Fu%2Fsonicrocketman%2Ffeed.json\">https://validator.jsonfeed.org/?url=https%3A%2F%2Fpine.blog%2Fu%2Fsonicrocketman%2Ffeed.json</a></p>\n\n<p>I subscribed to your microformats feed and that\u2019s working smoothly!</p>"
},
"post-type": "reply",
"refs": {
"https://pine.blog/u/sonicrocketman": {
"type": "entry",
"url": "https://pine.blog/u/sonicrocketman",
"name": "https://pine.blog/u/sonicrocketman",
"post-type": "article"
}
},
"_id": "12390211",
"_source": "95",
"_is_read": true
}
Congratulations and kudos to Phil for twenty years of blogging!
Here he describes what it was like online in the year 2000. Yes, it was very different to today, but…
Anyone who thinks blogging died at some point in the past twenty years presumably just lost interest themselves, because there have always been plenty of blogs to read. Some slow down, some die, new ones appear. It’s as easy as it’s ever been to write and read blogs.
Though Phil does note:
Some of the posts I read were very personal in a way that’s less common now, in general. … Even “personal” websites (like mine) often have an awareness about them, about what’s being shared, the impression it gives to strangers, presenting a public face, maybe a feeling of, “I’m just writing personal nonsense but, why, yes, I am available for hire”.
Maybe that’s why I’m enjoying Robin’s writing so much.
{
"type": "entry",
"published": "2020-06-12T14:49:15Z",
"url": "https://adactio.com/links/17000",
"category": [
"indieweb",
"personal",
"publishing",
"blogs",
"blogging",
"2000",
"sxsw",
"online",
"sharing",
"honesty"
],
"bookmark-of": [
"https://www.gyford.com/phil/writing/2020/06/11/weblogs-2000/"
],
"content": {
"text": "What was it like? (Phil Gyford\u2019s website)\n\n\n\nCongratulations and kudos to Phil for twenty years of blogging!\n\nHere he describes what it was like online in the year 2000. Yes, it was very different to today, but\u2026\n\n\n Anyone who thinks blogging died at some point in the past twenty years presumably just lost interest themselves, because there have always been plenty of blogs to read. Some slow down, some die, new ones appear. It\u2019s as easy as it\u2019s ever been to write and read blogs.\n\n\nThough Phil does note:\n\n\n Some of the posts I read were very personal in a way that\u2019s less common now, in general. \u2026 Even \u201cpersonal\u201d websites (like mine) often have an awareness about them, about what\u2019s being shared, the impression it gives to strangers, presenting a public face, maybe a feeling of, \u201cI\u2019m just writing personal nonsense but, why, yes, I am available for hire\u201d.\n\n\nMaybe that\u2019s why I\u2019m enjoying Robin\u2019s writing so much.",
"html": "<h3>\n<a class=\"p-name u-bookmark-of\" href=\"https://www.gyford.com/phil/writing/2020/06/11/weblogs-2000/\">\nWhat was it like? (Phil Gyford\u2019s website)\n</a>\n</h3>\n\n<p>Congratulations and kudos to Phil for twenty years of blogging!</p>\n\n<p>Here he describes what it was like online in the year 2000. Yes, it was very different to today, but\u2026</p>\n\n<blockquote>\n <p>Anyone who thinks blogging died at some point in the past twenty years presumably just lost interest themselves, because there have always been plenty of blogs to read. Some slow down, some die, new ones appear. It\u2019s as easy as it\u2019s ever been to write and read blogs.</p>\n</blockquote>\n\n<p>Though Phil does note:</p>\n\n<blockquote>\n <p>Some of the posts I read were very personal in a way that\u2019s less common now, in general. \u2026 Even \u201cpersonal\u201d websites (like mine) often have an awareness about them, about what\u2019s being shared, the impression it gives to strangers, presenting a public face, maybe a feeling of, \u201cI\u2019m just writing personal nonsense but, why, yes, I am available for hire\u201d.</p>\n</blockquote>\n\n<p>Maybe that\u2019s why I\u2019m enjoying <a href=\"https://www.robinrendle.com/notes/between-the-third-and-fifth-apology\">Robin\u2019s writing</a> so much.</p>"
},
"author": {
"type": "card",
"name": "Jeremy Keith",
"url": "https://adactio.com/",
"photo": "https://adactio.com/images/photo-150.jpg"
},
"post-type": "bookmark",
"_id": "12374648",
"_source": "2",
"_is_read": true
}
When I log onto someone’s website I want them to tell me why they’re weird. Where’s the journal or scrapbook? Where’s your stamp collection? Or the works-in-progress, the failed attempts, the clunky unfinished things?
{
"type": "entry",
"published": "2020-06-12T14:46:58Z",
"url": "https://adactio.com/links/16999",
"category": [
"indieweb",
"personal",
"publishing",
"imperfection",
"honesty"
],
"bookmark-of": [
"https://www.robinrendle.com/notes/2d-websites.html"
],
"content": {
"text": "Robin Rendle\u2005\uff65\u20052D Websites\n\n\n\n\n When I log onto someone\u2019s website I want them to tell me why they\u2019re weird. Where\u2019s the journal or scrapbook? Where\u2019s your stamp collection? Or the works-in-progress, the failed attempts, the clunky unfinished things?",
"html": "<h3>\n<a class=\"p-name u-bookmark-of\" href=\"https://www.robinrendle.com/notes/2d-websites.html\">\nRobin Rendle\u2005\uff65\u20052D Websites\n</a>\n</h3>\n\n<blockquote>\n <p>When I log onto someone\u2019s website I want them to tell me why they\u2019re weird. Where\u2019s the journal or scrapbook? Where\u2019s your stamp collection? Or the works-in-progress, the failed attempts, the clunky unfinished things?</p>\n</blockquote>"
},
"author": {
"type": "card",
"name": "Jeremy Keith",
"url": "https://adactio.com/",
"photo": "https://adactio.com/images/photo-150.jpg"
},
"post-type": "bookmark",
"_id": "12374649",
"_source": "2",
"_is_read": true
}
Added the back end code to handle the act of sending relayed Webmentions. It’ll be handy for me when/if my site goes down and other sites are smart enough to cache endpoints used. I should add some of the endpoint caching stuff.
But most importantly, I have to implement the API that can list out Webmentions for things to use. From there, I want to build a set of custom components that can implement a comments section that’ll be connected to Lighthouse via the subscription system Phoenix has.
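The endpoint caching idea above can be sketched roughly like this. Per the Webmention spec the endpoint may also arrive in an HTTP `Link` header (which takes precedence); this sketch parses only the HTML side with the stdlib, plus a naive in-memory cache — an illustration, not the author's actual implementation:

```python
from html.parser import HTMLParser
from urllib.parse import urljoin

class _EndpointFinder(HTMLParser):
    """Find the first <link> or <a> whose rel contains 'webmention'."""
    def __init__(self):
        super().__init__()
        self.endpoint = None

    def handle_starttag(self, tag, attrs):
        if self.endpoint is not None or tag not in ("link", "a"):
            return
        attrs = dict(attrs)
        if "webmention" in (attrs.get("rel") or "").split() and attrs.get("href"):
            self.endpoint = attrs["href"]

_cache = {}  # page URL -> resolved endpoint (naive; real code needs expiry)

def discover_endpoint(page_url: str, html: str):
    # NOTE: a spec-complete client checks the HTTP Link header first;
    # this sketch only looks at the HTML body.
    if page_url in _cache:
        return _cache[page_url]
    finder = _EndpointFinder()
    finder.feed(html)
    endpoint = urljoin(page_url, finder.endpoint) if finder.endpoint else None
    _cache[page_url] = endpoint
    return endpoint
```

Relative endpoint URLs are resolved against the page URL, as the spec requires; the cache means a second lookup for the same page skips parsing entirely.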
{
"type": "entry",
"published": "2020-06-12T01:42:00.00000-07:00",
"url": "https://v2.jacky.wtf/post/f964ac50-d9bf-4352-9b47-76663efb026b",
"category": [
"lighthouse"
],
"content": {
"text": "Added the back end code to handle the act of sending relayed Webmentions. It\u2019ll be handy for me when/if my site goes down and other sites are smart enough to cache endpoints used. I should add some of the endpoint caching stuff.But most importantly, I have to implement the API that can list out Webmentions for things to use. From there, I want to build a set of custom components that can implement a comments section that\u2019ll be connected to Lighthouse via the subscription system Phoenix has.",
"html": "<p>Added the back end code to handle the act of sending relayed Webmentions. It\u2019ll be handy for me when/if my site goes down and other sites are smart enough to cache endpoints used. I should add some of the endpoint caching stuff.</p><p>But most importantly, I have to implement the API that can list out Webmentions for things to use. From there, I want to build a set of custom components that can implement a comments section that\u2019ll be connected to Lighthouse via the subscription system Phoenix has.</p>"
},
"author": {
"type": "card",
"name": "",
"url": "https://v2.jacky.wtf",
"photo": null
},
"post-type": "note",
"_id": "12367620",
"_source": "1886",
"_is_read": true
}
Definitely going to refactor how I handle Webmention feed rendering so I can use it as the basis in my other feeds. It’s so much easier to do it in HTML oddly enough.
{
"type": "entry",
"published": "2020-06-10T04:35:00.00000-07:00",
"url": "https://v2.jacky.wtf/post/4b4b888a-7e92-4af6-ad4b-ec20e2c8a164",
"content": {
"text": "Definitely going to refactor how I handle Webmention feed rendering so I can use it as the basis in my other feeds. It\u2019s so much easier to do it in HTML oddly enough.",
"html": "<p>Definitely going to refactor how I handle Webmention feed rendering so I can use it as the basis in my other feeds. It\u2019s so much easier to do it in HTML oddly enough.</p>"
},
"author": {
"type": "card",
"name": "",
"url": "https://v2.jacky.wtf",
"photo": null
},
"post-type": "note",
"_id": "12314068",
"_source": "1886",
"_is_read": true
}
Interesting points on some stuff we’re looking into in the IndieWeb when it comes to handling metadata.
{
"type": "entry",
"published": "2020-06-09T19:35:00.00000-07:00",
"url": "https://v2.jacky.wtf/post/954383dc-11ce-49f1-b03e-76d0f751f46a",
"bookmark-of": [
"https://marinintim.com/2020/your-data/"
],
"content": {
"text": "Interesting points on some stuff we\u2019re looking into the IndieWeb, when it comes to handling meta-data.",
"html": "<p>Interesting points on some stuff we\u2019re looking into the IndieWeb, when it comes to handling meta-data.</p>"
},
"author": {
"type": "card",
"name": "",
"url": "https://v2.jacky.wtf",
"photo": null
},
"post-type": "bookmark",
"refs": {
"https://marinintim.com/2020/your-data/": {
"type": "entry",
"url": "https://marinintim.com/2020/your-data/",
"content": {
"text": "On the weekend, publisher Pragmatic Programmers migrated to a new system, which is noticeably faster than the previous one. That's good. But the new version lacks the wish list.\nNow, I don't know if it's an artifact of migration and wish list is to be reinstated, or if it was a deliberate decision to drop the feature that probably isn't used by the majority of buyers. But it made me aware, that my \"data\" is way broader then I thought before.\nI've blogged about Indieweb movement at length the last year (in Russian), but even then I mostly thought about my data as data that I consciously create: photos, essays, lame jokes, et cetera. Turns out, my wish list was also useful to me, and I miss it. The same is true, say, about my YouTube watch history and Watch Later list, I regularly refer to it to find some weird video I watched a few days ago.\nI don't think that any decision in the chain of events that led to me missing my wish list was malicious, but such is the nature of complex systems, especially web services, that they produce unintended outcomes. That's okay, losing wish list is not a big deal.\nThis incident made me even more aware that the only data I'm guaranteed to be able to access is the data hosted under my control, either on my own disks, or on the disks of my hosting provider.\nWhat other data I'm not thinking of? That's hard to tell, because this data is produced reactively, as a side effect of using web services normally. Message archives in proprietary services (Telegram, FB, VK, and others), upvoted links to research later on Lobsters and others websites, the set of subscriptions.\nI also store my \"books to buy\" lists at Amazon.com as Wish Lists, which could also disappear at any moment, and in the Cart, which may get emptied. These lists act as my own bibliography of things I'm interested to learn more about, so they do have value on their own.\nI'm planning to migrate these lists to my web server as a simple HTML file. 
HTML files do not require maintenance and also have zero marginal costs.\nAs to PragProg wish list, I guess I'd have to buy every book they have, 'cause every book published by them that I've read was great.",
"html": "<p>On the weekend, publisher <a href=\"https://pragprog.com\">Pragmatic Programmers</a> migrated to a new system, which is noticeably faster than the previous one. That's good. But the new version lacks the wish list.</p>\n<p>Now, I don't know if it's an artifact of migration and wish list is to be reinstated, or if it was a deliberate decision to drop the feature that probably isn't used by the majority of buyers. But it made me aware, that my \"data\" is way broader then I thought before.</p>\n<p>I've blogged about <a href=\"https://indieweb.org\">Indieweb</a> movement at length <a href=\"https://marinintim.com/2019/indieweb\">the last year (in Russian)</a>, but even then I mostly thought about my data as data that I consciously create: photos, essays, lame jokes, et cetera. Turns out, my wish list was also useful to me, and I miss it. The same is true, say, about my YouTube watch history and Watch Later list, I regularly refer to it to find some weird video I watched a few days ago.</p>\n<p>I don't think that any decision in the chain of events that led to me missing my wish list was malicious, but such is the nature of complex systems, especially web services, that they produce unintended outcomes. That's okay, losing wish list is not a big deal.</p>\n<p>This incident made me even more aware that the only data I'm guaranteed to be able to access is the data hosted under my control, either on my own disks, or on the disks of my hosting provider.</p>\n<p>What other data I'm not thinking of? That's hard to tell, because this data is produced reactively, as a side effect of using web services normally. Message archives in proprietary services (Telegram, FB, VK, and others), upvoted links to research later on Lobsters and others websites, the set of subscriptions.</p>\n<p>I also store my \"books to buy\" lists at Amazon.com as Wish Lists, which could also disappear at any moment, and in the Cart, which may get emptied. 
These lists act as my own bibliography of things I'm interested to learn more about, so they do have value on their own.</p>\n<p>I'm planning to migrate these lists to my web server as a simple HTML file. HTML files do not require maintenance and also have zero marginal costs.</p>\n<p>As to PragProg wish list, I guess I'd have to buy every book they have, 'cause every book published by them that I've read was great.</p>"
},
"author": {
"type": "card",
"name": "",
"url": "https://v2.jacky.wtf/false",
"photo": null
},
"post-type": "note"
}
},
"_id": "12306126",
"_source": "1886",
"_is_read": true
}
Okay, fixed up my Webmentions (only the JSON feed). I need to fix the hfeed of Webmentions and see if I can use that to provide richer representation of it because JSON feed is a little janky.
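An h-feed of webmentions like the one described above is mostly just microformats2 class names on ordinary markup. A minimal sketch — the mention dict shape (`url`, `published`, `text`) is an assumption for illustration, not the author's data model:

```python
from html import escape

def render_hfeed(mentions):
    """Render received webmentions as a microformats2 h-feed of h-entry
    items. Each mention is assumed to be a dict with 'url', 'published',
    and 'text' keys -- an illustrative shape, not a real schema."""
    entries = []
    for m in mentions:
        entries.append(
            '<article class="h-entry">'
            f'<a class="u-url" href="{escape(m["url"])}">'
            f'<time class="dt-published">{escape(m["published"])}</time></a>'
            f'<div class="e-content">{escape(m["text"])}</div>'
            "</article>"
        )
    return '<div class="h-feed">' + "".join(entries) + "</div>"
```

Because the output is plain HTML with mf2 classes, the same markup doubles as both the human-readable page and the machine-parseable feed — which may be why rendering in HTML felt easier than maintaining a separate JSON feed.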
{
"type": "entry",
"published": "2020-06-09T15:03:00.00000-07:00",
"url": "https://v2.jacky.wtf/post/8b7f3ca5-fe9b-4436-8d19-b79daafbc732",
"content": {
"text": "Okay, fixed up my Webmentions (only the JSON feed). I need to fix the hfeed of Webmentions and see if I can use that to provide richer representation of it because JSON feed is a little janky.",
"html": "<p>Okay, fixed up my Webmentions (only the JSON feed). I need to fix the hfeed of Webmentions and see if I can use that to provide richer representation of it because JSON feed is a little janky.</p>"
},
"author": {
"type": "card",
"name": "",
"url": "https://v2.jacky.wtf",
"photo": null
},
"post-type": "note",
"_id": "12301017",
"_source": "1886",
"_is_read": true
}
{
"type": "entry",
"published": "2020-06-09T13:05:30.97684-07:00",
"url": "https://v2.jacky.wtf/post/1149bdd8-a3ac-43b8-b15e-94387bc136d7",
"category": [
"lwa"
],
"name": "An Idea on Following People (or Organizations or Whomever) in the IndieWeb for Lwa",
"content": {
"text": "After a bit of discussion in the IndieWeb channels (start here and end about here), I think I have an idea on how I want Lwa to handle \"people\". Tantek made a good point about optimizing the experience to follow people (or any sort of \"entity\" that isn't specifically a site) - that's a behavior people are familiar with and it's natural. This got me thinking about how I want this to look in my social reader.Setting up the EnvironmentThis requires a bit of contextual setup - where would someone find someone to \"follow\"? I'll optimize this to be in the case of someone who's already logged into Lwa (which assumes that they have a IndieWeb site) and are already following some people.A preview of a post in Lwa.From the screenshot above, I can think of the following ways to follow someone from this screenshot of a post in the feed for Lwa:Adding a button next to the post-type icon to follow the person.\nAdding a pop-over modal that'd provide a \"best-effort\" representation of the entity that made that post.\nProviding an intent to follow page that'll provide information on that user\nThe first two can be collapsed into a on-page experience that'll present some information about the content's author via a modal of some sorts. Ideally, that'll have information about them (name, photo and whatever else is semantically relevant). There, we can have a page that'll heads to the \"intent to follow\" page mentioned in the last step. This is where things get a bit interesting.An Intent to FollowThe intent to follow page will be vital. I haven't built one yet but it'll be something that'll combine the following sources of data:a \"best effort\" representation of the author\nconditionally presenting the post that brought us here\na way to view the multiple feeds presented by the author\nI'm thinking about making this page accessible by either providing the author URL itself or the contextual post to show. 
The latter will be something that helps people just finding content. The method of composing the author representation will be something of a collection of the following with (unspecified) weights:fetching rel=me information from the provided URL\nfetching the representative h-card of the URL\nopting to use Microformats over anything else when it's available (specifically, Microformats, Activity Streams and then whatever is left over)\nBuilding a list of the feeds for this user would be done the same way. One thing that'll help with optimization of these feeds is, in the event that they have a Microformats2 site, we can collapse syndicated posts into the site's post so the feed doesn't look too noisy. With these feeds collected, we can do some light metrics crunching about post frequency, the average interaction rate (if any) for posts in a feed. With that, we can give the user in question a choice of what feed that they'd like to follow (and into which channel). I don't have an immediate mock-up of that page but I can see it being a mix of Twitter's user profile pages and the now-defunct Twitter intent-to-follow pages. There's a case to be made to allow for a \"catch-all\" channel for people to use but I think I'll make that an opt-in function of Lwa; having such a thing tends to enable more lock-in.I'll make another post once I get around to the demo of this feature in Lwa. For now, I'm down to get feedback about such a flow!",
"html": "<p>After a bit of discussion in the IndieWeb channels (start <a href=\"https://chat.indieweb.org/meta/2020-06-09#t1591724374780000\">here</a> and end about <a href=\"https://chat.indieweb.org/meta/2020-06-09#t1591727624999000\">here</a>), I think I have an idea on how I want <a href=\"https://lwa.black.af\">Lwa</a> to handle \"people\". <a href=\"http://tantek.com\">Tantek</a> made a good point about optimizing the experience to follow people (or any sort of \"entity\" that isn't specifically a site) - that's a behavior people are familiar with and it's natural. This got me thinking about how I want this to look in my social reader.</p><h3>Setting up the Environment</h3><p>This requires a bit of contextual setup - where would someone find someone to \"follow\"? I'll optimize this to be in the case of someone who's already logged into Lwa (which assumes that they have a IndieWeb site) and are already following some people.</p><img src=\"https://v2.jacky.wtf/media/image/floating/Screenshot_20200609_120621.png?v=original\" alt=\"Screenshot_20200609_120621.png?v=original\" />A preview of a post in Lwa.<p>From the screenshot above, I can think of the following ways to follow someone from this screenshot of a post in the feed for Lwa:</p><ul><li>Adding a button next to the post-type icon to follow the person.</li>\n<li>Adding a pop-over modal that'd provide a \"best-effort\" representation of the entity that made that post.</li>\n<li>Providing an intent to follow page that'll provide information on that user</li>\n</ul><p>The first two can be collapsed into a on-page experience that'll present some information about the content's author via a modal of some sorts. Ideally, that'll have information about them (name, photo and whatever else is semantically relevant). There, we can have a page that'll heads to the \"intent to follow\" page mentioned in the last step. 
This is where things get a bit interesting.</p><h3>An Intent to Follow</h3><p>The intent to follow page will be vital. I haven't built one yet but it'll be something that'll combine the following sources of data:</p><ul><li>a \"best effort\" representation of the author</li>\n<li>conditionally presenting the post that brought us here</li>\n<li>a way to view the multiple feeds presented by the author</li>\n</ul><p>I'm thinking about making this page accessible by either providing the author URL itself or the contextual post to show. The latter will be something that helps people just finding content. The method of composing the author representation will be something of a collection of the following with (unspecified) weights:</p><ul><li>fetching rel=me information from the provided URL</li>\n<li>fetching the representative h-card of the URL</li>\n<li>opting to use Microformats over anything else when it's available (specifically, Microformats, Activity Streams and then whatever is left over)</li>\n</ul><p>Building a list of the feeds for this user would be done the same way. One thing that'll help with optimization of these feeds is, in the event that they have a Microformats2 site, we can collapse syndicated posts into the site's post so the feed doesn't look too noisy. With these feeds collected, we can do some light metrics crunching about post frequency, the average interaction rate (if any) for posts in a feed. With that, we can give the user in question a choice of what feed that they'd like to follow (and into which channel). I don't have an immediate mock-up of that page but I can see it being a mix of Twitter's user profile pages and the now-defunct Twitter intent-to-follow pages. There's a case to be made to allow for a \"catch-all\" channel for people to use but I think I'll make that an opt-in function of Lwa; having such a thing tends to enable more lock-in.</p><p>I'll make another post once I get around to the demo of this feature in Lwa. 
For now, I'm down to get feedback about such a flow!</p>"
},
"author": {
"type": "card",
"name": "",
"url": "https://v2.jacky.wtf",
"photo": null
},
"post-type": "article",
"_id": "12298628",
"_source": "1886",
"_is_read": true
}
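The article above proposes composing a "best effort" author representation from, among other sources, the profile's rel=me links and its representative h-card. A minimal stdlib sketch of just the rel=me step (a full reader would use a real microformats2 parser such as mf2py):

```python
from html.parser import HTMLParser
from urllib.parse import urljoin

class RelMeCollector(HTMLParser):
    """Collect rel=me links (<a> and <link> elements) from a profile page."""
    def __init__(self, base_url: str):
        super().__init__()
        self.base_url = base_url
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag not in ("a", "link"):
            return
        attrs = dict(attrs)
        if "me" in (attrs.get("rel") or "").split() and attrs.get("href"):
            self.links.append(urljoin(self.base_url, attrs["href"]))

def rel_me_links(base_url: str, html: str) -> list:
    collector = RelMeCollector(base_url)
    collector.feed(html)
    return collector.links
```

In the weighted scheme the article describes, these links would be one input alongside the representative h-card, with microformats data preferred when available.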
I’m also losing Webmentions left and right due to this hard failure.
{
"type": "entry",
"published": "2020-06-09T11:27:00.00000-07:00",
"url": "https://v2.jacky.wtf/post/3340084e-6531-42ad-b40d-58b1c4ec7546",
"content": {
"text": "I\u2019m also losing Webmentions left and right due to this hard failure.",
"html": "<p>I\u2019m also losing Webmentions left and right due to this hard failure.</p>"
},
"author": {
"type": "card",
"name": "",
"url": "https://v2.jacky.wtf",
"photo": null
},
"post-type": "note",
"_id": "12297326",
"_source": "1886",
"_is_read": true
}
I’ll be attending IndieWebCamp West later this month. It’s online-only, centered in the west coast timezone, but anyone around the world is welcome. Need to think about IndieWeb-related goals for Micro.blog that week.
{
"type": "entry",
"author": {
"name": "Manton Reece",
"url": "https://www.manton.org/",
"photo": "https://micro.blog/manton/avatar.jpg"
},
"url": "https://www.manton.org/2020/06/08/ill-be-attending.html",
"content": {
"html": "<p>I\u2019ll be attending <a href=\"https://events.indieweb.org/2020/06/indiewebcamp-west-2020-ZB8zoAAu6sdN\">IndieWebCamp West</a> later this month. It\u2019s online-only, centered in the west coast timezone, but anyone around the world is welcome. Need to think about IndieWeb-related goals for Micro.blog that week.</p>",
"text": "I\u2019ll be attending IndieWebCamp West later this month. It\u2019s online-only, centered in the west coast timezone, but anyone around the world is welcome. Need to think about IndieWeb-related goals for Micro.blog that week."
},
"published": "2020-06-08T11:36:44-05:00",
"post-type": "note",
"_id": "12266050",
"_source": "12",
"_is_read": true
}
Ugh, finally fixed my Webmentions and from my recordings, it’s even faster than before. This is so exciting.
{
"type": "entry",
"published": "2020-06-07T21:12:05.91246-07:00",
"url": "https://v2.jacky.wtf/post/ecb0ee9d-c6f4-40a0-92a1-3adea2c9c383",
"content": {
"text": "Ugh, finally fixed my Webmentions and from my recordings, it\u2019s even faster than before. This is so exciting.",
"html": "<p>Ugh, finally fixed my Webmentions and from my recordings, it\u2019s even faster than before. This is so exciting.</p>"
},
"author": {
"type": "card",
"name": "",
"url": "https://v2.jacky.wtf",
"photo": null
},
"post-type": "note",
"_id": "12253753",
"_source": "1886",
"_is_read": true
}
Personal website owners – what do you think about collecting all of the feeds you are producing in one way or the other on a /feeds page?
Sounds like a good idea! I’ll get on that.
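A /feeds page is just a human-visible list of the same feeds that `rel="alternate"` links expose to readers. A minimal sketch that renders one from a list of (title, url, mime-type) tuples — the feed entries and URL paths below are illustrative:

```python
from html import escape

def render_feeds_page(feeds):
    """Render a simple /feeds page: a visible list of every feed, plus
    rel="alternate" links in <head> so feed readers can discover them."""
    head = "".join(
        f'<link rel="alternate" type="{escape(mime)}" '
        f'title="{escape(title)}" href="{escape(url)}">'
        for title, url, mime in feeds
    )
    body = "".join(
        f'<li><a href="{escape(url)}">{escape(title)}</a> ({escape(mime)})</li>'
        for title, url, mime in feeds
    )
    return (f"<html><head><title>Feeds</title>{head}</head>"
            f"<body><h1>Feeds</h1><ul>{body}</ul></body></html>")

# Illustrative feed list -- paths and titles are assumptions, not marcus.io's.
feeds = [
    ("Articles (RSS)", "/feed.xml", "application/rss+xml"),
    ("Articles (JSON Feed)", "/feed.json", "application/feed+json"),
]
```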
{
"type": "entry",
"published": "2020-06-03T17:05:54Z",
"url": "https://adactio.com/links/16974",
"category": [
"rss",
"feeds",
"indieweb",
"blogging",
"publishing",
"syndication",
"writing",
"updates",
"urls",
"discovery"
],
"bookmark-of": [
"https://marcus.io/blog/making-rss-more-visible-again-with-slash-feeds"
],
"content": {
"text": "marcus.io \u00b7 Making RSS more visible again with a /feeds page\n\n\n\n\n Personal website owners \u2013 what do you think about collecting all of the feeds you are producing in one way or the other on a /feeds page?\n\n\nSounds like a good idea! I\u2019ll get on that.",
"html": "<h3>\n<a class=\"p-name u-bookmark-of\" href=\"https://marcus.io/blog/making-rss-more-visible-again-with-slash-feeds\">\nmarcus.io \u00b7 Making RSS more visible again with a /feeds page\n</a>\n</h3>\n\n<blockquote>\n <p>Personal website owners \u2013 what do you think about collecting all of the feeds you are producing in one way or the other on a <code>/feeds</code> page?</p>\n</blockquote>\n\n<p>Sounds like a good idea! I\u2019ll get on that.</p>"
},
"author": {
"type": "card",
"name": "Jeremy Keith",
"url": "https://adactio.com/",
"photo": "https://adactio.com/images/photo-150.jpg"
},
"post-type": "bookmark",
"_id": "12129814",
"_source": "2",
"_is_read": true
}
{
"type": "entry",
"published": "2020-06-01T20:42:00+01:00",
"url": "https://www.jvt.me/mf2/2020/06/fa1ve/",
"category": [
"web",
"indieweb",
"webmention",
"personal-website"
],
"bookmark-of": [
"https://petermolnar.net/article/less-features-cleaner-site/"
],
"author": {
"type": "card",
"name": "Jamie Tanna",
"url": "https://www.jvt.me",
"photo": "https://www.jvt.me/img/profile.png"
},
"post-type": "bookmark",
"_id": "12070097",
"_source": "2169",
"_is_read": true
}
{
"type": "entry",
"author": {
"name": null,
"url": "https://petermolnar.net/",
"photo": null
},
"url": "https://petermolnar.net/article/less-features-cleaner-site/",
"published": "2020-06-01T08:30:00+01:00",
"content": {
"html": "<p>A few weeks ago I sat down in front of my site and realized: it's doing too many things, the code is over 3500 lines of Python, and I feel lost when I look at it. It was an organic growth, and happened somewhat like this:</p>\n<p><em>Let's start simple: collect images, extract EXIF using exiftool<a href=\"https://petermolnar.net/#fn1\">1</a>, watermark them, if needed, resize them, if needed. Collect markdown files, convert them to HTML with pandoc<a href=\"https://petermolnar.net/#fn2\">2</a> using microformat friendly templates. Ah, wait I need categories. I also need pages. And feeds. Multiple feeds, because I'm not going to choose sides, RSS, Atom, JSON, hfeed. Let's make all of them! I'll even invent YAMLFeed<a href=\"https://petermolnar.net/#fn3\">3</a> for the lulz. I need webmentions. Receive them, create comments before rendering anything, then render, then sync, then send outgoing webmentions. Oh. Don't send them every single time, just on change. I need to publish to flickr, but I need to be able to backfill from brid.gy. Let's handle gone content properly, also redirects nicely. Let's try JavaScript based search. Or let's not, it's needs people to download the full index every time, let's do PHP from Python templates instead. Google zombies are doing JSON-LD, academics are doing Linked Data, let's do all that. Hell, let's make an intermediate representation of all my content in JSON-LD that is made from the Markdown files before it hit's the HTML templates! In the meanwhile, why not auto-save my posts to archive.org? But what if I already did it? Let's find the earliest version automagically! OK, this is a bit slow now, let's start using async stuff. Let's syndicate to fediverse via fed.brid.gy. I don't like my pagination logic, let's do some categories flat: all on one page; others paginated by year. I want to add something funky for IWC this year, I'll add a worldmap for photos with location data. 
I see federated things are pinging <code>.well-known</code> locations, let's generate data for them.</em></p>\n<p>I'm not certain if this is the whole list of features, but it's quite clear it has overgrown it's original purpose. In my defense, some of these functionalities were only meant to be learning experiences.</p>\n<h2>DRY - don't repeat yourself</h2>\n<p>I started with the most painful point. The previous iteration had a <code>source</code> directory for the content, with the unprocessed, original files, a <code>nasg</code> for the code, and a <code>www</code> for the generated output. What I should have done from the start it to have 1 and only 1 directory for everything.</p>\n<p>The main reasons for the original layout were to keep my original - quite large - images safe on my own computer, copy only the resized and potentially watermarked ones online. The other was to keep the code in it's own repository, so it can be \"Open Sourced\". <em>Why the quotes: because I've started to question what Open Source means to me and what it is right now in the world, but this is for another day.</em></p>\n<p>The more I complicated this the more I realized all these disconnected pieces are making the originally simple process more and more convoluted. So I made certain decisions.</p>\n<p><strong>My generator code is not going to live on Github any more. Instead, it'll be in the root folder of my site content, which will also be the root folder for the website. I'll generate everything in place.</strong> I'll move the original images to be hidden files and protect them via webserver rules, like I did in the WordPress times. I'll place the Python virtualenv in this directory as well.</p>\n<p>With the move to a single directory structure I also moved away from the weird path system I ended up with: direct uris for entries and /category/ prefixes for categories. 
Now everything always is /folder/subfolder/ etc, as it should have been from the start.</p>\n<p>It needed some rewrite magic to have it done properly, but it should all be fine now.</p>\n<h2>Parsing should be stiff and intolerant</h2>\n<p>When I saved markdown files by hand, I wasn't paying too much attention to, for example, dates. The Python library I used - arrow - parsed nearly everything. This also applied to the comments, but the comments were saved by my own code: missing or <code>null</code> authors, bad date formats, etc.</p>\n<p>With the refactoring I decided to ditch as many libraries as possible in favour of Python's built in ones, and <code>datetime</code> suddenly wasn't happy.</p>\n<p>I fixed all of them; some with scripts, others by hand. Than swapped to a very strict parsing: if stuff is malformed, fail hard. Make me have to fix it.</p>\n<p><strong>No workarounds in the code, no clever hundreds of lines of fallbacks; the source should be cleaned if there is an issue.</strong></p>\n<h2>Not everything needs templating</h2>\n<p>In order to have a nice search, I had templated PHP files. Truth is: it's not essential. Search is happy with a few lines of CSS and a \"back to petermolnar.net\" button.</p>\n<p>My fallback 404.php can now rely on looking up files itself. Previously I had <code>removeduri.del</code> and <code>some-old-uri.url</code> files. The first were empty files, with the deleted URIs in their names; the second contained the URL to redirect to. Because of the <code>content</code> and <code>www</code> directory setup, I had to parse these, collect them, and then insert in the PHP. 
But now I have the files accessible from the PHP itself, meaning it can look them up itself.</p>\n<p><strong>This way both my <code>404.php</code> and my <code>search.php</code> became self-sufficient - no more Python Jinja2 templates for PHP files.</strong></p>\n<h2>Semantic HTML5 is a joke, JSON-LD is a monster, and I have no need for either</h2>\n<p>Some elements in HTML5 are good, and were much needed. Personally I'm very happy with <code>figure</code> and <code>figcaption</code>, <code>details</code> and <code>summary</code>, and <code>time</code>.</p>\n<p>I find <code>header</code>, <code>footer</code>, and <code>nav</code> a bit useless, but nothing tops the <code>main</code>, <code>section</code>, <code>article</code> (and probably some other) mess. There's no definitive way of using one or the other, so everyone does what makes sense to them<a href=\"https://petermolnar.net/#fn4\">4</a> - which is the opposite of a standard. Try to figure out which definition goes for which (official definitions from the \"living\" HTML standard):</p>\n<blockquote>\n<p>The X element represents a generic section of a document or application. The X, in this context, is a thematic grouping of content, typically with a heading.</p>\n</blockquote>\n<blockquote>\n<p>The Y element represents a complete, or self-contained, composition in a document, page, application, or site and that is, in principle, independently distributable or reusable, e.g. in syndication.</p>\n</blockquote>\n<blockquote>\n<p>The Z element represents the dominant contents of the document.</p>\n</blockquote>\n<p>So I dropped most of it; especially because I have microformats<a href=\"https://petermolnar.net/#fn5\">5</a> v1 and v2 markup already, and that is an actual standard with obvious guidelines.</p>\n<p>Next ripe for reaping was JSON-LD. I got into the semantic web possibilities because I was curious. 
I learnt a lot, including the fact that I have no need for it.</p>\n<p>The enforced vocabulary for JSON-LD, schema.org, is terrible to use. Whenever you have a need for something that's not present already, you're done for, and it'll probably pollute the structured data results, because all the search engines, especially Google, are picky: they limit the options and they require properties. Examples: everything MUST have a photo! And an address! And a publisher! If you don't believe me, try to make a resume with schema.org, then see what the Google Structured Data Testing Tool thinks of it.</p>\n<p>No, Google. Not everything has an image - see <a href=\"http://textfiles.com/\">http://textfiles.com</a>. Like it or not, a website doesn't need an address. The list goes on forever.</p>\n<p><strong>I'm going to stop feeding it, stop feeding all of them, stop playing by their weird rules. HTML has <code>link</code> and <code>meta</code> elements, plus the <code>rel=</code> property, so it can already represent the minimum, which is enough. Plus, again, there's microformats, and Google is still very happy with them<a href=\"https://petermolnar.net/#fn6\">6</a>.</strong></p>\n<p>Note: with structured data, in theory, one could pull in other vocabularies to overcome problems like nonexistent properties in one, but search engines are not real RDF parsers. Unless you're writing for academic publishing tools that will do so, don't bother.</p>\n<h2>Pick your format, and pick just one</h2>\n<p>Between 2003 and 2007 some tragic mud-throwing (<em>mirror translated Hungarian phrase, just because it's pretty visual</em>) was going on on the web, over something ridiculously small: my XML is better than your XML! <a href=\"https://petermolnar.net/#fn7\">7</a>.</p>\n<p>When I first encountered the whole \"feed\" idea itself, there was only RSS, and for a very long time, I was happy with it. Then I read opinions of people I listen to on how Atom is better. 
<a href=\"https://fed.brid.gy/\">https://fed.brid.gy</a> is Atom only. Much later someone on the internet floated the JSONFeed idea.</p>\n<p><em>When I first saw JSONFeed, I thought it was a joke. Turned out it wasn't, because there are simpletons who honestly believe the world will be better if things are JSON and not XML. It won't; it'll only result in things like JSON-LD.</em></p>\n<p><em>In the heat of the moment, I coined the idea of YAMLFeed<a href=\"https://petermolnar.net/#fn8\">8</a>, strictly as a satire, but for a brief time I actually maintained a YAMLFeed file as well. Do not follow my example.</em></p>\n<p>And then I found myself serving them all. I had a <code>Category</code> class in Python that had <code>JSONFeed</code> and <code>XMLFeed</code> subclasses; the latter had <code>AtomFeed</code> and <code>RSSFeed</code> subclasses; it used <code>FeedParser</code> to deal with it, and so on... in short, I made a monster.</p>\n<p><strong>I went back to an RSS 2.0 feed and an h-feed.</strong> The first can be made with the <code>lxml</code> library directly, and I always liked the RSS acronym.</p>\n<h2>Closure</h2>\n<p>If you have a website in 2020, it's probably a hobby for you as well; don't let anything change that.</p>\n<p>It should never become a burden, any part of it. It did for me, and I seriously considered firing up something like Microsoft FrontPage 98 to start from the proverbial scratch, but managed to salvage it before resorting to drastic measures.</p>\n<p>Don't follow trends. Once a solution grows deep enough roots - microformats, RSS, etc - it'll be around for a very long time.</p>\n<p>Screw SEO. 
If you're like me, and you write for yourself, and, maybe, for the small web<a href=\"https://petermolnar.net/#fn9\">9</a>, don't bother trying to please an ever-changing power play.</p>\n<p>If you want to learn something new, be careful not to embed it too deep - it may be a fast fading idea.</p>\n\n\n<ol><li><p><a href=\"https://exiftool.org/\">https://exiftool.org/</a><a href=\"https://petermolnar.net/#fnref1\">\u21a9</a></p></li>\n<li><p><a href=\"https://pandoc.org/\">https://pandoc.org/</a><a href=\"https://petermolnar.net/#fnref2\">\u21a9</a></p></li>\n<li><p><a href=\"https://indieweb.org/YAMLFeed\">https://indieweb.org/YAMLFeed</a><a href=\"https://petermolnar.net/#fnref3\">\u21a9</a></p></li>\n<li><p><a href=\"https://www.w3schools.com/html/html5_semantic_elements.asp\">https://www.w3schools.com/html/html5_semantic_elements.asp</a><a href=\"https://petermolnar.net/#fnref4\">\u21a9</a></p></li>\n<li><p><a href=\"http://microformats.org/\">http://microformats.org/</a><a href=\"https://petermolnar.net/#fnref5\">\u21a9</a></p></li>\n<li><p><a href=\"https://aaronparecki.com/2016/12/17/8/owning-my-reviews#historical-recommendations\">https://aaronparecki.com/2016/12/17/8/owning-my-reviews#historical-recommendations</a><a href=\"https://petermolnar.net/#fnref6\">\u21a9</a></p></li>\n<li><p><a href=\"https://indieweb.org/RSS_Atom_wars\">https://indieweb.org/RSS_Atom_wars</a><a href=\"https://petermolnar.net/#fnref7\">\u21a9</a></p></li>\n<li><p><a href=\"https://indieweb.org/YAMLFeed\">https://indieweb.org/YAMLFeed</a><a href=\"https://petermolnar.net/#fnref8\">\u21a9</a></p></li>\n<li><p><a href=\"https://neustadt.fr/essays/the-small-web/\">https://neustadt.fr/essays/the-small-web/</a><a href=\"https://petermolnar.net/#fnref9\">\u21a9</a></p></li>\n</ol>",
"text": "A few weeks ago I sat down in front of my site and realized: it's doing too many things, the code is over 3500 lines of Python, and I feel lost when I look at it. It was an organic growth, and happened somewhat like this:\nLet's start simple: collect images, extract EXIF using exiftool1, watermark them, if needed, resize them, if needed. Collect markdown files, convert them to HTML with pandoc2 using microformat-friendly templates. Ah, wait, I need categories. I also need pages. And feeds. Multiple feeds, because I'm not going to choose sides, RSS, Atom, JSON, hfeed. Let's make all of them! I'll even invent YAMLFeed3 for the lulz. I need webmentions. Receive them, create comments before rendering anything, then render, then sync, then send outgoing webmentions. Oh. Don't send them every single time, just on change. I need to publish to flickr, but I need to be able to backfill from brid.gy. Let's handle gone content properly, and redirects nicely. Let's try JavaScript-based search. Or let's not, it needs people to download the full index every time, let's do PHP from Python templates instead. Google zombies are doing JSON-LD, academics are doing Linked Data, let's do all that. Hell, let's make an intermediate representation of all my content in JSON-LD that is made from the Markdown files before it hits the HTML templates! In the meanwhile, why not auto-save my posts to archive.org? But what if I already did it? Let's find the earliest version automagically! OK, this is a bit slow now, let's start using async stuff. Let's syndicate to the fediverse via fed.brid.gy. I don't like my pagination logic, let's do some categories flat: all on one page; others paginated by year. I want to add something funky for IWC this year, I'll add a worldmap for photos with location data. 
I see federated things are pinging .well-known locations, let's generate data for them.\nI'm not certain if this is the whole list of features, but it's quite clear it has outgrown its original purpose. In my defense, some of these functionalities were only meant to be learning experiences.\nDRY - don't repeat yourself\nI started with the most painful point. The previous iteration had a source directory for the content, with the unprocessed, original files, a nasg for the code, and a www for the generated output. What I should have done from the start is to have one and only one directory for everything.\nThe main reasons for the original layout were to keep my original - quite large - images safe on my own computer, and copy only the resized and potentially watermarked ones online. The other was to keep the code in its own repository, so it can be \"Open Sourced\". Why the quotes: because I've started to question what Open Source means to me and what it is right now in the world, but this is for another day.\nThe more I complicated this, the more I realized all these disconnected pieces are making the originally simple process more and more convoluted. So I made certain decisions.\nMy generator code is not going to live on Github any more. Instead, it'll be in the root folder of my site content, which will also be the root folder for the website. I'll generate everything in place. I'll move the original images to be hidden files and protect them via webserver rules, like I did in the WordPress times. I'll place the Python virtualenv in this directory as well.\nWith the move to a single directory structure I also moved away from the weird path system I ended up with: direct URIs for entries and /category/ prefixes for categories. 
Now everything is always /folder/subfolder/ etc., as it should have been from the start.\nIt needed some rewrite magic to have it done properly, but it should all be fine now.\nParsing should be stiff and intolerant\nWhen I saved markdown files by hand, I wasn't paying too much attention to, for example, dates. The Python library I used - arrow - parsed nearly everything. This also applied to the comments, but the comments were saved by my own code: missing or null authors, bad date formats, etc.\nWith the refactoring I decided to ditch as many libraries as possible in favour of Python's built-in ones, and datetime suddenly wasn't happy.\nI fixed all of them; some with scripts, others by hand. Then I swapped to very strict parsing: if stuff is malformed, fail hard. Make me have to fix it.\nNo workarounds in the code, no clever hundreds of lines of fallbacks; the source should be cleaned if there is an issue.\nNot everything needs templating\nIn order to have a nice search, I had templated PHP files. Truth is: it's not essential. Search is happy with a few lines of CSS and a \"back to petermolnar.net\" button.\nMy fallback 404.php can now rely on looking up files itself. Previously I had removeduri.del and some-old-uri.url files. The first were empty files, with the deleted URIs in their names; the second contained the URL to redirect to. Because of the content and www directory setup, I had to parse these, collect them, and then insert them into the PHP. But now I have the files accessible from the PHP itself, meaning it can look them up itself.\nThis way both my 404.php and my search.php became self-sufficient - no more Python Jinja2 templates for PHP files.\nSemantic HTML5 is a joke, JSON-LD is a monster, and I have no need for either\nSome elements in HTML5 are good, and were much needed. 
Personally I'm very happy with figure and figcaption, details and summary, and time.\nI find header, footer, and nav a bit useless, but nothing tops the main, section, article (and probably some other) mess. There's no definitive way of using one or the other, so everyone does what makes sense to them4 - which is the opposite of a standard. Try to figure out which definition goes for which (official definitions from the \"living\" HTML standard):\n\nThe X element represents a generic section of a document or application. The X, in this context, is a thematic grouping of content, typically with a heading.\n\n\nThe Y element represents a complete, or self-contained, composition in a document, page, application, or site and that is, in principle, independently distributable or reusable, e.g. in syndication.\n\n\nThe Z element represents the dominant contents of the document.\n\nSo I dropped most of it; especially because I have microformats5 v1 and v2 markup already, and that is an actual standard with obvious guidelines.\nNext ripe for reaping was JSON-LD. I got into the semantic web possibilities because I was curious. I learnt a lot, including the fact that I have no need for it.\nThe enforced vocabulary for JSON-LD, schema.org, is terrible to use. Whenever you have a need for something that's not present already, you're done for, and it'll probably pollute the structured data results, because all the search engines, especially Google, are picky: they limit the options and they require properties. Examples: everything MUST have a photo! And an address! And a publisher! If you don't believe me, try to make a resume with schema.org, then see what the Google Structured Data Testing Tool thinks of it.\nNo, Google. Not everything has an image - see http://textfiles.com. Like it or not, a website doesn't need an address. The list goes on forever.\nI'm going to stop feeding it, stop feeding all of them, stop playing by their weird rules. 
HTML has link and meta elements, plus the rel= property, so it can already represent the minimum, which is enough. Plus, again, there's microformats, and Google is still very happy with them6.\nNote: with structured data, in theory, one could pull in other vocabularies to overcome problems like nonexistent properties in one, but search engines are not real RDF parsers. Unless you're writing for academic publishing tools that will do so, don't bother.\nPick your format, and pick just one\nBetween 2003 and 2007 some tragic mud-throwing (mirror translated Hungarian phrase, just because it's pretty visual) was going on on the web, over something ridiculously small: my XML is better than your XML! 7.\nWhen I first encountered the whole \"feed\" idea itself, there was only RSS, and for a very long time, I was happy with it. Then I read opinions of people I listen to on how Atom is better. https://fed.brid.gy is Atom only. Much later someone on the internet floated the JSONFeed idea.\nWhen I first saw JSONFeed, I thought it was a joke. Turned out it wasn't, because there are simpletons who honestly believe the world will be better if things are JSON and not XML. It won't; it'll only result in things like JSON-LD.\nIn the heat of the moment, I coined the idea of YAMLFeed8, strictly as a satire, but for a brief time I actually maintained a YAMLFeed file as well. Do not follow my example.\nAnd then I found myself serving them all. I had a Category class in Python that had JSONFeed and XMLFeed subclasses; the latter had AtomFeed and RSSFeed subclasses; it used FeedParser to deal with it, and so on... in short, I made a monster.\nI went back to an RSS 2.0 feed and an h-feed. The first can be made with the lxml library directly, and I always liked the RSS acronym.\nClosure\nIf you have a website in 2020, it's probably a hobby for you as well; don't let anything change that.\nIt should never become a burden, any part of it. 
It did for me, and I seriously considered firing up something like Microsoft FrontPage 98 to start from the proverbial scratch, but managed to salvage it before resorting to drastic measures.\nDon't follow trends. Once a solution grows deep enough roots - microformats, RSS, etc - it'll be around for a very long time.\nScrew SEO. If you're like me, and you write for yourself, and, maybe, for the small web9, don't bother trying to please an ever-changing power play.\nIf you want to learn something new, be careful not to embed it too deep - it may be a fast fading idea.\n\n\nhttps://exiftool.org/\u21a9\nhttps://pandoc.org/\u21a9\nhttps://indieweb.org/YAMLFeed\u21a9\nhttps://www.w3schools.com/html/html5_semantic_elements.asp\u21a9\nhttp://microformats.org/\u21a9\nhttps://aaronparecki.com/2016/12/17/8/owning-my-reviews#historical-recommendations\u21a9\nhttps://indieweb.org/RSS_Atom_wars\u21a9\nhttps://indieweb.org/YAMLFeed\u21a9\nhttps://neustadt.fr/essays/the-small-web/\u21a9"
},
"name": "Refactoring my static generator",
"post-type": "article",
"_id": "12055234",
"_source": "268",
"_is_read": true
}