{
"type": "entry",
"published": "2019-09-28T13:03:00+0200",
"url": "https://www.jvt.me/mf2/ca21ff57-0265-4f3a-ad17-6f61e4d5f42c/",
"content": {
"text": "Interesting start to the morning at IndieWebCamp Amsterdam - we've spoken about accessibility of the Web and IndieWeb, and about how private posts and privacy should work"
},
"author": {
"type": "card",
"name": "Jamie Tanna",
"url": "https://www.jvt.me",
"photo": "https://aperture-proxy.p3k.io/f4cac242182744deb91a5ee91d7528d78e657269/68747470733a2f2f7777772e6a76742e6d652f696d672f70726f66696c652e706e67"
},
"post-type": "note",
"_id": "5448469",
"_source": "2169",
"_is_read": true
}
{
"type": "entry",
"published": "2019-09-28T10:45:00+0200",
"url": "https://www.jvt.me/mf2/d9dd659f-780d-4445-a4b6-8566daf486e2/",
"content": {
"text": "I'm really enjoying the intros at IndieWebCamp Amsterdam. It's nice to see the range of websites, the technologies in use, and that some folks are posting while they're talking while others haven't touched their sites in years. It's an exciting chance to get reinvigorated!"
},
"author": {
"type": "card",
"name": "Jamie Tanna",
"url": "https://www.jvt.me",
"photo": "https://aperture-proxy.p3k.io/f4cac242182744deb91a5ee91d7528d78e657269/68747470733a2f2f7777772e6a76742e6d652f696d672f70726f66696c652e706e67"
},
"post-type": "note",
"_id": "5447955",
"_source": "2169",
"_is_read": true
}
{
"type": "entry",
"published": "2019-09-28T10:13:55+02:00",
"url": "https://aaronparecki.com/2019/09/28/13/iwc",
"category": [
"IndieWeb",
"indieweb"
],
"photo": [
"https://aperture-media.p3k.io/aaronparecki.com/9994370d53863af926edb3b153f5c34502c4a995f45c7665fc967891783292b8.jpg"
],
"syndication": [
"https://twitter.com/aaronpk/status/1177858974701379584"
],
"content": {
"text": "Kicking off @IndieWebCamp Amsterdam with @ton_zylstra giving an intro to what is the #IndieWeb!",
"html": "Kicking off <a href=\"https://indieweb.org/IndieWebCamps\">@IndieWebCamp</a> Amsterdam with <a href=\"https://twitter.com/ton_zylstra\">@ton_zylstra</a> giving an intro to what is the <a href=\"https://aaronparecki.com/tag/indieweb\">#<span class=\"p-category\">IndieWeb</span></a>!"
},
"author": {
"type": "card",
"name": "Aaron Parecki",
"url": "https://aaronparecki.com/",
"photo": "https://aperture-media.p3k.io/aaronparecki.com/41061f9de825966faa22e9c42830e1d4a614a321213b4575b9488aa93f89817a.jpg"
},
"post-type": "photo",
"_id": "5447315",
"_source": "16",
"_is_read": true
}
{
"type": "entry",
"published": "2019-09-27T20:43:00+0200",
"url": "https://www.jvt.me/mf2/9a3fc553-b278-4262-877b-774a91f58edf/",
"content": {
"text": "En route to my first IndieWebCamp (Amsterdam) after a great couple of days at DevOpsDays London. I'm really looking forward to meeting some folks and talking about owning more of my little corner of the Web, and meeting the faces behind the websites I frequent!"
},
"author": {
"type": "card",
"name": "Jamie Tanna",
"url": "https://www.jvt.me",
"photo": "https://aperture-proxy.p3k.io/f4cac242182744deb91a5ee91d7528d78e657269/68747470733a2f2f7777772e6a76742e6d652f696d672f70726f66696c652e706e67"
},
"post-type": "note",
"_id": "5440809",
"_source": "2169",
"_is_read": true
}
{
"type": "entry",
"author": {
"name": "Neil Mather",
"url": "https://doubleloop.net/",
"photo": null
},
"url": "https://doubleloop.net/2019/09/26/6122/",
"published": "2019-09-26T22:26:00+00:00",
"content": {
"html": "#IndieWebCamp Amsterdam this weekend! \n<p>Sat 28/9 unconference sessions on all things <a href=\"https://doubleloop.net/tag/indieweb/\">#IndieWeb</a><br />Sun 29/9 building your own next step on the IndieWeb. </p>\n<p>Register and get more info at <a href=\"https://indieweb.org/2019/Amsterdam%E2%80%8B\">indieweb.org/2019/Amsterdam\u200b</a></p>",
"text": "#IndieWebCamp Amsterdam this weekend! \nSat 28/9 unconference sessions on all things #IndieWeb\nSun 29/9 building your own next step on the IndieWeb. \nRegister and get more info at indieweb.org/2019/Amsterdam\u200b"
},
"post-type": "note",
"_id": "5428163",
"_source": "1895",
"_is_read": true
}
{
"type": "entry",
"published": "2019-09-25T14:16:21-05:00",
"url": "https://aaronparecki.com/2019/09/25/23/ams",
"category": [
"indiewebcamp"
],
"syndication": [
"https://twitter.com/aaronpk/status/1176938513071104000"
],
"content": {
"text": "On my way to Amsterdam for @IndieWebCamp and @ViewSourceConf! \n\nThis 8 hour flight seems like nothing in comparison to last week's 15 hours to Australia!",
"html": "On my way to Amsterdam for <a href=\"https://indieweb.org/IndieWebCamps\">@IndieWebCamp</a> and <a href=\"https://twitter.com/ViewSourceConf\">@ViewSourceConf</a>! <br /><br />This 8 hour flight seems like nothing in comparison to last week's 15 hours to Australia!"
},
"author": {
"type": "card",
"name": "Aaron Parecki",
"url": "https://aaronparecki.com/",
"photo": "https://aperture-media.p3k.io/aaronparecki.com/41061f9de825966faa22e9c42830e1d4a614a321213b4575b9488aa93f89817a.jpg"
},
"post-type": "note",
"_id": "5410431",
"_source": "16",
"_is_read": true
}
{
"type": "entry",
"published": "2019-09-22T14:16:14+00:00",
"url": "https://fireburn.ru/posts/1569150974",
"content": {
"text": "I have written a post so long my Micropub endpoint rejects it for no reason in particular."
},
"author": {
"type": "card",
"name": "Vika",
"url": "https://fireburn.ru/",
"photo": "https://aperture-proxy.p3k.io/53d3494aa1644e34c961228a4c1dd9a91d9ff775/68747470733a2f2f61766174617273312e67697468756275736572636f6e74656e742e636f6d2f752f373935333136333f733d34363026763d34"
},
"post-type": "note",
"_id": "5363073",
"_source": "1371",
"_is_read": true
}
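The note above jokes about a Micropub endpoint rejecting a long post. For context, a Micropub create request (per the W3C Micropub spec) is just a form-encoded POST carrying `h=entry`, the post text in `content`, and an OAuth 2.0 Bearer token. The sketch below builds such a request without sending it; `MICROPUB_ENDPOINT`, `ACCESS_TOKEN`, and the `buildNoteRequest` helper are hypothetical placeholders, not code from any of the sites in this feed.

```javascript
// Hypothetical sketch of a Micropub "create note" request, per the W3C
// Micropub spec: a form-encoded POST with h=entry, the note text in
// `content`, and a Bearer token. The endpoint and token are placeholders.
const MICROPUB_ENDPOINT = "https://example.com/micropub";
const ACCESS_TOKEN = "xxxx";

function buildNoteRequest(text) {
  // Build the fetch() options object without sending anything.
  return {
    method: "POST",
    headers: {
      "Authorization": `Bearer ${ACCESS_TOKEN}`,
      "Content-Type": "application/x-www-form-urlencoded",
    },
    // URLSearchParams does the form encoding (spaces become "+", etc.)
    body: new URLSearchParams({ h: "entry", content: text }).toString(),
  };
}

console.log(buildNoteRequest("Hello from my own site!").body);
// h=entry&content=Hello+from+my+own+site%21
```

Sending it would be `fetch(MICROPUB_ENDPOINT, buildNoteRequest(...))`; per the spec, a successful create responds with `201 Created` and a `Location` header pointing at the new post's URL.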
{
"type": "entry",
"published": "2019-09-21T15:36:14Z",
"url": "https://adactio.com/journal/15844",
"category": [
"goingoffline",
"serviceworkers",
"cache",
"caching",
"microformats",
"hentry",
"indieweb",
"javascript",
"code",
"async",
"await",
"frontend",
"development"
],
"syndication": [
"https://medium.com/@adactio/19f3f0125cbe"
],
"name": "Going offline with microformats",
"content": {
"text": "For the offline page on my website, I\u2019ve been using a mixture of the Cache API and the localStorage API. My service worker script uses the Cache API to store copies of pages for offline retrieval. But I used the localStorage API to store metadata about the page\u2014title, description, and so on. Then, my offline page would rifle through the pages stored in a cache, and retrieve the corresponding metadata from localStorage.\n\nIt all worked fine, but as soon as I read Remy\u2019s post about the forehead-slappingly brilliant technique he\u2019s using, I knew I\u2019d be switching my code over. Instead of using localStorage\u2014or any other browser API\u2014to store and retrieve metadata, he uses the pages themselves! Using the Cache API, you can examine the contents of the pages you\u2019ve stored, and get at whatever information you need:\n\n\n I realised I didn\u2019t need to store anything. HTML is the API.\n\n\nRefactoring the code for my offline page felt good for a couple of reasons. First of all, I was able to remove a dependency\u2014localStorage\u2014and simplify the JavaScript. That always feels good. But the other reason for the warm fuzzies is that I was able to use data instead of metadata.\n\nMany years ago, Cory Doctorow wrote a piece called Metacrap. In it, he enumerates the many issues with metadata\u2014data about data. The source of many problems is when the metadata is stored separately from the data it describes. The data may get updated, without a corresponding update happening to the metadata. Metadata tends to rot because it\u2019s invisible\u2014out of sight and out of mind.\n\nIn fact, that\u2019s always been at the heart of one of the core principles behind microformats. 
Instead of duplicating information\u2014once as data and again as metadata\u2014repurpose the visible data; mark it up so its meta-information is directly attached to the information itself.\n\nSo if you have a person\u2019s contact details on a web page, rather than repeating that information somewhere else\u2014in the head of the document, say\u2014you could instead attach some kind of marker to indicate which bits of the visible information are contact details. In the case of microformats, that\u2019s done with class attributes. You can mark up a page that already has your contact information with classes from the h-card microformat.\n\nHere on my website, I\u2019ve marked up my blog posts, articles, and links using the h-entry microformat. These classes explicitly mark up the content to say \u201cthis is the title\u201d, \u201cthis is the content\u201d, and so on. This makes it easier for other people to repurpose my content. If, for example, I reply to a post on someone else\u2019s website, and ping them with a webmention, they can retrieve my post and know which bit is the title, which bit is the content, and so on.\n\nWhen I read Remy\u2019s post about using the Cache API to retrieve information directly from cached pages, I knew I wouldn\u2019t have to do much work. Because all of my posts are already marked up with h-entry classes, I could use those hooks to create a nice offline page.\n\nThe markup for my offline page looks like this:\n\n<h1>Offline</h1>\n<p>Sorry.\u00a0It\u00a0looks\u00a0like\u00a0the\u00a0network\u00a0connection\u00a0isn\u2019t\u00a0working\u00a0right\u00a0now.</p>\n<div\u00a0id=\"history\">\n</div>\n\n\nI\u2019ll populate that \u201chistory\u201d div with information from a cache called \u201cpages\u201d that I\u2019ve created using the Cache API in my service worker.\n\nI\u2019m going to use async/await to do this because there are lots of steps that rely on the completion of the step before. 
\u201cOpen this cache, then get the keys of that cache, then loop through the pages, then\u2026\u201d All of those thens would lead to some serious indentation without async/await.\n\nAll async functions have to have a name\u2014no anonymous async functions allowed. I\u2019m calling this one listPages, just like Remy is doing. I\u2019m making the listPages function execute immediately:\n\n\n(async\u00a0function\u00a0listPages()\u00a0{\n...\n})();\n\n\nNow for the code to go inside that immediately-invoked function.\n\nI create an array called browsingHistory that I\u2019ll populate with the data I\u2019ll use for that \u201chistory\u201d div.\n\n\nconst browsingHistory = [];\n\n\nI\u2019m going to be parsing web pages later on, so I\u2019m going to need a DOM parser. I give it the imaginative name of \u2026parser.\n\n\nconst parser = new DOMParser();\n\n\nTime to open up my \u201cpages\u201d cache. This is the first await statement. When the cache is opened, this promise will resolve and I\u2019ll have access to this cache using the variable \u2026cache (again with the imaginative naming).\n\n\nconst cache = await caches.open('pages');\n\n\nNow I get the keys of the cache\u2014that\u2019s a list of all the page requests in there. This is the second await. Once the keys have been retrieved, I\u2019ll have a variable that\u2019s got a list of all those pages. You\u2019ll never guess what I\u2019m calling the variable that stores the keys of the cache. That\u2019s right \u2026keys!\n\n\nconst keys = await cache.keys();\n\n\nTime to get looping. I\u2019m getting each request in the list of keys using a for/of loop:\n\n\nfor (const request of keys) {\n..\n}\n\n\nInside the loop, I pull the page out of the cache using the match() method of the Cache API. I\u2019ll store what I get back in a variable called response. 
As with everything involving the Cache API, this is asynchronous so I need to use the await keyword here.\n\n\nconst response = await cache.match(request);\n\n\nI\u2019m not interested in the headers of the response. I\u2019m specifically looking for the HTML itself. I can get at that using the text() method. Again, it\u2019s asynchronous and I want this promise to resolve before doing anything else, so I use the await keyword. When the promise resolves, I\u2019ll have a variable called html that contains the body of the response.\n\n\nconst html = await response.text();\n\n\nNow I can use that DOM parser I created earlier. I\u2019ve got a string of text in the html variable. I can generate a Document Object Model from that string using the parseFromString() method. This isn\u2019t asynchronous so there\u2019s no need for the await keyword.\n\n\nconst dom = parser.parseFromString(html, 'text/html');\n\n\nNow I\u2019ve got a DOM, which I have creatively stored in a variable called \u2026dom.\n\nI can poke at it using DOM methods like querySelector. I can test to see if this particular page has an h-entry on it by looking for an element with a class attribute containing the value \u201ch-entry\u201d:\n\n\nif (dom.querySelector('.h-entry h1.p-name')) {\n...\n}\n\n\nIn this particular case, I\u2019m also checking to see if the h1 element of the page is the title of the h-entry. That\u2019s so that index pages (like my home page) won\u2019t get past this if statement.\n\nInside the if statement, I\u2019m going to store the data I retrieve from the DOM. I\u2019ll save the data into an object called \u2026data!\n\n\nconst data = new Object;\n\n\nWell, the first piece of data isn\u2019t actually in the markup: it\u2019s the URL of the page. I can get that from the request variable in my for loop.\n\n\ndata.url = request.url;\n\n\nI\u2019m going to store the timestamp for this h-entry. 
I can get that from the datetime attribute of the time element marked up with a class of dt-published.\n\n\ndata.timestamp = new Date(dom.querySelector('.h-entry .dt-published').getAttribute('datetime'));\n\n\nWhile I\u2019m at it, I\u2019m going to grab the human-readable date from the innerText property of that same time.dt-published element.\n\n\ndata.published = dom.querySelector('.h-entry .dt-published').innerText;\n\n\nThe title of the h-entry is in the innerText of the element with a class of p-name.\n\n\ndata.title = dom.querySelector('.h-entry .p-name').innerText;\n\n\nAt this point, I am actually going to use some metacrap instead of the visible h-entry content. I don\u2019t output a description of the post anywhere in the body of the page, but I do put it in the head in a meta element. I\u2019ll grab that now.\n\n\ndata.description = dom.querySelector('meta[name=\"description\"]').getAttribute('content');\n\n\nAlright. I\u2019ve got a URL, a timestamp, a publication date, a title, and a description, all retrieved from the HTML. I\u2019ll stick all of that data into my browsingHistory array.\n\n\nbrowsingHistory.push(data);\n\n\nMy if statement and my for/of loop are finished at this point. Here\u2019s how the whole loop looks:\n\nfor (const request of keys) {\n const response = await cache.match(request);\n const html = await response.text();\n const dom = parser.parseFromString(html, 'text/html');\n if (dom.querySelector('.h-entry h1.p-name')) {\n const data = new Object;\n data.url = request.url;\n data.timestamp = new Date(dom.querySelector('.h-entry .dt-published').getAttribute('datetime'));\n data.published = dom.querySelector('.h-entry .dt-published').innerText;\n data.title = dom.querySelector('.h-entry .p-name').innerText;\n data.description = dom.querySelector('meta[name=\"description\"]').getAttribute('content');\n browsingHistory.push(data);\n }\n}\n\n\nThat\u2019s the data collection part of the code. 
Now I\u2019m going to take all that yummy information and output it onto the page.\n\nFirst of all, I want to make sure that the browsingHistory array isn\u2019t empty. There\u2019s no point going any further if it is.\n\n\nif (browsingHistory) {\n...\n}\n\n\nWithin this if statement, I can do what I want with the data I\u2019ve put into the browsingHistory array.\n\nI\u2019m going to arrange the data by date published. I\u2019m not sure if this is the right thing to do. Maybe it makes more sense to show the pages in the order in which you last visited them. I may end up removing this at some point, but for now, here\u2019s how I sort the browsingHistory array according to the timestamp property of each item within it:\n\n\nbrowsingHistory.sort(\u00a0(a,b)\u00a0=>\u00a0{\n return\u00a0b.timestamp\u00a0-\u00a0a.timestamp;\n});\n\n\nNow I\u2019m going to concatenate some strings. This is the string of HTML text that will eventually be put into the \u201chistory\u201d div. I\u2019m storing the markup in a string called \u2026markup (my imagination knows no bounds).\n\n\nlet\u00a0markup\u00a0=\u00a0'<p>But\u00a0you\u00a0still\u00a0have\u00a0something\u00a0to\u00a0read:</p>';\n\n\nI\u2019m going to add a chunk of markup for each item of data.\n\nbrowsingHistory.forEach(\u00a0data\u00a0=>\u00a0{\n markup\u00a0+=\u00a0`\n<h2><a\u00a0href=\"${\u00a0data.url\u00a0}\">${\u00a0data.title\u00a0}</a></h2>\n<p>${\u00a0data.description\u00a0}</p>\n<p class=\"meta\">${\u00a0data.published\u00a0}</p>\n`;\n});\n\n\nWith my markup assembled, I can now insert it into the \u201chistory\u201d part of my offline page. 
I\u2019m using the handy insertAdjacentHTML() method to do this.\n\n\ndocument.getElementById('history').insertAdjacentHTML('beforeend',\u00a0markup);\n\n\nHere\u2019s what my finished JavaScript looks like:\n\n<script>\n(async function listPages() {\n const browsingHistory = [];\n const parser = new DOMParser();\n const cache = await caches.open('pages');\n const keys = await cache.keys();\n for (const request of keys) {\n const response = await cache.match(request);\n const html = await response.text();\n const dom = parser.parseFromString(html, 'text/html');\n if (dom.querySelector('.h-entry h1.p-name')) {\n const data = new Object;\n data.url = request.url;\n data.timestamp = new Date(dom.querySelector('.h-entry .dt-published').getAttribute('datetime'));\n data.published = dom.querySelector('.h-entry .dt-published').innerText;\n data.title = dom.querySelector('.h-entry .p-name').innerText;\n data.description = dom.querySelector('meta[name=\"description\"]').getAttribute('content');\n browsingHistory.push(data);\n }\n }\n if (browsingHistory) {\n browsingHistory.sort( (a,b) => {\n return b.timestamp - a.timestamp;\n });\n let markup = '<p>But you still have something to read:</p>';\n browsingHistory.forEach( data => {\n markup += `\n<h2><a href=\"${ data.url }\">${ data.title }</a></h2>\n<p>${ data.description }</p>\n<p class=\"meta\">${ data.published }</p>\n`;\n });\n document.getElementById('history').insertAdjacentHTML('beforeend', markup);\n }\n})();\n</script>\n\n\nI\u2019m pretty happy with that. It\u2019s not too long but it\u2019s still quite readable (I hope). It shows that the Cache API and the h-entry microformat are a match made in heaven.\n\nIf you\u2019ve got an offline strategy for your website, and you\u2019re using h-entry to mark up your content, feel free to use that code.\n\nIf you don\u2019t have an offline strategy for your website, there\u2019s a book for that.",
"html": "<p>For the offline page on my website, I\u2019ve been using a mixture of the Cache API and the <code>localStorage</code> API. My service worker script uses the Cache API to store copies of pages for offline retrieval. But I used the <code>localStorage</code> API to store metadata about the page\u2014title, description, and so on. Then, my offline page would rifle through the pages stored in a cache, and retrieve the corresponding metadata from <code>localStorage</code>.</p>\n\n<p>It all worked fine, but as soon as I read <a href=\"https://remysharp.com/2019/09/05/offline-listings\">Remy\u2019s post</a> about the forehead-slappingly brilliant technique he\u2019s using, I knew I\u2019d be switching my code over. Instead of using <code>localStorage</code>\u2014or any other browser API\u2014to store and retrieve metadata, he uses the pages themselves! Using the Cache API, you can examine the contents of the pages you\u2019ve stored, and get at whatever information you need:</p>\n\n<blockquote>\n <p>I realised I didn\u2019t need to store anything. <strong>HTML is the API</strong>.</p>\n</blockquote>\n\n<p>Refactoring the code for my offline page felt good for a couple of reasons. First of all, I was able to remove a dependency\u2014<code>localStorage</code>\u2014and simplify the JavaScript. That always feels good. But the other reason for the warm fuzzies is that I was able to use data instead of metadata.</p>\n\n<p>Many years ago, Cory Doctorow wrote a piece called <a href=\"https://people.well.com/user/doctorow/metacrap.htm\">Metacrap</a>. In it, he enumerates the many issues with metadata\u2014data about data. The source of many problems is when the metadata is stored separately from the data it describes. The data may get updated, without a corresponding update happening to the metadata. 
Metadata tends to rot because it\u2019s invisible\u2014out of sight and out of mind.</p>\n\n<p>In fact, that\u2019s always been at the heart of one of the core principles behind <a href=\"https://indieweb.org/microformats\">microformats</a>. Instead of duplicating information\u2014once as data and again as metadata\u2014repurpose the <em>visible</em> data; mark it up so its meta-information is directly attached to the information itself.</p>\n\n<p>So if you have a person\u2019s contact details on a web page, rather than repeating that information somewhere else\u2014in the <code>head</code> of the document, say\u2014you could instead attach some kind of marker to indicate which bits of the visible information are contact details. In the case of microformats, that\u2019s done with <code>class</code> attributes. You can mark up a page that already has your contact information with classes from <a href=\"https://indieweb.org/h-card\">the h-card microformat</a>.</p>\n\n<p>Here on my website, I\u2019ve marked up my blog posts, articles, and links using <a href=\"https://indieweb.org/h-entry\">the h-entry microformat</a>. These classes explicitly mark up the content to say \u201c<em>this</em> is the title\u201d, \u201c<em>this</em> is the content\u201d, and so on. This makes it easier for other people to repurpose my content. If, for example, I reply to a post on someone else\u2019s website, and ping them with a <a href=\"https://indieweb.org/webmention\">webmention</a>, they can retrieve my post and know which bit is the title, which bit is the content, and so on.</p>\n\n<p>When I read Remy\u2019s post about using the Cache API to retrieve information directly from cached pages, I knew I wouldn\u2019t have to do much work. 
Because all of my posts are already marked up with h-entry classes, I could use those hooks to create a nice offline page.</p>\n\n<p>The markup for my offline page looks like this:</p>\n\n<pre><code><h1>Offline</h1>\n<p>Sorry.\u00a0It\u00a0looks\u00a0like\u00a0the\u00a0network\u00a0connection\u00a0isn\u2019t\u00a0working\u00a0right\u00a0now.</p>\n<div\u00a0id=\"history\">\n</div>\n</code></pre>\n\n<p>I\u2019ll populate that \u201chistory\u201d <code>div</code> with information from a cache called \u201cpages\u201d that I\u2019ve created using the Cache API in my service worker.</p>\n\n<p>I\u2019m going to use <code>async</code>/<code>await</code> to do this because there are lots of steps that rely on the completion of the step before. \u201cOpen this cache, <em>then</em> get the keys of that cache, <em>then</em> loop through the pages, <em>then</em>\u2026\u201d All of those <code>then</code>s would lead to some serious indentation without <code>async</code>/<code>await</code>.</p>\n\n<p>All <code>async</code> functions have to have a name\u2014no anonymous <code>async</code> functions allowed. I\u2019m calling this one <code>listPages</code>, just like Remy is doing. I\u2019m making the <code>listPages</code> function execute immediately:</p>\n\n<p><code>\n(async\u00a0function\u00a0listPages()\u00a0{\n...\n})();\n</code></p>\n\n<p>Now for the code to go inside that immediately-invoked function.</p>\n\n<p>I create an array called <code>browsingHistory</code> that I\u2019ll populate with the data I\u2019ll use for that \u201chistory\u201d <code>div</code>.</p>\n\n<p><code>\nconst browsingHistory = [];\n</code></p>\n\n<p>I\u2019m going to be parsing web pages later on, so I\u2019m going to need a DOM parser. I give it the imaginative name of \u2026<code>parser</code>.</p>\n\n<p><code>\nconst parser = new DOMParser();\n</code></p>\n\n<p>Time to open up my \u201cpages\u201d cache. This is the first <code>await</code> statement. 
When the cache is opened, this promise will resolve and I\u2019ll have access to this cache using the variable \u2026<code>cache</code> (again with the imaginative naming).</p>\n\n<p><code>\nconst cache = await caches.open('pages');\n</code></p>\n\n<p>Now I get the keys of the cache\u2014that\u2019s a list of all the page requests in there. This is the second <code>await</code>. Once the keys have been retrieved, I\u2019ll have a variable that\u2019s got a list of all those pages. You\u2019ll never guess what I\u2019m calling the variable that stores the keys of the cache. That\u2019s right \u2026<code>keys</code>!</p>\n\n<p><code>\nconst keys = await cache.keys();\n</code></p>\n\n<p>Time to get looping. I\u2019m getting each request in the list of keys using a <code>for</code>/<code>of</code> loop:</p>\n\n<p><code>\nfor (const request of keys) {\n..\n}\n</code></p>\n\n<p>Inside the loop, I pull the page out of the cache using the <code>match()</code> method of the Cache API. I\u2019ll store what I get back in a variable called <code>response</code>. As with everything involving the Cache API, this is asynchronous so I need to use the <code>await</code> keyword here.</p>\n\n<p><code>\nconst response = await cache.match(request);\n</code></p>\n\n<p>I\u2019m not interested in the headers of the response. I\u2019m specifically looking for the HTML itself. I can get at that using the <code>text()</code> method. Again, it\u2019s asynchronous and I want this promise to resolve before doing anything else, so I use the <code>await</code> keyword. When the promise resolves, I\u2019ll have a variable called <code>html</code> that contains the body of the response.</p>\n\n<p><code>\nconst html = await response.text();\n</code></p>\n\n<p>Now I can use that DOM parser I created earlier. I\u2019ve got a string of text in the <code>html</code> variable. I can generate a Document Object Model from that string using the <code>parseFromString()</code> method. 
This isn\u2019t asynchronous so there\u2019s no need for the <code>await</code> keyword.</p>\n\n<p><code>\nconst dom = parser.parseFromString(html, 'text/html');\n</code></p>\n\n<p>Now I\u2019ve got a DOM, which I have creatively stored in a variable called \u2026<code>dom</code>.</p>\n\n<p>I can poke at it using DOM methods like <code>querySelector</code>. I can test to see if this particular page has an h-entry on it by looking for an element with a <code>class</code> attribute containing the value \u201ch-entry\u201d:</p>\n\n<p><code>\nif (dom.querySelector('.h-entry h1.p-name')) {\n...\n}\n</code></p>\n\n<p>In this particular case, I\u2019m also checking to see if the <code>h1</code> element of the page is the title of the h-entry. That\u2019s so that index pages (like my home page) won\u2019t get past this <code>if</code> statement.</p>\n\n<p>Inside the <code>if</code> statement, I\u2019m going to store the data I retrieve from the DOM. I\u2019ll save the data into an object called \u2026<code>data</code>!</p>\n\n<p><code>\nconst data = new Object;\n</code></p>\n\n<p>Well, the first piece of data isn\u2019t actually in the markup: it\u2019s the URL of the page. I can get that from the <code>request</code> variable in my <code>for</code> loop.</p>\n\n<p><code>\ndata.url = request.url;\n</code></p>\n\n<p>I\u2019m going to store the timestamp for this h-entry. 
I can get that from the <code>datetime</code> attribute of the <code>time</code> element marked up with a class of <code>dt-published</code>.</p>\n\n<p><code>\ndata.timestamp = new Date(dom.querySelector('.h-entry .dt-published').getAttribute('datetime'));\n</code></p>\n\n<p>While I\u2019m at it, I\u2019m going to grab the human-readable date from the <code>innerText</code> property of that same <code>time.dt-published</code> element.</p>\n\n<p><code>\ndata.published = dom.querySelector('.h-entry .dt-published').innerText;\n</code></p>\n\n<p>The title of the h-entry is in the <code>innerText</code> of the element with a class of <code>p-name</code>.</p>\n\n<p><code>\ndata.title = dom.querySelector('.h-entry .p-name').innerText;\n</code></p>\n\n<p>At this point, I am actually going to use some metacrap instead of the visible h-entry content. I don\u2019t output a description of the post anywhere in the <code>body</code> of the page, but I do put it in the <code>head</code> in a <code>meta</code> element. I\u2019ll grab that now.</p>\n\n<p><code>\ndata.description = dom.querySelector('meta[name=\"description\"]').getAttribute('content');\n</code></p>\n\n<p>Alright. I\u2019ve got a URL, a timestamp, a publication date, a title, and a description, all retrieved from the HTML. I\u2019ll stick all of that data into my <code>browsingHistory</code> array.</p>\n\n<p><code>\nbrowsingHistory.push(data);\n</code></p>\n\n<p>My <code>if</code> statement and my <code>for</code>/<code>of</code> loop are finished at this point. 
Here\u2019s how the whole loop looks:</p>\n\n<pre><code>for (const request of keys) {\n const response = await cache.match(request);\n const html = await response.text();\n const dom = parser.parseFromString(html, 'text/html');\n if (dom.querySelector('.h-entry h1.p-name')) {\n const data = new Object;\n data.url = request.url;\n data.timestamp = new Date(dom.querySelector('.h-entry .dt-published').getAttribute('datetime'));\n data.published = dom.querySelector('.h-entry .dt-published').innerText;\n data.title = dom.querySelector('.h-entry .p-name').innerText;\n data.description = dom.querySelector('meta[name=\"description\"]').getAttribute('content');\n browsingHistory.push(data);\n }\n}\n</code></pre>\n\n<p>That\u2019s the data collection part of the code. Now I\u2019m going to take all that yummy information and output it onto the page.</p>\n\n<p>First of all, I want to make sure that the <code>browsingHistory</code> array isn\u2019t empty. There\u2019s no point going any further if it is.</p>\n\n<p><code>\nif (browsingHistory) {\n...\n}\n</code></p>\n\n<p>Within this <code>if</code> statement, I can do what I want with the data I\u2019ve put into the <code>browsingHistory</code> array.</p>\n\n<p>I\u2019m going to arrange the data by date published. I\u2019m not sure if this is the right thing to do. Maybe it makes more sense to show the pages in the order in which you last visited them. I may end up removing this at some point, but for now, here\u2019s how I sort the <code>browsingHistory</code> array according to the <code>timestamp</code> property of each item within it:</p>\n\n<p><code>\nbrowsingHistory.sort(\u00a0(a,b)\u00a0=>\u00a0{\n return\u00a0b.timestamp\u00a0-\u00a0a.timestamp;\n});\n</code></p>\n\n<p>Now I\u2019m going to concatenate some strings. This is the string of HTML text that will eventually be put into the \u201chistory\u201d <code>div</code>. 
I\u2019m storing the markup in a string called \u2026<code>markup</code> (my imagination knows no bounds).</p>\n\n<p><code>\nlet markup = '<p>But you still have something to read:</p>';\n</code></p>\n\n<p>I\u2019m going to add a chunk of markup for each item of data.</p>\n\n<pre><code>browsingHistory.forEach( data => {\n markup += `\n<h2><a href=\"${ data.url }\">${ data.title }</a></h2>\n<p>${ data.description }</p>\n<p class=\"meta\">${ data.published }</p>\n`;\n});\n</code></pre>\n\n<p>With my markup assembled, I can now insert it into the \u201chistory\u201d part of my offline page. I\u2019m using the handy <code>insertAdjacentHTML()</code> method to do this.</p>\n\n<p><code>\ndocument.getElementById('history').insertAdjacentHTML('beforeend', markup);\n</code></p>\n\n<p>Here\u2019s what my finished JavaScript looks like:</p>\n\n<pre><code><script>\n(async function listPages() {\n const browsingHistory = [];\n const parser = new DOMParser();\n const cache = await caches.open('pages');\n const keys = await cache.keys();\n for (const request of keys) {\n const response = await cache.match(request);\n const html = await response.text();\n const dom = parser.parseFromString(html, 'text/html');\n if (dom.querySelector('.h-entry h1.p-name')) {\n const data = new Object;\n data.url = request.url;\n data.timestamp = new Date(dom.querySelector('.h-entry .dt-published').getAttribute('datetime'));\n data.published = dom.querySelector('.h-entry .dt-published').innerText;\n data.title = dom.querySelector('.h-entry .p-name').innerText;\n data.description = dom.querySelector('meta[name=\"description\"]').getAttribute('content');\n browsingHistory.push(data);\n }\n }\n if (browsingHistory.length) {\n browsingHistory.sort( (a,b) => {\n return b.timestamp - a.timestamp;\n });\n let markup = '<p>But you still have something to read:</p>';\n browsingHistory.forEach( data 
=> {\n markup += `\n<h2><a href=\"${ data.url }\">${ data.title }</a></h2>\n<p>${ data.description }</p>\n<p class=\"meta\">${ data.published }</p>\n`;\n });\n document.getElementById('history').insertAdjacentHTML('beforeend', markup);\n }\n})();\n</script>\n</code></pre>\n\n<p>I\u2019m pretty happy with that. It\u2019s not too long but it\u2019s still quite readable (I hope). It shows that the Cache API and the h-entry microformat are a match made in heaven.</p>\n\n<p>If you\u2019ve got an offline strategy for your website, and you\u2019re using h-entry to mark up your content, feel free to use that code.</p>\n\n<p>If you don\u2019t have an offline strategy for your website, <a href=\"https://abookapart.com/products/going-offline\">there\u2019s a book for that</a>.</p>"
},
"author": {
"type": "card",
"name": "Jeremy Keith",
"url": "https://adactio.com/",
"photo": "https://aperture-proxy.p3k.io/bbbacdf0a064621004f2ce9026a1202a5f3433e0/68747470733a2f2f6164616374696f2e636f6d2f696d616765732f70686f746f2d3135302e6a7067"
},
"post-type": "article",
"_id": "5353813",
"_source": "2",
"_is_read": true
}
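The sort-and-render step Keith describes can be exercised outside the browser. Below is a minimal sketch with the Cache API / DOMParser scraping stubbed out; the sample entries (made-up URLs, titles, and dates) stand in for data extracted from cached h-entry pages.

```javascript
// Sketch of the sort-and-render step from the article above.
// The Cache API / DOMParser scraping is stubbed out: these sample
// entries (made-up values) stand in for scraped h-entry data.
const browsingHistory = [
  { url: 'https://example.com/older', title: 'Older post',
    description: 'First.', published: '1 September 2019',
    timestamp: new Date('2019-09-01T10:00:00Z') },
  { url: 'https://example.com/newer', title: 'Newer post',
    description: 'Second.', published: '18 September 2019',
    timestamp: new Date('2019-09-18T10:00:00Z') },
];

// Newest first: subtracting Date objects coerces them to epoch milliseconds.
browsingHistory.sort((a, b) => b.timestamp - a.timestamp);

// Concatenate one chunk of markup per entry, as in the article.
let markup = '<p>But you still have something to read:</p>';
for (const data of browsingHistory) {
  markup += `
<h2><a href="${data.url}">${data.title}</a></h2>
<p>${data.description}</p>
<p class="meta">${data.published}</p>
`;
}
```

In the browser, `markup` would then be handed to `insertAdjacentHTML()` exactly as in the article.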
Why are social networks even called networks? They don't do networking; it's just one site! #IndieWeb is the true social network...
{
"type": "entry",
"published": "2019-09-21T12:11:00+0300",
"url": "https://fireburn.ru/posts/1569057060",
"category": [
"IndieWeb",
"silos"
],
"content": {
"text": "Why are social networks even called networks? They don't do networking; it's just one site! #IndieWeb is the true social network..."
},
"author": {
"type": "card",
"name": "Vika",
"url": "https://fireburn.ru/",
"photo": "https://aperture-proxy.p3k.io/53d3494aa1644e34c961228a4c1dd9a91d9ff775/68747470733a2f2f61766174617273312e67697468756275736572636f6e74656e742e636f6d2f752f373935333136333f733d34363026763d34"
},
"post-type": "note",
"_id": "5350648",
"_source": "1371",
"_is_read": true
}
{
"type": "entry",
"author": {
"name": "Manton Reece",
"url": "https://www.manton.org/",
"photo": "https://aperture-proxy.p3k.io/907926e361383204bd1bc913c143c23e70ae69bb/68747470733a2f2f6d6963726f2e626c6f672f6d616e746f6e2f6176617461722e6a7067"
},
"url": "https://www.manton.org/2019/09/20/wordpress-funding-and.html",
"name": "WordPress funding and market dominance",
"content": {
"html": "<p><a href=\"https://ma.tt/2019/09/series-d/\">Matt Mullenweg blogged</a> that Automattic has received a Series D funding round of $300 million. He had some interesting comments <a href=\"https://techcrunch.com/2019/09/19/automattic-ceo-matt-mullenweg-about-raising-300-million-and-the-open-web/\">in an interview with TechCrunch</a> about how much they want to grow WordPress, comparing it to Android\u2019s 85% market share and even going beyond that:</p>\n\n<blockquote>\n<p>What we want to do is to become the operating system for the open web. We want every website, whether it\u2019s e-commerce or anything to be powered by WordPress. And by doing so, we\u2019ll make sure that the web can go back to being more open, more integrated and more user-centric than it would be if proprietary platforms become dominant.</p>\n</blockquote>\n\n<p>I\u2019ve long been inspired by Automattic. They were the best company to acquire Tumblr and they seem well-positioned to make a dent in the dominance of Facebook and Twitter. But also I\u2019m thinking about one of the <a href=\"https://indieweb.org/principles\">IndieWeb\u2019s principles</a>:</p>\n\n<blockquote>\n<p>Plurality. With IndieWebCamp we\u2019ve specifically chosen to encourage and embrace a diversity of approaches & implementations. This background makes the IndieWeb stronger and more resilient than any one (often <a href=\"https://indieweb.org/monoculture\">monoculture</a>) approach.</p>\n</blockquote>\n\n<p>WordPress is at 34% of web sites right now, and I can easily see it getting to 50%. Growing bigger than that might take away one of the beautiful things about the web: the diversity and flexibility to move between platforms. I\u2019m rooting for Automattic to take market share away from the big social networks, but there should be a variety of tools available to build web sites, including platforms like Micro.blog.</p>",
"text": "Matt Mullenweg blogged that Automattic has received a Series D funding round of $300 million. He had some interesting comments in an interview with TechCrunch about how much they want to grow WordPress, comparing it to Android\u2019s 85% market share and even going beyond that:\n\n\nWhat we want to do is to become the operating system for the open web. We want every website, whether it\u2019s e-commerce or anything to be powered by WordPress. And by doing so, we\u2019ll make sure that the web can go back to being more open, more integrated and more user-centric than it would be if proprietary platforms become dominant.\n\n\nI\u2019ve long been inspired by Automattic. They were the best company to acquire Tumblr and they seem well-positioned to make a dent in the dominance of Facebook and Twitter. But also I\u2019m thinking about one of the IndieWeb\u2019s principles:\n\n\nPlurality. With IndieWebCamp we\u2019ve specifically chosen to encourage and embrace a diversity of approaches & implementations. This background makes the IndieWeb stronger and more resilient than any one (often monoculture) approach.\n\n\nWordPress is at 34% of web sites right now, and I can easily see it getting to 50%. Growing bigger than that might take away one of the beautiful things about the web: the diversity and flexibility to move between platforms. I\u2019m rooting for Automattic to take market share away from the big social networks, but there should be a variety of tools available to build web sites, including platforms like Micro.blog."
},
"published": "2019-09-20T14:04:48-05:00",
"category": [
"Essays"
],
"post-type": "article",
"_id": "5343383",
"_source": "12",
"_is_read": true
}
It’s Homebrew Website Club Brighton this evening in the @Clearleft HQ at 6pm:
https://indieweb.org/events/2019-09-19-homebrew-website-club
Come and work on your website (or get some writing done).
{
"type": "entry",
"published": "2019-09-19T10:39:14Z",
"url": "https://adactio.com/notes/15834",
"syndication": [
"https://twitter.com/adactio/status/1174634046565097473"
],
"content": {
"text": "It\u2019s Homebrew Website Club Brighton this evening in the @Clearleft HQ at 6pm:\n\nhttps://indieweb.org/events/2019-09-19-homebrew-website-club\n\nCome and work on your website (or get some writing done).",
"html": "<p>It\u2019s Homebrew Website Club Brighton this evening in the <a href=\"https://twitter.com/Clearleft\">@Clearleft</a> HQ at 6pm:</p>\n\n<p><a href=\"https://indieweb.org/events/2019-09-19-homebrew-website-club\">https://indieweb.org/events/2019-09-19-homebrew-website-club</a></p>\n\n<p>Come and work on your website (or get some writing done).</p>"
},
"author": {
"type": "card",
"name": "Jeremy Keith",
"url": "https://adactio.com/",
"photo": "https://aperture-proxy.p3k.io/bbbacdf0a064621004f2ce9026a1202a5f3433e0/68747470733a2f2f6164616374696f2e636f6d2f696d616765732f70686f746f2d3135302e6a7067"
},
"post-type": "note",
"_id": "5319754",
"_source": "2",
"_is_read": true
}
{
"type": "entry",
"published": "2019-09-19T07:41:00+0100",
"url": "https://www.jvt.me/mf2/f60d23c8-82d3-4d3a-baaf-dced33a027e9/",
"category": [
"personal-website",
"indieweb"
],
"bookmark-of": [
"https://www.vanschneider.com/a-love-letter-to-personal-websites"
],
"name": "A love letter to my website",
"author": {
"type": "card",
"name": "Jamie Tanna",
"url": "https://www.jvt.me",
"photo": "https://aperture-proxy.p3k.io/f4cac242182744deb91a5ee91d7528d78e657269/68747470733a2f2f7777772e6a76742e6d652f696d672f70726f66696c652e706e67"
},
"post-type": "bookmark",
"_id": "5318110",
"_source": "2169",
"_is_read": true
}
People are developing "addiction" to social media silos. By that logic, I seem to be addicted to the #IndieWeb.
{
"type": "entry",
"published": "2019-09-18T21:28:41+00:00",
"url": "https://fireburn.ru/posts/1568831321",
"category": [
"IndieWeb",
"silos"
],
"content": {
"text": "People are developing \"addiction\" to social media silos. By that logic, I seem to be addicted to the #IndieWeb.",
"html": "<p>People are developing \"addiction\" to social media silos. By that logic, I seem to be addicted to the #IndieWeb.</p>"
},
"author": {
"type": "card",
"name": "Vika",
"url": "https://fireburn.ru/",
"photo": "https://aperture-proxy.p3k.io/53d3494aa1644e34c961228a4c1dd9a91d9ff775/68747470733a2f2f61766174617273312e67697468756275736572636f6e74656e742e636f6d2f752f373935333136333f733d34363026763d34"
},
"post-type": "note",
"_id": "5313273",
"_source": "1371",
"_is_read": true
}
It's a webcomic called The Gamer, and the RSS link is here: https://www.webtoons.com/en/fantasy/the-gamer/rss?title_no=88
It's about a guy who accidentally got magic abilities and turned into a video game character, and the world around him became similar to a videogame - people started having stat blocks, special abilities and other stuff. It's originally Korean, but is translated to English and some other languages.
P.S. While writing this, I discovered my Micropub client doesn't have a field to set the in-reply-to
property. I edited the JSON manually to include it. I hope you get a webmention!
{
"type": "entry",
"published": "2019-09-18T19:57:37+00:00",
"url": "https://fireburn.ru/posts/1568825857",
"in-reply-to": [
"https://realize.be/reply/content/1895"
],
"content": {
"text": "It's a webcomic called The Gamer, and the RSS link is here: https://www.webtoons.com/en/fantasy/the-gamer/rss?title_no=88\nIt's about a guy who accidentally got magic abilities and turned into a video game character, and the world around him became similar to a videogame - people started having stat blocks, special abilities and other stuff. It's originally Korean, but is translated to English and some other languages.\nP.S. While writing this, I discovered my Micropub client doesn't have a field to set the in-reply-to property. I edited the JSON manually to include it. I hope you get a webmention!",
"html": "<p>It's a webcomic called The Gamer, and the RSS link is here: <a href=\"https://www.webtoons.com/en/fantasy/the-gamer/rss?title_no=88\">https://www.webtoons.com/en/fantasy/the-gamer/rss?title_no=88</a></p>\n<p>It's about a guy who accidentally got magic abilities and turned into a video game character, and the world around him became similar to a videogame - people started having stat blocks, special abilities and other stuff. It's originally Korean, but is translated to English and some other languages.</p>\n<p><b>P.S.</b> While writing this, I discovered my Micropub client doesn't have a field to set the <code>in-reply-to</code> property. I edited the JSON manually to include it. I hope you get a webmention!</p>"
},
"author": {
"type": "card",
"name": "Vika",
"url": "https://fireburn.ru/",
"photo": "https://aperture-proxy.p3k.io/53d3494aa1644e34c961228a4c1dd9a91d9ff775/68747470733a2f2f61766174617273312e67697468756275736572636f6e74656e742e636f6d2f752f373935333136333f733d34363026763d34"
},
"post-type": "reply",
"_id": "5312562",
"_source": "1371",
"_is_read": true
}
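For context, the `in-reply-to` property Vika edited in by hand is just a field in a Micropub JSON create request. A minimal sketch of such a body, per the Micropub spec's JSON syntax (only the body is built and inspected here; the endpoint and token in the comment are placeholders, and nothing is sent):

```javascript
// Sketch of a Micropub JSON create request carrying in-reply-to.
// The reply target is the URL from the post above; the content is
// abbreviated. Nothing is sent: we only build and serialize the body.
const body = {
  type: ['h-entry'],
  properties: {
    'in-reply-to': ['https://realize.be/reply/content/1895'],
    content: ["It's a webcomic called The Gamer..."],
  },
};

// A real client would POST this with an IndieAuth bearer token, e.g.:
// fetch(micropubEndpoint, {
//   method: 'POST',
//   headers: { 'Content-Type': 'application/json',
//              Authorization: `Bearer ${token}` },
//   body: JSON.stringify(body),
// });

const serialized = JSON.stringify(body);
```

A server that accepts this should then send a webmention to the reply target, which is what makes the "I hope you get a webmention!" part work.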
{
"type": "entry",
"published": "2019-09-18T17:46:49+02:00",
"url": "https://notiz.blog/2019/09/18/eine-posse/",
"name": "Eine POSSE!",
"content": {
"text": "Publish (on your) Own Site, Syndicate Elsewhere, or POSSE for short, is a central building block of the IndieWeb.\n\n\n\n\nPOSSE is an abbreviation for Publish (on your) Own Site, Syndicate Elsewhere, a content publishing model that starts with posting content on your own domain first, then syndicating out copies to 3rd party services with permashortlinks back to the original on your site.\nhttps://indieweb.org/POSSE\n\n\n\n\nThe idea: publish everything (texts, images, podcasts, videos, \u2026) on your own site first, and then share \u201ccopies\u201d via the social networks.\n\n\n\nAbout six years ago, hackr wrote the following about POSSE:\n\n\n\n\nthe indieweb doesn\u2019t differentiate between \u2018text\u2019 that is genuinely indie-worthy and text that arises in a drive-by, each following its own very specific logic. (the socially dysfunctional behaviour would be that syndicators are perceived on the respective platforms as spammers, or precisely as annoying syndicators, who neither understand the platform in question, nor respect its specificity, nor care about it, and just want to \u2018milk\u2019 it, so to speak)\nhackr\n\n\n\n\nIn summary: POSSE tears the syndicated posts out of their context, so they can(not) be properly placed in the corresponding social networks.\n\n\n\nBack then I argued strongly against this:\n\n\n\n\nIt\u2019s not about scattering a text/image/video across as many networks as possible, but exactly the opposite\u2026 You write the text you would otherwise have written explicitly on Twitter not on Twitter but on your own site, and then push it into the network in order to keep control over your text and a copy of it.\nme\n\n\n\n\nIt\u2019s not about syndicating as such, but about \u201ctweeting/facebooking/\u2026 via your own site\u201d.\n\n\n\n\ngood point, except that I question, so to speak, the existence of a tweet outside of twitter itself.\nhackr\n\n\n\n\nBy chance, a few weeks ago I was reminded of that discussion\u2026\n\n\n\nIn the end, the context isn\u2019t missing on Twitter & Co. but on one\u2019s \u201cown\u201d website. More and more bloggers around me POSSE, but only very few separate these posts from their classic articles. That means more and more disjointed short notes show up in my feed reader, some of them direct replies to tweets or even issues for GitHub projects.\n\n\n\nIn principle, it doesn\u2019t matter how you spin it\u2026 syndication loses the context, and hackr was right back then after all \ud83d\ude09\n\n\n\nTogether with \u201cmicroblogging via your own website\u201d, POSSE becomes a real problem in my feed reader \ud83d\ude41",
"html": "<p><em>Publish (on your) Own Site, Syndicate Elsewhere</em>, or <em>POSSE</em> for short, is a central <em>building block</em> of the IndieWeb.</p>\n\n\n\n<blockquote>\n<p><strong>POSSE</strong> is an abbreviation for <strong>Publish (on your) Own Site, Syndicate Elsewhere</strong>, a content publishing model that starts with posting content on your own domain first, then syndicating out copies to 3rd party services with <a href=\"https://indieweb.org/permashortlinks\">permashortlinks</a> back to the original on your site.</p>\n<a href=\"https://indieweb.org/POSSE\">https://indieweb.org/POSSE</a>\n</blockquote>\n\n\n\n<p>The idea: publish everything (texts, images, podcasts, videos, \u2026) on your own site first, and then share \u201ccopies\u201d via the social networks.</p>\n\n\n\n<p>About six years ago, hackr wrote the following about POSSE:</p>\n\n\n\n<blockquote>\n<p>the indieweb doesn\u2019t differentiate between \u2018text\u2019 that is genuinely indie-worthy and text that arises in a drive-by, each following its own very specific logic. (the socially dysfunctional behaviour would be that syndicators are perceived on the respective platforms as spammers, or precisely as annoying syndicators, who neither understand the platform in question, nor respect its specificity, nor care about it, and just want to \u2018milk\u2019 it, so to speak)</p>\n<a href=\"http://hackr.de/2014/01/20/quiz-pt-88-the-bad-good-idea-edition#comment-1208986056\">hackr</a>\n</blockquote>\n\n\n\n<p>In summary: POSSE tears the syndicated posts out of their context, so they can(not) be properly placed in the corresponding social networks.</p>\n\n\n\n<p>Back then I argued strongly against this:</p>\n\n\n\n<blockquote>\n<p>It\u2019s not about scattering a text/image/video across as many networks as possible, but exactly the opposite\u2026 You write the text you would otherwise have written explicitly on Twitter not on Twitter but on your own site, and then push it into the network in order to keep control over your text and a copy of it.</p>\n<a href=\"http://hackr.de/2014/01/20/quiz-pt-88-the-bad-good-idea-edition#comment-1209016428\">me</a>\n</blockquote>\n\n\n\n<p>It\u2019s not about syndicating as such, but about \u201ctweeting/facebooking/\u2026 via your own site\u201d.</p>\n\n\n\n<blockquote>\n<p>good point, except that I question, so to speak, the existence of a tweet outside of twitter itself.</p>\n<a href=\"http://hackr.de/2014/01/20/quiz-pt-88-the-bad-good-idea-edition#comment-1209144288\">hackr</a>\n</blockquote>\n\n\n\n<p>By chance, a few weeks ago I was reminded of that discussion\u2026</p>\n\n\n\n<p>In the end, the context isn\u2019t missing on Twitter & Co. but on one\u2019s \u201cown\u201d website. More and more bloggers around me<em> POSSE</em>, but only very few separate these posts from their classic articles. That means more and more disjointed short notes show up in my feed reader, some of them direct replies to <em><a href=\"http://tantek.com/2019/163/f7\">tweets</a></em> or even <em><a href=\"https://dougbeal.com/2018/10/20/https-githubcom-indieweb-indiewebify-me-issues/\">issues</a></em> for GitHub projects.</p>\n\n\n\n<p>In principle, it doesn\u2019t matter how you spin it\u2026 syndication loses the context, and <strong>hackr was right back then after all</strong> \ud83d\ude09</p>\n\n\n\n<p>Together with \u201c<a href=\"https://notiz.blog/2019/02/21/untitled/\">microblogging via your own website</a>\u201d, POSSE becomes a real problem in my feed reader \ud83d\ude41</p>"
},
"author": {
"type": "card",
"name": "Matthias Pfefferle",
"url": "https://notiz.blog/author/matthias-pfefferle/",
"photo": "https://secure.gravatar.com/avatar/75512bb584bbceae57dfc503692b16b2?s=40&d=https://notiz.blog/wp-content/plugins/semantic-linkbacks/img/mm.jpg&r=g"
},
"post-type": "article",
"_id": "5309960",
"_source": "206",
"_is_read": true
}
I think at https://www.jvt.me/events/homebrew-website-club-nottingham/2019/09/18/ tonight I'm going to write a how-to for setting up your first h-card, similar to https://www.jvt.me/posts/2019/08/21/rsvp-from-your-website/
{
"type": "entry",
"published": "2019-09-18T16:14:00+0100",
"url": "https://www.jvt.me/mf2/b17d3509-62dc-40fc-8bf6-a6d04fba4903/",
"category": [
"indieweb",
"microformats"
],
"content": {
"text": "I think at https://www.jvt.me/events/homebrew-website-club-nottingham/2019/09/18/ tonight I'm going to write a how-to for setting up your first h-card, similar to https://www.jvt.me/posts/2019/08/21/rsvp-from-your-website/"
},
"author": {
"type": "card",
"name": "Jamie Tanna",
"url": "https://www.jvt.me",
"photo": "https://aperture-proxy.p3k.io/f4cac242182744deb91a5ee91d7528d78e657269/68747470733a2f2f7777772e6a76742e6d652f696d672f70726f66696c652e706e67"
},
"post-type": "note",
"_id": "5309222",
"_source": "2169",
"_is_read": true
}
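A first h-card of the kind that how-to would cover is just a handful of microformats2 class names on existing markup. A minimal, illustrative example, held in a string here (the name, URL, and photo path are taken from the author card in the feed entry above, not prescribed by it):

```javascript
// Minimal h-card markup, held in a string for illustration.
// class="h-card" marks the card; u-photo marks the avatar; the link
// text becomes the p-name. Values are from the feed entry above.
const hCard = `
<a class="h-card" href="https://www.jvt.me" rel="me">
  <img class="u-photo" src="https://www.jvt.me/img/profile.png" alt="" />
  Jamie Tanna
</a>
`;
```

Dropping markup like this on a homepage is enough for parsers (and readers like the ones in this feed) to discover an author name, URL, and photo.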
We choose whether our work stays alive on the internet. As long as we keep our hosting active, our site remains online. Compare that to social media platforms that go public one day and bankrupt the next, shutting down their app and your content along with it.
Your content is yours.
But the real truth is that as long as we’re putting our work in someone else’s hands, we forfeit our ownership over it. When we create our own website, we own it – at least to the extent that the internet, beautiful in its amorphous existence, can be owned.
{
"type": "entry",
"published": "2019-09-18T14:16:48Z",
"url": "https://adactio.com/links/15825",
"category": [
"indieweb",
"personal",
"publishing",
"homepages",
"websites",
"independent",
"ownership"
],
"bookmark-of": [
"https://www.vanschneider.com/a-love-letter-to-personal-websites"
],
"content": {
"text": "A love letter to my website - DESK Magazine\n\n\n\n\n We choose whether our work stays alive on the internet. As long as we keep our hosting active, our site remains online. Compare that to social media platforms that go public one day and bankrupt the next, shutting down their app and your content along with it.\n\n\nYour content is yours.\n\n\n But the real truth is that as long as we\u2019re putting our work in someone else\u2019s hands, we forfeit our ownership over it. When we create our own website, we own it \u2013 at least to the extent that the internet, beautiful in its amorphous existence, can be owned.",
"html": "<h3>\n<a class=\"p-name u-bookmark-of\" href=\"https://www.vanschneider.com/a-love-letter-to-personal-websites\">\nA love letter to my website - DESK Magazine\n</a>\n</h3>\n\n<blockquote>\n <p>We choose whether our work stays alive on the internet. As long as we keep our hosting active, our site remains online. Compare that to social media platforms that go public one day and bankrupt the next, shutting down their app and your content along with it.</p>\n</blockquote>\n\n<p><a href=\"https://indieweb.org/\">Your content is yours</a>.</p>\n\n<blockquote>\n <p>But the real truth is that as long as we\u2019re putting our work in someone else\u2019s hands, we forfeit our ownership over it. When we create our own website, we own it \u2013 at least to the extent that the internet, beautiful in its amorphous existence, can be owned.</p>\n</blockquote>"
},
"author": {
"type": "card",
"name": "Jeremy Keith",
"url": "https://adactio.com/",
"photo": "https://aperture-proxy.p3k.io/bbbacdf0a064621004f2ce9026a1202a5f3433e0/68747470733a2f2f6164616374696f2e636f6d2f696d616765732f70686f746f2d3135302e6a7067"
},
"post-type": "bookmark",
"_id": "5308462",
"_source": "2",
"_is_read": true
}
I love my blog. It's like Twitter, but has more warm, fuzzy vibes. And my feed isn't toxic! #IndieWeb 😍
{
"type": "entry",
"published": "2019-09-18T11:41:14+00:00",
"url": "https://fireburn.ru/posts/1568796074",
"category": [
"IndieWeb",
"syndication",
"ownyourdata"
],
"content": {
"text": "I love my blog. It's like Twitter, but has more warm, fuzzy vibes. And my feed isn't toxic! #IndieWeb \ud83d\ude0d",
"html": "<p>I love my blog. It's like Twitter, but has more warm, fuzzy vibes. And my feed isn't toxic! #IndieWeb \ud83d\ude0d</p>"
},
"author": {
"type": "card",
"name": "Vika",
"url": "https://fireburn.ru/",
"photo": "https://aperture-proxy.p3k.io/53d3494aa1644e34c961228a4c1dd9a91d9ff775/68747470733a2f2f61766174617273312e67697468756275736572636f6e74656e742e636f6d2f752f373935333136333f733d34363026763d34"
},
"post-type": "note",
"_id": "5305407",
"_source": "1371",
"_is_read": true
}
I seriously need to make syndication work on my blog, because nobody reads me!
{
"type": "entry",
"published": "2019-09-18T11:36:41+00:00",
"url": "https://fireburn.ru/posts/1568795801",
"category": [
"IndieWeb",
"syndication",
"ownyourdata"
],
"content": {
"text": "I seriously need to make syndication work on my blog, because nobody reads me!",
"html": "<p>I seriously need to make syndication work on my blog, because nobody reads me!</p>"
},
"author": {
"type": "card",
"name": "Vika",
"url": "https://fireburn.ru/",
"photo": "https://aperture-proxy.p3k.io/53d3494aa1644e34c961228a4c1dd9a91d9ff775/68747470733a2f2f61766174617273312e67697468756275736572636f6e74656e742e636f6d2f752f373935333136333f733d34363026763d34"
},
"post-type": "note",
"_id": "5305408",
"_source": "1371",
"_is_read": true
}
Now I understand why @aaronpk has so many channels in his Monocle screenshots. I have 8 channels right now:
- Home (my IndieWeb feed)
- Friends on Silos (mostly Twitter 'cause Instagram via Granary is unstable)
- Comics (XKCD and Naver Webtoons - the last one is crashing @swentel's Indigenous)
- News (Meduza, my favorite Russian electronic newspaper)
- Podcasts (currently only myurlis.com)
- YouTube (some of my YouTube subscriptions - I migrated my pop-science channels there)
- Self (a view on my own posts, very useful for debugging!)
Channel Icons
I use emojis as icons for channel feeds to make them more colorful. I like the colorfulness of social media silos and I don't want my IndieWeb feeds to be less attractive than the silos. It seems I'm not the only one; I've seen other people do the same thing.
So, what does your reader feed look like?
{
"type": "entry",
"published": "2019-09-18T11:33:06+00:00",
"url": "https://fireburn.ru/posts/1568795586",
"category": [
"Microsub",
"IndieWeb"
],
"content": {
"text": "Now I understand why @aaronpk has so many channels in his Monocle screenshots. I have 8 channels right now:\nHome (my IndieWeb feed)\nFriends on Silos (mostly Twitter 'cause Instagram via Granary is unstable)\nComics (XKCD and Naver Webtoons - the last one is crashing @swentel's Indigenous)\nNews (Meduza, my favorite Russian electronic newspaper)\nPodcasts (currently only myurlis.com)\nYouTube (some of my YouTube subscriptions - I migrated my pop-science channels there)\nSelf (a view on my own posts, very useful for debugging!)\nChannel Icons\nI use emojis as icons for channel feeds to make them more colorful. I like the colorfulness of social media silos and I don't want my IndieWeb feeds to be less attractive than the silos. It seems I'm not the only one; I've seen other people do the same thing.\nSo, what does your reader feed look like?",
"html": "<p>Now I understand why <a href=\"https://aaronparecki.com\">@aaronpk</a> has so many channels in his Monocle screenshots. I have 8 channels right now:</p>\n<ul><li>Home (my IndieWeb feed)</li>\n<li>Friends on Silos (mostly Twitter 'cause Instagram via Granary is unstable)</li>\n<li>Comics (XKCD and Naver Webtoons - the last one is crashing <a href=\"https://realize.be/\">@swentel</a>'s Indigenous)</li>\n<li>News (Meduza, my favorite Russian electronic newspaper)</li>\n<li>Podcasts (currently only <a href=\"https://myurlis.com/\">myurlis.com</a>)</li>\n<li>YouTube (some of my YouTube subscriptions - I migrated my pop-science channels there)</li>\n<li>Self (a view on my own posts, very useful for debugging!)</li>\n</ul><h2>Channel Icons</h2>\n<p>I use emojis as icons for channel feeds to make them more colorful. I like the colorfulness of social media silos and I don't want my IndieWeb feeds to be less attractive than the silos. It seems I'm not the only one; I've seen other people do the same thing.</p>\n<p><em>So, what does your reader feed look like?</em></p>"
},
"author": {
"type": "card",
"name": "Vika",
"url": "https://fireburn.ru/",
"photo": "https://aperture-proxy.p3k.io/53d3494aa1644e34c961228a4c1dd9a91d9ff775/68747470733a2f2f61766174617273312e67697468756275736572636f6e74656e742e636f6d2f752f373935333136333f733d34363026763d34"
},
"post-type": "note",
"_id": "5305410",
"_source": "1371",
"_is_read": true
}