{ "type": "entry", "author": { "name": "mail@petermolnar.net (Peter Molnar)", "url": "https://petermolnar.superfeedr.com/", "photo": null }, "url": "https://petermolnar.net/linkedin-public-settings-ignored/", "published": "2018-01-14T12:00:00+00:00", "content": { "html": "<p>A few days ago, on the #indieweb Freenode channel<a href=\"https://petermolnar.superfeedr.com/#fn1\">1</a> one of the users asked if we knew an indieweb-friendly way of getting data out of LinkedIn. I wasn\u2019t paying attention to any recent news related to LinkedIn, though I\u2019ve heard a few things, such as that they are struggling to prevent data scraping: the note mentioned that they believe it\u2019s a problem that employers keep an eye on changes in LinkedIn profiles via 3rd parties. This, indeed, can be an issue, but there are ways to manage this within LinkedIn: your public profile settings<a href=\"https://petermolnar.superfeedr.com/#fn2\">2</a>.</p>\n<p>In my case, this was set to visible to everyone for years, and by the time I had to set it up (again: years), it was working as intended. But a few days ago, to my surprise, visiting my profile while logged out resulted in this:</p>\n<img src=\"https://aperture-proxy.p3k.io/1978efe54035ccfd218296557b7da601149aa18f/68747470733a2f2f70657465726d6f6c6e61722e6e65742f6c696e6b6564696e2d7075626c69632d73657474696e67732d69676e6f7265642f6c696e6b6564696e2d7075626c69632d70726f66696c652d6973737565732d6175746877616c6c2e706e67\" title=\"linkedin-public-profile-issues-authwall\" alt=\"\" />\nLinkedIn showing a paywall-like \u2018authwall\u2019 for profiles set explicitly to public for everyone\n<p>and this:</p>\n<pre><code>$ wget -O- https://www.linkedin.com/in/petermolnareu\n--2018-01-14 10:26:12-- https://www.linkedin.com/in/petermolnareu\nResolving www.linkedin.com (www.linkedin.com)... 91.225.248.129, 2620:109:c00c:104::b93f:9001\nConnecting to www.linkedin.com (www.linkedin.com)|91.225.248.129|:443... 
connected.\nHTTP request sent, awaiting response... 999 Request denied\n2018-01-14 10:26:12 ERROR 999: Request denied.</code></pre>\n<p>or this:</p>\n<pre><code>$ curl https://www.linkedin.com/in/petermolnareu\n<html><head>\n<script type=\"text/javascript\">\nwindow.onload = function() {\n // Parse the tracking code from cookies.\n var trk = \"bf\";\n var trkInfo = \"bf\";\n var cookies = document.cookie.split(\"; \");\n for (var i = 0; i < cookies.length; ++i) {\n if ((cookies[i].indexOf(\"trkCode=\") == 0) && (cookies[i].length > 8)) {\n trk = cookies[i].substring(8);\n }\n else if ((cookies[i].indexOf(\"trkInfo=\") == 0) && (cookies[i].length > 8)) {\n trkInfo = cookies[i].substring(8);\n }\n }\n\n if (window.location.protocol == \"http:\") {\n // If \"sl\" cookie is set, redirect to https.\n for (var i = 0; i < cookies.length; ++i) {\n if ((cookies[i].indexOf(\"sl=\") == 0) && (cookies[i].length > 3)) {\n window.location.href = \"https:\" + window.location.href.substring(window.location.protocol.length);\n return;\n }\n }\n }\n\n // Get the new domain. For international domains such as\n // fr.linkedin.com, we convert it to www.linkedin.com\n var domain = \"www.linkedin.com\";\n if (domain != location.host) {\n var subdomainIndex = location.host.indexOf(\".linkedin\");\n if (subdomainIndex != -1) {\n domain = \"www\" + location.host.substring(subdomainIndex);\n }\n }\n\n window.location.href = \"https://\" + domain + \"/authwall?trk=\" + trk + \"&trkInfo=\" + trkInfo +\n \"&originalReferer=\" + document.referrer.substr(0, 200) +\n \"&sessionRedirect=\" + encodeURIComponent(window.location.href);\n}\n</script>\n</head></html></code></pre>\nSo I started digging. According to the LinkedIn FAQ<a href=\"https://petermolnar.superfeedr.com/#fn3\">3</a> there is a page where you can set your profile\u2019s public visibility. 
Those settings, for me, were still set to:\n<img src=\"https://aperture-proxy.p3k.io/7fc1cefd271d676ae70f9dbb4c79d45ace61788f/68747470733a2f2f70657465726d6f6c6e61722e6e65742f6c696e6b6564696e2d7075626c69632d73657474696e67732d69676e6f7265642f6c696e6b6564696e2d7075626c69632d70726f66696c652d6973737565732d73657474696e67732e706e67\" title=\"linkedin-public-profile-issues-settings\" alt=\"\" />\nLinkedIn public profile settings\n<p>Despite the settings, there is no public profile for logged out users.</p>\n<p>I\u2019d like to understand what is going on, because so far, this looks like a fat lie from LinkedIn. Hopefully just a bug.</p>\n<h2>UPDATE</h2>\n<p><del>I tried setting referrers and user agents, used different IP addresses, still nothing.</del> I can\u2019t type today and managed to mistype <code>https://google.com</code> - the referrer ended up as <code>https:/google.com</code>. So, following the notes on HN, setting a referrer to Google sometimes works. After a few failures it will lock you out again, referrer or not. 
This is even uglier than if it were a proper authwall for everyone.</p>\n<pre><code>curl 'https://www.linkedin.com/in/petermolnareu' \\\n-e 'https://google.com/' \\\n-H 'accept-encoding: text' -H \\\n'accept-language: en-US,en;q=0.9,' \\\n-H 'user-agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.132 Safari/537.36'</code></pre>\n<pre><code><!DOCTYPE html>...</code></pre>\n\n\n<ol><li><p><a href=\"https://chat.indieweb.org/\">https://chat.indieweb.org</a><a href=\"https://petermolnar.superfeedr.com/#fnref1\">\u21a9</a></p></li>\n<li><p><a href=\"https://www.linkedin.com/public-profile/settings\">https://www.linkedin.com/public-profile/settings</a><a href=\"https://petermolnar.superfeedr.com/#fnref2\">\u21a9</a></p></li>\n<li><p><a href=\"https://www.linkedin.com/help/linkedin/answer/83?query=public\">https://www.linkedin.com/help/linkedin/answer/83?query=public</a><a href=\"https://petermolnar.superfeedr.com/#fnref3\">\u21a9</a></p></li>\n</ol>", "text": "A few days ago, on the #indieweb Freenode channel1 one of the users asked if we knew an indieweb-friendly way of getting data out of LinkedIn. I wasn\u2019t paying attention to any recent news related to LinkedIn, though I\u2019ve heard a few things, such as that they are struggling to prevent data scraping: the note mentioned that they believe it\u2019s a problem that employers keep an eye on changes in LinkedIn profiles via 3rd parties. This, indeed, can be an issue, but there are ways to manage this within LinkedIn: your public profile settings2.\nIn my case, this was set to visible to everyone for years, and by the time I had to set it up (again: years), it was working as intended. 
But a few days ago, to my surprise, visiting my profile while logged out resulted in this:\n\nLinkedIn showing a paywall-like \u2018authwall\u2019 for profiles set explicitly to public for everyone\nand this:\n$ wget -O- https://www.linkedin.com/in/petermolnareu\n--2018-01-14 10:26:12-- https://www.linkedin.com/in/petermolnareu\nResolving www.linkedin.com (www.linkedin.com)... 91.225.248.129, 2620:109:c00c:104::b93f:9001\nConnecting to www.linkedin.com (www.linkedin.com)|91.225.248.129|:443... connected.\nHTTP request sent, awaiting response... 999 Request denied\n2018-01-14 10:26:12 ERROR 999: Request denied.\nor this:\n$ curl https://www.linkedin.com/in/petermolnareu\n<html><head>\n<script type=\"text/javascript\">\nwindow.onload = function() {\n // Parse the tracking code from cookies.\n var trk = \"bf\";\n var trkInfo = \"bf\";\n var cookies = document.cookie.split(\"; \");\n for (var i = 0; i < cookies.length; ++i) {\n if ((cookies[i].indexOf(\"trkCode=\") == 0) && (cookies[i].length > 8)) {\n trk = cookies[i].substring(8);\n }\n else if ((cookies[i].indexOf(\"trkInfo=\") == 0) && (cookies[i].length > 8)) {\n trkInfo = cookies[i].substring(8);\n }\n }\n\n if (window.location.protocol == \"http:\") {\n // If \"sl\" cookie is set, redirect to https.\n for (var i = 0; i < cookies.length; ++i) {\n if ((cookies[i].indexOf(\"sl=\") == 0) && (cookies[i].length > 3)) {\n window.location.href = \"https:\" + window.location.href.substring(window.location.protocol.length);\n return;\n }\n }\n }\n\n // Get the new domain. 
For international domains such as\n // fr.linkedin.com, we convert it to www.linkedin.com\n var domain = \"www.linkedin.com\";\n if (domain != location.host) {\n var subdomainIndex = location.host.indexOf(\".linkedin\");\n if (subdomainIndex != -1) {\n domain = \"www\" + location.host.substring(subdomainIndex);\n }\n }\n\n window.location.href = \"https://\" + domain + \"/authwall?trk=\" + trk + \"&trkInfo=\" + trkInfo +\n \"&originalReferer=\" + document.referrer.substr(0, 200) +\n \"&sessionRedirect=\" + encodeURIComponent(window.location.href);\n}\n</script>\n</head></html>\nSo I started digging. According to the LinkedIn FAQ3 there is a page where you can set your profile\u2019s public visibility. Those settings, for me, were still set to:\n\nLinkedIn public profile settings\nDespite the settings, there is no public profile for logged out users.\nI\u2019d like to understand what is going on, because so far, this looks like a fat lie from LinkedIn. Hopefully just a bug.\nUPDATE\nI tried setting referrers and user agents, used different IP addresses, still nothing. I can\u2019t type today and managed to mistype https://google.com - the referrer ended up as https:/google.com. So, following the notes on HN, setting a referrer to Google sometimes works. After a few failures it will lock you out again, referrer or not. This is even uglier than if it were a proper authwall for everyone.\ncurl 'https://www.linkedin.com/in/petermolnareu' \\\n-e 'https://google.com/' \\\n-H 'accept-encoding: text' -H \\\n'accept-language: en-US,en;q=0.9,' \\\n-H 'user-agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.132 Safari/537.36'\n<!DOCTYPE html>...\n\n\nhttps://chat.indieweb.org\u21a9\nhttps://www.linkedin.com/public-profile/settings\u21a9\nhttps://www.linkedin.com/help/linkedin/answer/83?query=public\u21a9" }, "name": "LinkedIn is ignoring user settings", "post-type": "article", "_id": "1124853", "_source": "268", "_is_read": true }
{ "type": "entry", "author": { "name": "mail@petermolnar.net (Peter Molnar)", "url": "https://petermolnar.superfeedr.com/", "photo": null }, "url": "https://petermolnar.net/internet-emotional-core/", "published": "2018-03-25T22:20:00+01:00", "content": { "html": "<p>There is a video out there, titled The Fall of The Simpsons: How it Happened<a href=\"https://petermolnar.superfeedr.com/#fn1\">1</a>. It starts by introducing a mediocre show that airs every night, called \u201cThe Simpsons\u201d, and compares it to a genius cartoon, that used to air in the early 90s, called \u201cThe Simpsons\u201d. <em>Watch the video, because it\u2019s good, and I\u2019m about to use its conclusion</em>.</p>\n<p>It reckons that the tremendous difference is due to shrinking layers in jokes, and, more importantly, in the characters after season 7. I believe something similar happened online, which made the Internet become the internet.</p>\n<p>Many moons ago, while still living in London, the pedal of our flatmate\u2019s sewing machine broke down, and I started digging for replacement parts for her. I stumbled upon a detailed website about ancient capacitors<a href=\"https://petermolnar.superfeedr.com/#fn2\">2</a>. It resembled other, gorgeous sources of knowledge: one of my all time favourites is leofoo\u2019s site on historical Nikon equipment<a href=\"https://petermolnar.superfeedr.com/#fn3\">3</a>. All decades old sites, containing specialist level knowledge on topics that used to be found only in books in dusty corners of forgotten libraries.</p>\n<p>There\u2019s an interesting article about how chronological ordering destroyed the original way of curating content<a href=\"https://petermolnar.superfeedr.com/#fn4\">4</a> during the early online era, and I think the article got many things right. Try to imagine a slow web: slow connection, slow updates, slow everything. Take away social networks - no Twitter, no Facebook. 
Forget news aggregators: no more Hacker News or Reddit, not even Technorati. Grab your laptop and put it down on a desk, preferably in a corner - you\u2019re not allowed to move it. Use the HTML version of DuckDuckGo<a href=\"https://petermolnar.superfeedr.com/#fn5\">5</a> to search, and navigate with links from one site to another. That\u2019s what it was like; surfing on the <em>information highway</em>, and if you really want to experience it, UbuWeb<a href=\"https://petermolnar.superfeedr.com/#fn6\">6</a> will allow you to do so.</p>\n<p>Most of the content was hand crafted, arranged to be readable, not searchable; it was human first, not machine first. Nearly everything online had a lot of effort put into it, even if the result was eye-blowing red text on blue background<a href=\"https://petermolnar.superfeedr.com/#fn7\">7</a>; somebody worked a lot on it. If you wanted it out there you learnt HTML, how to use FTP, how to link, how to format your page.</p>\n<p>We used to have homepages. Homes on the Internet. <em>Not profiles, no; a profile is something the authorities make about you in a dossier.</em></p>\n<p>6 years ago Anil Dash released a video, \u201cThe web we lost\u201d<a href=\"https://petermolnar.superfeedr.com/#fn8\">8</a> and lamented the web 2.0 - <em>I despise this phrase; a horrible buzzword everyone used to label anything with; if you put \u2018cloud\u2019 and \u2018blockchain\u2019 together, you\u2019ll get the level of buzz that was \u2018web 2.0\u2019</em> -, which lost out to social media, but make no mistake: the Internet, the carefully laboured web 1.0, had already gone underground when tools made it simple for anyone to publish with just a few clicks.</p>\n<p>The social web lost against social media, because it didn\u2019t (couldn\u2019t?) keep up with making things even simpler. Always on, always instant, always present. 
It served the purpose of a disposable web perfectly, where the most common goal is to seek fame, attention, to follow trends, to gain followers.</p>\n<p>There are people who never gave up, and are still tirelessly building tools, protocols, ideas, to lead people out of social media. The IndieWeb<a href=\"https://petermolnar.superfeedr.com/#fn9\">9</a>\u2019s goals are simple: own your data, have an online home, and connect with others through this. And so it\u2019s completely reasonable to hear:</p>\n<blockquote>\n<p>I want blogging to be as easy as tweeting.<a href=\"https://petermolnar.superfeedr.com/#fn10\">10</a></p>\n</blockquote>\n<p>But\u2026 what will this really achieve? This may sound rude and elitist, but the more I think about it the more I believe: the true way out of the swamp of social media is for things to require a little effort.</p>\n<p>To make people think about what they produce, to make them connect to their online content. It\u2019s like IKEA<a href=\"https://petermolnar.superfeedr.com/#fn11\">11</a>: once you put time, and a minor amount of sweat - or swearing - into it, it\u2019ll feel more yours, than something comfortably delivered.</p>\n<p>The Internet is still present, but it\u2019s shrinking. Content people really care about, customised looking homepages, carefully curated photo galleries are all diminishing. It would be fantastic to return to a world of personal websites, but that needs the love and work that used to be put into them, just like 20 years ago.</p>\n<p>At this point in time, most people don\u2019t seem to relate to their online content. It\u2019s expendable. 
We need to make them care about it, and simpler tooling, on its own, will not help with the lack of emotional connection.</p>\n\n\n<ol><li><p><a href=\"https://www.youtube.com/watch?v=KqFNbCcyFkk\">https://www.youtube.com/watch?v=KqFNbCcyFkk</a><a href=\"https://petermolnar.superfeedr.com/#fnref1\">\u21a9</a></p></li>\n<li><p><a href=\"http://www.vintage-radio.com/repair-restore-information/valve_capacitors.html\">http://www.vintage-radio.com/repair-restore-information/valve_capacitors.html</a><a href=\"https://petermolnar.superfeedr.com/#fnref2\">\u21a9</a></p></li>\n<li><p><a href=\"http://www.mir.com.my/rb/photography/\">http://www.mir.com.my/rb/photography/</a><a href=\"https://petermolnar.superfeedr.com/#fnref3\">\u21a9</a></p></li>\n<li><p><a href=\"https://stackingthebricks.com/how-blogs-broke-the-web/\">https://stackingthebricks.com/how-blogs-broke-the-web/</a><a href=\"https://petermolnar.superfeedr.com/#fnref4\">\u21a9</a></p></li>\n<li><p><a href=\"https://duckduckgo.com/html/\">https://duckduckgo.com/html/</a><a href=\"https://petermolnar.superfeedr.com/#fnref5\">\u21a9</a></p></li>\n<li><p><a href=\"http://www.slate.com/articles/technology/future_tense/2016/12/ubuweb_the_20_year_old_website_that_collects_the_forgotten_and_the_unfamiliar.html\">http://www.slate.com/articles/technology/future_tense/2016/12/ubuweb_the_20_year_old_website_that_collects_the_forgotten_and_the_unfamiliar.html</a><a href=\"https://petermolnar.superfeedr.com/#fnref6\">\u21a9</a></p></li>\n<li><p><a href=\"http://code.divshot.com/geo-bootstrap/\">http://code.divshot.com/geo-bootstrap/</a><a href=\"https://petermolnar.superfeedr.com/#fnref7\">\u21a9</a></p></li>\n<li><p><a href=\"http://anildash.com/2012/12/the-web-we-lost.html\">http://anildash.com/2012/12/the-web-we-lost.html</a><a href=\"https://petermolnar.superfeedr.com/#fnref8\">\u21a9</a></p></li>\n<li><p><a href=\"https://indieweb.org/\">https://indieweb.org</a><a 
href=\"https://petermolnar.superfeedr.com/#fnref9\">\u21a9</a></p></li>\n<li><p><a href=\"http://www.manton.org/2018/03/indieweb-generation-4-and-hosted-domains.html\">http://www.manton.org/2018/03/indieweb-generation-4-and-hosted-domains.html</a><a href=\"https://petermolnar.superfeedr.com/#fnref10\">\u21a9</a></p></li>\n<li><p><a href=\"https://en.wikipedia.org/wiki/IKEA_effect\">https://en.wikipedia.org/wiki/IKEA_effect</a><a href=\"https://petermolnar.superfeedr.com/#fnref11\">\u21a9</a></p></li>\n</ol>", "text": "There is a video out there, titled The Fall of The Simpsons: How it Happened1. It starts by introducing a mediocre show that airs every night, called \u201cThe Simpsons\u201d, and compares it to a genius cartoon, that used to air in the early 90s, called \u201cThe Simpsons\u201d. Watch the video, because it\u2019s good, and I\u2019m about to use its conclusion.\nIt reckons that the tremendous difference is due to shrinking layers in jokes, and, more importantly, in the characters after season 7. I believe something similar happened online, which made the Internet become the internet.\nMany moons ago, while still living in London, the pedal of our flatmate\u2019s sewing machine broke down, and I started digging for replacement parts for her. I stumbled upon a detailed website about ancient capacitors2. It resembled other, gorgeous sources of knowledge: one of my all time favourites is leofoo\u2019s site on historical Nikon equipment3. All decades old sites, containing specialist level knowledge on topics that used to be found only in books in dusty corners of forgotten libraries.\nThere\u2019s an interesting article about how chronological ordering destroyed the original way of curating content4 during the early online era, and I think the article got many things right. Try to imagine a slow web: slow connection, slow updates, slow everything. Take away social networks - no Twitter, no Facebook. 
Forget news aggregators: no more Hacker News or Reddit, not even Technorati. Grab your laptop and put it down on a desk, preferably in a corner - you\u2019re not allowed to move it. Use the HTML version of DuckDuckGo5 to search, and navigate with links from one site to another. That\u2019s what it was like; surfing on the information highway, and if you really want to experience it, UbuWeb6 will allow you to do so.\nMost of the content was hand crafted, arranged to be readable, not searchable; it was human first, not machine first. Nearly everything online had a lot of effort put into it, even if the result was eye-blowing red text on blue background7; somebody worked a lot on it. If you wanted it out there you learnt HTML, how to use FTP, how to link, how to format your page.\nWe used to have homepages. Homes on the Internet. Not profiles, no; a profile is something the authorities make about you in a dossier.\n6 years ago Anil Dash released a video, \u201cThe web we lost\u201d8 and lamented the web 2.0 - I despise this phrase; a horrible buzzword everyone used to label anything with; if you put \u2018cloud\u2019 and \u2018blockchain\u2019 together, you\u2019ll get the level of buzz that was \u2018web 2.0\u2019 -, which lost out to social media, but make no mistake: the Internet, the carefully laboured web 1.0, had already gone underground when tools made it simple for anyone to publish with just a few clicks.\nThe social web lost against social media, because it didn\u2019t (couldn\u2019t?) keep up with making things even simpler. Always on, always instant, always present. It served the purpose of a disposable web perfectly, where the most common goal is to seek fame, attention, to follow trends, to gain followers.\nThere are people who never gave up, and are still tirelessly building tools, protocols, ideas, to lead people out of social media. The IndieWeb9\u2019s goals are simple: own your data, have an online home, and connect with others through this. 
And so it\u2019s completely reasonable to hear:\n\nI want blogging to be as easy as tweeting.10\n\nBut\u2026 what will this really achieve? This may sound rude and elitist, but the more I think about it the more I believe: the true way out of the swamp of social media is for things to require a little effort.\nTo make people think about what they produce, to make them connect to their online content. It\u2019s like IKEA11: once you put time, and a minor amount of sweat - or swearing - into it, it\u2019ll feel more yours, than something comfortably delivered.\nThe Internet is still present, but it\u2019s shrinking. Content people really care about, customised looking homepages, carefully curated photo galleries are all diminishing. It would be fantastic to return to a world of personal websites, but that needs the love and work that used to be put into them, just like 20 years ago.\nAt this point in time, most people don\u2019t seem to relate to their online content. It\u2019s expendable. We need to make them care about it, and simpler tooling, on its own, will not help with the lack of emotional connection.\n\n\nhttps://www.youtube.com/watch?v=KqFNbCcyFkk\u21a9\nhttp://www.vintage-radio.com/repair-restore-information/valve_capacitors.html\u21a9\nhttp://www.mir.com.my/rb/photography/\u21a9\nhttps://stackingthebricks.com/how-blogs-broke-the-web/\u21a9\nhttps://duckduckgo.com/html/\u21a9\nhttp://www.slate.com/articles/technology/future_tense/2016/12/ubuweb_the_20_year_old_website_that_collects_the_forgotten_and_the_unfamiliar.html\u21a9\nhttp://code.divshot.com/geo-bootstrap/\u21a9\nhttp://anildash.com/2012/12/the-web-we-lost.html\u21a9\nhttps://indieweb.org\u21a9\nhttp://www.manton.org/2018/03/indieweb-generation-4-and-hosted-domains.html\u21a9\nhttps://en.wikipedia.org/wiki/IKEA_effect\u21a9" }, "name": "The internet that took over the Internet", "post-type": "article", "_id": "1124858", "_source": "268", "_is_read": true }
{ "type": "entry", "author": { "name": "mail@petermolnar.net (Peter Molnar)", "url": "https://petermolnar.superfeedr.com/", "photo": null }, "url": "https://petermolnar.net/running-a-static-indieweb-site/", "published": "2018-08-07T18:33:00+01:00", "content": { "html": "<p>In 2016, I decided to leave WordPress behind. Some of their philosophy, mostly the \u201cdecisions, not options\u201d, started to leave the trail I thought to be the right one, but on its own, that wouldn\u2019t have been enough: I had a painful experience with media handling hooks, which were respected on the frontend, and not on the backend, at which point, after staring at the backend code for days, I made up my mind: let\u2019s write a static generator.</p>\n<p>This was strictly scratching my own itches<a href=\"https://petermolnar.superfeedr.com/#fn1\">1</a>: I wanted to learn Python, but keep using tools, like exiftool and Pandoc, so instead of getting an off-the-shelf solution, I did actually write my own \u201cstatic generator\u201d - in the end, it\u2019s a glorified script.</p>\n<p>Since the initial idea, I rewrote that script nearly 4 times, mainly to try out language features, async workers for processing, etc, and I\u2019ve learnt a few things in the process. It is called NASG - short for \u2018not another static generator\u2019, and it lives on Github<a href=\"https://petermolnar.superfeedr.com/#fn2\">2</a>, if anyone wants to see it.</p>\n<p>Here are my learnings.</p>\n<h2>Learning to embrace \u201cbuying in\u201d</h2>\n<h3>webmentions</h3>\n<p>I made a small Python daemon to handle certain requests; one of these routes was to handle incoming webmentions<a href=\"https://petermolnar.superfeedr.com/#fn3\">3</a>. It merely put the requests in a queue - apart from some initial sanity checks on the POST request itself -, but it still needed a dynamic part.</p>\n<p>This approach also required parsing the source websites on build. 
After countless iterations - changing parsing libraries, first within Python, then using XRay<a href=\"https://petermolnar.superfeedr.com/#fn4\">4</a> - I had a completely unrelated talk with a fellow sysadmin on how bad we are when it comes to \u201cbuying into\u201d a solution. Basically, if you feel like you can do it yourself, it\u2019s rather hard for us to pay someone - instead we tend to learn it and just do it, be it piping in the house or sensor automation.</p>\n<p>None of these - webmentions, syndication, websub - are vital for my site. Do I really need to handle all of them myself? If I make sure I can replace them, if the service goes out of business, why not use them?</p>\n<p>With that in mind, I decided to use webmention.io<a href=\"https://petermolnar.superfeedr.com/#fn5\">5</a> as my incoming webmention handler (<em>it even gave pingback support back</em>). I ask the service for any new comments on build, save them as YAML + Markdown, so the next time I only need to parse the new ones.</p>\n<p>To send webmentions, Telegraph<a href=\"https://petermolnar.superfeedr.com/#fn6\">6</a> is a nice, simple service that offers API access, so you don\u2019t have to deal with webmention endpoint discovery. I put down a text file with slugified names of the source and target URLs, to prevent re-sending the mention every time.</p>\n<h3>websub</h3>\n<p>In the case of websub<a href=\"https://petermolnar.superfeedr.com/#fn7\">7</a>, superfeedr<a href=\"https://petermolnar.superfeedr.com/#fn8\">8</a> does the job quite well.</p>\n<h3>syndication</h3>\n<p>For syndication, I decided to go with IFTTT<a href=\"https://petermolnar.superfeedr.com/#fn9\">9</a> and brid.gy publish<a href=\"https://petermolnar.superfeedr.com/#fn10\">10</a>. 
IFTTT reads my RSS feed(s) and either creates link-only posts on WordPress<a href=\"https://petermolnar.superfeedr.com/#fn11\">11</a> and Tumblr<a href=\"https://petermolnar.superfeedr.com/#fn12\">12</a>, or sends webmentions to brid.gy to publish links to Twitter<a href=\"https://petermolnar.superfeedr.com/#fn13\">13</a> and complete photos to Flickr<a href=\"https://petermolnar.superfeedr.com/#fn14\">14</a>.</p>\n<p>I ended up outsourcing my newsletter as well. Years ago I sent a mail around to friends to ask them if they wanted updates from my site via mail; a few of them did. Unfortunately Google started putting these in either Spam or Promotions, so they never reached people; the very same happened with Blogtrottr<a href=\"https://petermolnar.superfeedr.com/#fn15\">15</a> mails. To overcome this, I set up a Google Group, where only my Gmail account can post, but anyone can subscribe, and another IFTTT hook<a href=\"https://petermolnar.superfeedr.com/#fn16\">16</a> that sends mails to that group with the contents of anything new in my RSS feed.</p>\n<h2>Search: keep it server side</h2>\n<p>I spent days looking for a way to integrate JavaScript based search (lunr.js or elasticlunr.js) in my site. I went as far as embedding JS in Python to pre-populate a search index - but to my horror, that index was 7.8MB at its smallest size.</p>\n<p>It turns out that the simplest solution is what I already had: SQLite, but it needed some alterations.</p>\n<p>The initial solution required a small Python daemon to run in the background and spit extremely simple results back for a query. Besides the trouble of running another daemon, it needed a copy of the nasg git tree for the templates, a virtualenv for sanic (the HTTP server engine I used), Jinja2 (templating), and a few other bits.</p>\n<p>However, there is a simpler, yet uglier solution. 
Nearly every webserver out in the wild has PHP support these days, including mine, because I\u2019m still running WordPress for friends and family.</p>\n<p>To overcome the problem, I made a Jinja2 template that creates a PHP file, which reads (read-only) the SQLite file I pre-populate with the search corpus during build. Unfortunately it\u2019s PHP 7.0, so instead of the FTS5 engine, I had to step back and use the FTS4 - still good enough. Apart from a plain, dead simple PHP engine that has SQLite support, there is no need for anything else, and because the SQLite file is read-only, there\u2019s no lock-collision issue either.</p>\n<h2>About those markup languages\u2026</h2>\n<h3>YAML can get messy</h3>\n<p>I went with the most common post format for static sites: YAML metadata + Markdown. Soon I started seeing weird errors with \u2019 and \" characters, so I dug into the YAML specification - don\u2019t do it, it\u2019s a hell dimension. There is a subset of YAML, titled StrictYAML<a href=\"https://petermolnar.superfeedr.com/#fn17\">17</a> to address some of these problems, but the short summary is: YAML or not, try to use as simple a markup as possible, and be consistent.</p>\n<pre><code>title: post title\nsummary: single-line long summary\npublished: 2018-08-07T10:00:00+00:00\ntags:\n- indieweb\nsyndicate:\n- https://something.com/xyz</code></pre>\n<p>If one decides to use lists by newline and <code>-</code>, stick to that. No inline <code>[]</code> lists, no spaced <code>-</code> prefix; be consistent.</p>\n<p>The same applies to dates and times. While I thought the \u201ccorrect\u201d date format is ISO 8601, that turned out to be a subset of it, named RFC 3339<a href=\"https://petermolnar.superfeedr.com/#fn18\">18</a>. 
Unfortunately I started using <code>+0000</code> format instead of <code>+00:00</code> from the beginning, so I\u2019ll stick to that.</p>\n<h3>Markdown can also get messy</h3>\n<p>There are valid arguments against Markdown<a href=\"https://petermolnar.superfeedr.com/#fn19\">19</a>, so before choosing that as my main format, I tested as many as I could<a href=\"https://petermolnar.superfeedr.com/#fn20\">20</a> - in the end, I decided to stick to an extended version of Markdown, because that is still the closest-to-plain-text for my eyes. I also found Typora, which is a very nice Markdown WYSIWYG editor<a href=\"https://petermolnar.superfeedr.com/#fn21\">21</a>. <em>Yes, unfortunately, it\u2019s electron based. I\u2019ll swallow this frog for now.</em></p>\n<p>The \u201cextensions\u201d I use with Markdown:</p>\n<ul><li>footnotes - <em>my links are footnotes, so they can be printed</em></li>\n<li>pipe_tables</li>\n<li>strikeout - <em><del>cause it\u2019s useful for snarky lines</del></em></li>\n<li>raw_html</li>\n<li>definition_lists - <em>they are useful, and they were also present on the very first website ever</em></li>\n<li>backtick_code_blocks - <em>``` type code blocks</em></li>\n<li>fenced_code_attributes - <em>language tag for code blocks</em></li>\n<li>lists_without_preceding_blankline</li>\n<li>autolink_bare_uris - <em>otherwise my URLs in the footnotes are mere text</em></li>\n</ul><p>I\u2019ve tried using the Python Markdown module; the end result was utterly broken HTML when I had code blocks with regexes that collided with the regexes Python Markdown was using. I tried the Python markdown2 module - it worked better, but didn\u2019t support language tags for code blocks.</p>\n<p>In the end, I went back to where I started: Pandoc<a href=\"https://petermolnar.superfeedr.com/#fn22\">22</a>. 
The regeneration of the whole site takes ~60 seconds instead of ~20s with markdown2, but it doesn\u2019t really matter - it\u2019s still fast.</p>\n<pre><code>pandoc --to=html5 --quiet --no-highlight --from=markdown+footnotes+pipe_tables+strikeout+raw_html+definition_lists+backtick_code_blocks+fenced_code_attributes+lists_without_preceding_blankline+autolink_bare_uris</code></pre>\n<p>The takeaway is the same as with YAML: make your own ruleset and stick to it; don\u2019t mix other flavours in.</p>\n<h3>Syntax highlighting is really messy</h3>\n<p>Pandoc has a built-in syntax highlighting method; so does the Python Markdown module (via Codehilite).</p>\n<p>I have some entries that can break both, and break them badly.</p>\n<p>Besides being breakable, Codehilite is VERBOSE. At a certain point, it managed to add 60KB of HTML markup to my text.</p>\n<p>A long while ago I tried to completely eliminate JavaScript from my site, because I\u2019m tired of the current trends. However, JS has its place, especially as a progressive enhancement<a href=\"https://petermolnar.superfeedr.com/#fn23\">23</a>.</p>\n<p>With that in mind, I went back to the solution that has worked the best so far: prism.js<a href=\"https://petermolnar.superfeedr.com/#fn24\">24</a>. The difference this time is that I only add it when there is a code block with a language property, and I inline the whole JS block in the page - the \u2018developer\u2019 version, supporting a lot of languages, weighs around 58KB, which is a lot, but it works very nicely, and it\u2019s very fast.</p>\n<p>No JS only means no syntax highlighting, but at least my HTML code stays readable, unlike with CodeHilite.</p>\n<h2>Summary</h2>\n<p>Static sites come with compromises when it comes to interactions, be that webmentions, search, or pubsub. They need either external services, or some simple, dynamic parts.</p>\n<p>If you do go with dynamic, try to keep it as simple as possible. 
If the webserver has PHP support, avoid adding a Python daemon and use that PHP instead.</p>\n<p>There are very good, completely free services out there, run by <del>mad scientists</del> enthusiasts, like webmention.io and brid.gy. It\u2019s perfectly fine to use them.</p>\n<p>Keep your markup consistent and don\u2019t deviate from the feature set you really need.</p>\n<p>JavaScript has its place, and prism.js is potentially the nicest syntax highlighter currently available for the web.</p>\n\n\n<ol><li><p><a href=\"https://indieweb.org/scratch_your_own_itch\">https://indieweb.org/scratch_your_own_itch</a><a href=\"https://petermolnar.superfeedr.com/#fnref1\">\u21a9</a></p></li>\n<li><p><a href=\"https://github.com/petermolnar/nasg/\">https://github.com/petermolnar/nasg/</a><a href=\"https://petermolnar.superfeedr.com/#fnref2\">\u21a9</a></p></li>\n<li><p><a href=\"http://indieweb.org/webmention\">http://indieweb.org/webmention</a><a href=\"https://petermolnar.superfeedr.com/#fnref3\">\u21a9</a></p></li>\n<li><p><a href=\"https://github.com/aaronpk/xray\">https://github.com/aaronpk/xray</a><a href=\"https://petermolnar.superfeedr.com/#fnref4\">\u21a9</a></p></li>\n<li><p><a href=\"https://webmention.io/\">https://webmention.io/</a><a href=\"https://petermolnar.superfeedr.com/#fnref5\">\u21a9</a></p></li>\n<li><p><a href=\"http://telegraph.p3k.io/\">http://telegraph.p3k.io/</a><a href=\"https://petermolnar.superfeedr.com/#fnref6\">\u21a9</a></p></li>\n<li><p><a href=\"https://indieweb.org/websub\">https://indieweb.org/websub</a><a href=\"https://petermolnar.superfeedr.com/#fnref7\">\u21a9</a></p></li>\n<li><p><a href=\"https://superfeedr.com/\">https://superfeedr.com/</a><a href=\"https://petermolnar.superfeedr.com/#fnref8\">\u21a9</a></p></li>\n<li><p><a href=\"http://ifttt.com/\">http://ifttt.com/</a><a href=\"https://petermolnar.superfeedr.com/#fnref9\">\u21a9</a></p></li>\n<li><p><a 
href=\"https://brid.gy/about#publishing\">https://brid.gy/about#publishing</a><a href=\"https://petermolnar.superfeedr.com/#fnref10\">\u21a9</a></p></li>\n<li><p><a href=\"https://ifttt.com/applets/83096071d-syndicate-to-wordpress-com\">https://ifttt.com/applets/83096071d-syndicate-to-wordpress-com</a><a href=\"https://petermolnar.superfeedr.com/#fnref11\">\u21a9</a></p></li>\n<li><p><a href=\"https://ifttt.com/applets/83095945d-syndicate-to-tumblr\">https://ifttt.com/applets/83095945d-syndicate-to-tumblr</a><a href=\"https://petermolnar.superfeedr.com/#fnref12\">\u21a9</a></p></li>\n<li><p><a href=\"https://ifttt.com/applets/83095698d-syndicate-to-brid-gy-twitter-publish\">https://ifttt.com/applets/83095698d-syndicate-to-brid-gy-twitter-publish</a><a href=\"https://petermolnar.superfeedr.com/#fnref13\">\u21a9</a></p></li>\n<li><p><a href=\"https://ifttt.com/applets/83095735d-syndicate-to-brid-gy-publish-flickr\">https://ifttt.com/applets/83095735d-syndicate-to-brid-gy-publish-flickr</a><a href=\"https://petermolnar.superfeedr.com/#fnref14\">\u21a9</a></p></li>\n<li><p><a href=\"https://blogtrottr.com/\">https://blogtrottr.com/</a><a href=\"https://petermolnar.superfeedr.com/#fnref15\">\u21a9</a></p></li>\n<li><p><a href=\"https://ifttt.com/applets/83095496d-syndicate-to-petermolnarnet-googlegroups-com\">https://ifttt.com/applets/83095496d-syndicate-to-petermolnarnet-googlegroups-com</a><a href=\"https://petermolnar.superfeedr.com/#fnref16\">\u21a9</a></p></li>\n<li><p><a href=\"http://hitchdev.com/strictyaml/features-removed/\">http://hitchdev.com/strictyaml/features-removed/</a><a href=\"https://petermolnar.superfeedr.com/#fnref17\">\u21a9</a></p></li>\n<li><p><a href=\"https://en.wikipedia.org/wiki/RFC_3339\">https://en.wikipedia.org/wiki/RFC_3339</a><a href=\"https://petermolnar.superfeedr.com/#fnref18\">\u21a9</a></p></li>\n<li><p><a href=\"https://indieweb.org/markdown#Criticism\">https://indieweb.org/markdown#Criticism</a><a 
href=\"https://petermolnar.superfeedr.com/#fnref19\">\u21a9</a></p></li>\n<li><p><a href=\"https://en.wikipedia.org/wiki/List_of_lightweight_markup_languages\">https://en.wikipedia.org/wiki/List_of_lightweight_markup_languages</a><a href=\"https://petermolnar.superfeedr.com/#fnref20\">\u21a9</a></p></li>\n<li><p><a href=\"http://typora.io/\">http://typora.io/</a><a href=\"https://petermolnar.superfeedr.com/#fnref21\">\u21a9</a></p></li>\n<li><p><a href=\"http://pandoc.org/MANUAL.html#pandocs-markdown\">http://pandoc.org/MANUAL.html#pandocs-markdown</a><a href=\"https://petermolnar.superfeedr.com/#fnref22\">\u21a9</a></p></li>\n<li><p><a href=\"https://en.wikipedia.org/wiki/Progressive_enhancement\">https://en.wikipedia.org/wiki/Progressive_enhancement</a><a href=\"https://petermolnar.superfeedr.com/#fnref23\">\u21a9</a></p></li>\n<li><p><a href=\"https://prismjs.com/\">https://prismjs.com/</a><a href=\"https://petermolnar.superfeedr.com/#fnref24\">\u21a9</a></p></li>\n</ol>", "text": "In 2016, I decided to leave WordPress behind. Some of their philosophy, mostly the \u201cdecisions, not options\u201d started to leave the trail I thought to be the right one, but on it\u2019s own, that wouldn\u2019t have been enough: I had a painful experience with media handling hooks, which were respected on the frontend, and not on the backend, at which point, after staring at the backend code for days, I made up my mind: let\u2019s write a static generator.\nThis was strictly scratching my own itches1: I wanted to learn Python, but keep using tools, like exiftool and Pandoc, so instead of getting an off the shelf solution, I did actually write my own \u201cstatic generator\u201d - in the end, it\u2019s a glorified script.\nSince the initial idea, I rewrote that script nearly 4 times, mainly to try out language features, async workers for processing, etc, and I\u2019ve learnt a few things in the process. 
It is called NASG - short for \u2018not another static generator\u2019 - and it lives on Github2, if anyone wants to see it.\nHere are my learnings.\nLearning to embrace \u201cbuying in\u201d\nwebmentions\nI made a small Python daemon to handle certain requests; one of these routes was to handle incoming webmentions3. It merely put the requests in a queue - apart from some initial sanity checks on the POST request itself - but it still needed a dynamic part.\nThis approach also required parsing the source websites on build. After countless iterations - changing parsing libraries, first within Python, then using XRay4 - I had a completely unrelated talk with a fellow sysadmin on how bad we are when it comes to \u201cbuying into\u201d a solution. Basically, if we feel like we can do something ourselves, it\u2019s rather hard to pay someone for it - instead we tend to learn it and just do it, be it piping in the house or sensor automation.\nNone of these - webmentions, syndication, websub - are vital for my site. Do I really need to handle all of them myself? If I make sure I can replace them if the service goes out of business, why not use them?\nWith that in mind, I decided to use webmention.io5 as my incoming webmention handler (it even brought pingback support back). I ask the service for any new comments on build and save them as YAML + Markdown, so the next time I only need to parse the new ones.\nTo send webmentions, Telegraph6 is a nice, simple service that offers API access, so you don\u2019t have to deal with webmention endpoint discovery. I put down a text file with the slugified names of the source and target URLs, to avoid re-sending the same mention every time.\nwebsub\nIn the case of websub7, superfeedr8 does the job quite well.\nsyndication\nFor syndication, I decided to go with IFTTT9 brid.gy publish10. 
IFTTT reads my RSS feed(s) and either creates link-only posts on WordPress11 and Tumblr12, or sends webmentions to brid.gy to publish links to Twitter13 and complete photos to Flickr14.\nI ended up outsourcing my newsletter as well. Years ago I sent a mail around to friends to ask them if they wanted updates from my site by mail; a few of them did. Unfortunately Google started putting these in either Spam or Promotions, so they never reached people; the very same happened with Blogtrottr15 mails. To overcome this, I set up a Google Group, where only my Gmail account can post, but anyone can subscribe, and another IFTTT hook16 that sends mails to that group with the contents of anything new in my RSS feed.\nSearch: keep it server side\nI spent days looking for a way to integrate JavaScript based search (lunr.js or elasticlunr.js) in my site. I went as far as embedding JS in Python to pre-populate a search index - but to my horror, that index was 7.8MB at its smallest size.\nIt turns out that the simplest solution is what I already had: SQLite, but it needed some alterations.\nThe initial solution required a small Python daemon to run in the background and spit extremely simple results back for a query. Besides the trouble of running another daemon, it needed a copy of the nasg git tree for the templates, a virtualenv for sanic (the HTTP server engine I used), Jinja2 (templating), and a few other bits.\nHowever, there is a simpler, yet uglier solution. Nearly every webserver out in the wild has PHP support these days, including mine, because I\u2019m still running WordPress for friends and family.\nTo overcome the problem, I made a Jinja2 template that creates a PHP file, which does read-only queries against the SQLite file I pre-populate with the search corpus during build. Unfortunately it\u2019s PHP 7.0, so instead of the FTS5 engine, I had to step back and use FTS4 - still good enough. 
Apart from a plain, dead simple PHP engine that has SQLite support, there is no need for anything else, and because the SQLite file is read-only, there\u2019s no lock-collision issue either.\nAbout those markup languages\u2026\nYAML can get messy\nI went with the most common post format for static sites: YAML metadata + Markdown. Soon I started seeing weird errors with \u2019 and \" characters, so I dug into the YAML specification - don\u2019t do it, it\u2019s a hell dimension. There is a subset of YAML, titled StrictYAML17, to address some of these problems, but the short summary is: YAML or not, try to use markup that is as simple as possible, and be consistent.\ntitle: post title\nsummary: single-line long summary\npublished: 2018-08-07T10:00:00+00:00\ntags:\n- indieweb\nsyndicate:\n- https://something.com/xyz\nIf one decides to use lists with newlines and -, stick to that. No inline [] lists, no spaced - prefix; be consistent.\nThe same applies to dates and times. While I thought the \u201ccorrect\u201d date format was ISO 8601, what I was actually using turned out to be a subset of it, named RFC 333918. Unfortunately I started using the +0000 format instead of +00:00 from the beginning, so I\u2019ll stick to that.\nMarkdown can also get messy\nThere are valid arguments against Markdown19, so before choosing it as my main format, I tested as many alternatives as I could20 - in the end, I decided to stick to an extended version of Markdown, because that is still the closest-to-plain-text for my eyes. I also found Typora, which is a very nice Markdown WYSIWYG editor21. Yes, unfortunately, it\u2019s Electron-based. 
I\u2019ll swallow this frog for now.\nThe \u201cextensions\u201d I use with Markdown:\nfootnotes - my links are footnotes, so they can be printed\npipe_tables\nstrikeout - cause it\u2019s useful for snarky lines\nraw_html\ndefinition_lists - they are useful, and they were also present on the very first website ever\nbacktick_code_blocks - ``` type code blocks\nfenced_code_attributes - language tags for code blocks\nlists_without_preceding_blankline\nautolink_bare_uris - otherwise the URLs in my footnotes are mere text\nI\u2019ve tried the Python Markdown module; the end result was utterly broken HTML when I had code blocks with regexes that collided with the regexes Python Markdown itself uses. I also tried the Python markdown2 module - it worked better, but didn\u2019t support language tags for code blocks.\nIn the end, I went back to where I started: Pandoc22. The regeneration of the whole site takes ~60 seconds instead of ~20s with markdown2, but it doesn\u2019t really matter - it\u2019s still fast.\npandoc --to=html5 --quiet --no-highlight --from=markdown+footnotes+pipe_tables+strikeout+raw_html+definition_lists+backtick_code_blocks+fenced_code_attributes+lists_without_preceding_blankline+autolink_bare_uris\nThe takeaway is the same as with YAML: make your own ruleset and stick to it; don\u2019t mix other flavours in.\nSyntax highlighting is really messy\nPandoc has a built-in syntax highlighting method; so does the Python Markdown module (via Codehilite).\nI have some entries that can break both, and break them badly.\nBesides being breakable, Codehilite is VERBOSE. At a certain point, it managed to add 60KB of HTML markup to my text.\nA long while ago I tried to completely eliminate JavaScript from my site, because I\u2019m tired of the current trends. 
However, JS has its place, especially as a progressive enhancement23.\nWith that in mind, I went back to the solution that has worked the best so far: prism.js24. The difference this time is that I only add it when there is a code block with a language property, and I inline the whole JS block in the page - the \u2018developer\u2019 version, supporting a lot of languages, weighs around 58KB, which is a lot, but it works very nicely, and it\u2019s very fast.\nNo JS only means no syntax highlighting, but at least my HTML code stays readable, unlike with CodeHilite.\nSummary\nStatic sites come with compromises when it comes to interactions, be that webmentions, search, or pubsub. They need either external services, or some simple, dynamic parts.\nIf you do go with dynamic, try to keep it as simple as possible. If the webserver has PHP support, avoid adding a Python daemon and use that PHP instead.\nThere are very good, completely free services out there, run by mad scientists enthusiasts, like webmention.io and brid.gy. 
It\u2019s perfectly fine to use them.\nKeep your markup consistent and don\u2019t deviate from the feature set you really need.\nJavaScript has its place, and prism.js is potentially the nicest syntax highlighter currently available for the web.\n\n\nhttps://indieweb.org/scratch_your_own_itch\u21a9\nhttps://github.com/petermolnar/nasg/\u21a9\nhttp://indieweb.org/webmention\u21a9\nhttps://github.com/aaronpk/xray\u21a9\nhttps://webmention.io/\u21a9\nhttp://telegraph.p3k.io/\u21a9\nhttps://indieweb.org/websub\u21a9\nhttps://superfeedr.com/\u21a9\nhttp://ifttt.com/\u21a9\nhttps://brid.gy/about#publishing\u21a9\nhttps://ifttt.com/applets/83096071d-syndicate-to-wordpress-com\u21a9\nhttps://ifttt.com/applets/83095945d-syndicate-to-tumblr\u21a9\nhttps://ifttt.com/applets/83095698d-syndicate-to-brid-gy-twitter-publish\u21a9\nhttps://ifttt.com/applets/83095735d-syndicate-to-brid-gy-publish-flickr\u21a9\nhttps://blogtrottr.com/\u21a9\nhttps://ifttt.com/applets/83095496d-syndicate-to-petermolnarnet-googlegroups-com\u21a9\nhttp://hitchdev.com/strictyaml/features-removed/\u21a9\nhttps://en.wikipedia.org/wiki/RFC_3339\u21a9\nhttps://indieweb.org/markdown#Criticism\u21a9\nhttps://en.wikipedia.org/wiki/List_of_lightweight_markup_languages\u21a9\nhttp://typora.io/\u21a9\nhttp://pandoc.org/MANUAL.html#pandocs-markdown\u21a9\nhttps://en.wikipedia.org/wiki/Progressive_enhancement\u21a9\nhttps://prismjs.com/\u21a9" }, "name": "Lessons of running a (semi) static, Indieweb-friendly site for 2 years", "post-type": "article", "_id": "1124868", "_source": "268", "_is_read": true }
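The search setup the entry above describes - an SQLite full-text corpus pre-populated at build time and queried read-only per request - can be sketched in a few lines. This is a hypothetical stand-in, not the author's code: the real file is queried from PHP, while here plain Python and an in-memory database are used, and the table and column names are made up. The FTS4 MATCH query is the same idea, though:

```python
import sqlite3

# In-memory stand-in for the pre-built, read-only search corpus;
# the real file would be populated once, during the site build.
# Table and column names are illustrative, not the author's schema.
db = sqlite3.connect(":memory:")
db.execute("CREATE VIRTUAL TABLE search USING fts4(url, title, body)")
db.executemany(
    "INSERT INTO search (url, title, body) VALUES (?, ?, ?)",
    [
        ("/static-site", "Static site lessons", "pandoc markdown yaml sqlite"),
        ("/gps-tracking", "GPS tracking", "backitude syncthing gpx"),
    ],
)

def query(term):
    # MATCH runs the FTS4 full-text search; because the corpus is
    # read-only, concurrent requests never contend for a write lock.
    rows = db.execute("SELECT url FROM search WHERE body MATCH ?", (term,))
    return [url for (url,) in rows]
```

Since the post mentions being stuck on FTS4 under PHP 7.0, the virtual table uses fts4 here; on builds where FTS5 is available, only the USING clause would change.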
{ "type": "entry", "author": { "name": "mail@petermolnar.net (Peter Molnar)", "url": "https://petermolnar.superfeedr.com/", "photo": null }, "url": "https://petermolnar.net/location-tracking-without-server/", "published": "2018-09-27T11:05:00+01:00", "content": { "html": "<p>Nearly all self-hosted location tracking Android applications are based on a server-client architecture: the app on the phone collects only a small number of points, if not just one, and sends them to a configured server. Traccar<a href=\"https://petermolnar.superfeedr.com/#fn1\">1</a>, Owntracks<a href=\"https://petermolnar.superfeedr.com/#fn2\">2</a>, etc.</p>\n<p>While this setup is useful, it doesn\u2019t fit my \u201cstatic, unless it hurts\u201d<a href=\"https://petermolnar.superfeedr.com/#fn3\">3</a> approach, and it needs data connectivity, which can be tricky during trips abroad. The rare occasions in rural Scotland and Wales taught me that data connectivity is not omnipresent at all.</p>\n<p>There used to be a magnificent little location tracker which, besides the server-client approach, could store the location data in CSV and KML files locally: Backitude<a href=\"https://petermolnar.superfeedr.com/#fn4\">4</a>. 
The program is gone from the Play store - I have no idea why - but I have a copy of its last APK<a href=\"https://petermolnar.superfeedr.com/#fn5\">5</a>.</p>\n<p>My flow is the following:</p>\n<ul><li>Backitude saves the CSV files</li>\n<li>Syncthing<a href=\"https://petermolnar.superfeedr.com/#fn6\">6</a> syncs the phone and the laptop</li>\n<li>the laptop has a Python script that imports the CSV into SQLite to eliminate duplicates</li>\n<li>the same script queries Bing to get altitude information for missing altitudes</li>\n<li>as a final step, the script exports daily GPX files</li>\n<li>on the laptop, GpsPrune helps me visualize and measure trips</li>\n</ul><h2>Backitude configuration</h2>\n<p>These are the modified setting properties:</p>\n<ul><li>Enable backitude: yes</li>\n<li>Settings\n<ul><li>Standard Mode Settings\n<ul><li>Time Interval Selection: 1 minute</li>\n<li>Location Polling Timeout: 5 minutes</li>\n<li>Display update message: no</li>\n</ul></li>\n<li>Wifi Mode Settings\n<ul><li>Wi-Fi Mode Enabled: yes</li>\n<li>Time Interval Options: 1 hour</li>\n<li>Location Polling Timeout: 5 minutes</li>\n</ul></li>\n<li>Update Settings\n<ul><li>Minimum Change in Distance: 10 meters</li>\n</ul></li>\n<li>Accuracy Settings\n<ul><li>Minimum GPS accuracy: 12 meters</li>\n<li>Minimum Wi-Fi accuracy: 20 meters</li>\n</ul></li>\n<li>Internal Memory Storage Options\n<ul><li>KML and CSV</li>\n</ul></li>\n<li>Display Failure Notifications: no</li>\n</ul></li>\n</ul><p>I have an exported preferences file available<a href=\"https://petermolnar.superfeedr.com/#fn7\">7</a>.</p>\n<h2>Syncthing</h2>\n<p>The Syncthing configuration is optional; it could simply be done by manual transfers from the phone. 
It\u2019s also not the most simple thing to do, so I\u2019ll let the Syncting Documentation<a href=\"https://petermolnar.superfeedr.com/#fn8\">8</a> take care of describing the how-tos.</p>\n<h2>Python script</h2>\n<p>Before jumping to the script, there are 3 Python modules it needs:</p>\n<pre><code>pip3 install --user arrow gpxpy requests</code></pre>\n<p>And the script itself - please replace the <code>INBASE</code>, <code>OUTBASE</code>, and <code>BINGKEY</code> properties. To get a Bing key, visit Bing<a href=\"https://petermolnar.superfeedr.com/#fn9\">9</a>.</p>\n<pre><code>import os\nimport sqlite3\nimport csv\nimport glob\nimport arrow\nimport re\nimport gpxpy.gpx\nimport requests\n\nINBASE=\"/path/to/your/syncthing/gps/files\"\nOUTBASE=\"/path/for/sqlite/and/gpx/output\"\nBINGKEY=\"get a bing maps key and insert it here\"\n\ndef parse(row):\n DATE = re.compile(\n r'^(?P<year>[0-9]{4})-(?P<month>[0-9]{2})-(?P<day>[0-9]{2})T'\n r'(?P<time>[0-9]{2}:[0-9]{2}:[0-9]{2})\\.(?P<subsec>[0-9]{3})Z$'\n )\n\n lat = row[0]\n lon = row[1]\n acc = row[2]\n alt = row[3]\n match = DATE.match(row[4])\n # in theory, arrow should have been able to parse the date, but I couldn't get\n # it working\n epoch = arrow.get(\"%s-%s-%s %s %s\" % (\n match.group('year'),\n match.group('month'),\n match.group('day'),\n match.group('time'),\n match.group('subsec')\n ), 'YYYY-MM-DD hh:mm:ss SSS').timestamp\n return(epoch,lat,lon,alt,acc)\n\ndef exists(db, epoch, lat, lon):\n return db.execute('''\n SELECT\n *\n FROM\n data\n WHERE\n epoch = ?\n AND\n latitude = ?\n AND\n longitude = ?\n ''', (epoch, lat, lon)).fetchone()\n\ndef ins(db, epoch,lat,lon,alt,acc):\n if exists(db, epoch, lat, lon):\n return\n print('inserting data point with epoch %d' % (epoch))\n db.execute('''INSERT INTO data (epoch, latitude, longitude, altitude, accuracy) VALUES (?,?,?,?,?);''', (\n epoch,\n lat,\n lon,\n alt,\n acc\n ))\n\n\nif __name__ == '__main__':\n db = sqlite3.connect(os.path.join(OUTBASE, 
'location-log.sqlite'))\n db.execute('PRAGMA auto_vacuum = INCREMENTAL;')\n db.execute('PRAGMA journal_mode = MEMORY;')\n db.execute('PRAGMA temp_store = MEMORY;')\n db.execute('PRAGMA locking_mode = NORMAL;')\n db.execute('PRAGMA synchronous = FULL;')\n db.execute('PRAGMA encoding = \"UTF-8\";')\n\n files = glob.glob(os.path.join(INBASE, '*.csv'))\n for logfile in files:\n with open(logfile) as csvfile:\n try:\n reader = csv.reader(csvfile)\n except Exception as e:\n print('failed to open CSV reader for file: %s; %s' % (logfile, e))\n continue\n # skip the first row, that's headers\n headers = next(reader, None)\n for row in reader:\n epoch,lat,lon,alt,acc = parse(row)\n ins(db,epoch,lat,lon,alt,acc)\n # there's no need to commit per line, per file should be safe enough\n db.commit()\n\n db.execute('PRAGMA auto_vacuum;')\n\n results = db.execute('''\n SELECT\n *\n FROM\n data\n ORDER BY epoch ASC''').fetchall()\n prevdate = None\n gpx = gpxpy.gpx.GPX()\n\n for epoch, lat, lon, alt, acc in results:\n # in case you know your altitude might actually be valid with negative\n # values you may want to remove the -10\n if alt == 'NULL' or alt < -10:\n url = \"http://dev.virtualearth.net/REST/v1/Elevation/List?points=%s,%s&key=%s\" % (\n lat,\n lon,\n BINGKEY\n )\n bing = requests.get(url).json()\n # gotta love enterprise API endpoints\n if not bing or \\\n 'resourceSets' not in bing or \\\n not len(bing['resourceSets']) or \\\n 'resources' not in bing['resourceSets'][0] or \\\n not len(bing['resourceSets'][0]) or \\\n 'elevations' not in bing['resourceSets'][0]['resources'][0] or \\\n not bing['resourceSets'][0]['resources'][0]['elevations']:\n alt = 0\n else:\n alt = float(bing['resourceSets'][0]['resources'][0]['elevations'][0])\n print('got altitude from bing: %s for %s,%s' % (alt,lat,lon))\n db.execute('''\n UPDATE\n data\n SET\n altitude = ?\n WHERE\n epoch = ?\n AND\n latitude = ?\n AND\n longitude = ?\n LIMIT 1\n ''',(alt, epoch, lat, lon))\n db.commit()\n 
del(bing)\n del(url)\n date = arrow.get(epoch).format('YYYY-MM-DD')\n if not prevdate or prevdate != date:\n # write previous out\n gpxfile = os.path.join(OUTBASE, \"%s.gpx\" % (date))\n with open(gpxfile, 'wt') as f:\n f.write(gpx.to_xml())\n print('created file: %s' % gpxfile)\n\n # create new\n gpx = gpxpy.gpx.GPX()\n prevdate = date\n\n # Create first track in our GPX:\n gpx_track = gpxpy.gpx.GPXTrack()\n gpx.tracks.append(gpx_track)\n\n # Create first segment in our GPX track:\n gpx_segment = gpxpy.gpx.GPXTrackSegment()\n gpx_track.segments.append(gpx_segment)\n\n # Create points:\n gpx_segment.points.append(\n gpxpy.gpx.GPXTrackPoint(\n lat,\n lon,\n elevation=alt,\n time=arrow.get(epoch).datetime\n )\n )\n\n db.close()\n</code></pre>\n<p>Once this is done, the <code>OUTBASE</code> directory will be populated by <code>.gpx</code> files, one per day.</p>\n<h2>GpsPrune</h2>\n<p>GpsPrune is a desktop, QT based GPX track visualizer. It needs data connectivity to have nice maps in the background, but it can do a lot of funky things, including editing GPX tracks.</p>\n<pre><code>sudo apt install gpsprune</code></pre>\n<p><strong>Keep it in mind that the export script overwrites the GPX files, so the data needs to be fixed in the SQLite database.</strong></p>\n<p>This is an example screenshot of GpsPrune, about our 2 day walk down from Mount Emei and it\u2019s endless stairs:</p>\n<a href=\"https://petermolnar.net/location-tracking-without-server/emei_b.jpg\"> <img src=\"https://aperture-proxy.p3k.io/fb0eec6f7ce86a08fc973871c604e30bbf8b8c69/68747470733a2f2f70657465726d6f6c6e61722e6e65742f6c6f636174696f6e2d747261636b696e672d776974686f75742d7365727665722f656d65692e6a7067\" title=\"emei\" alt=\"\" /></a>\n\nemei\n<p>Happy tracking!</p>\n\n\n<ol><li><p><a href=\"https://www.traccar.org/\">https://www.traccar.org/</a><a href=\"https://petermolnar.superfeedr.com/#fnref1\">\u21a9</a></p></li>\n<li><p><a href=\"https://owntracks.org/\">https://owntracks.org/</a><a 
href=\"https://petermolnar.superfeedr.com/#fnref2\">\u21a9</a></p></li>\n<li><p><a href=\"https://indieweb.org/manual_until_it_hurts\">https://indieweb.org/manual_until_it_hurts</a><a href=\"https://petermolnar.superfeedr.com/#fnref3\">\u21a9</a></p></li>\n<li><p><a href=\"http://www.gpsies.com/backitude.do\">http://www.gpsies.com/backitude.do</a><a href=\"https://petermolnar.superfeedr.com/#fnref4\">\u21a9</a></p></li>\n<li><p><a href=\"https://petermolnar.superfeedr.com/gaugler.backitude.apk\">gaugler.backitude.apk</a><a href=\"https://petermolnar.superfeedr.com/#fnref5\">\u21a9</a></p></li>\n<li><p><a href=\"https://syncthing.net/\">https://syncthing.net/</a><a href=\"https://petermolnar.superfeedr.com/#fnref6\">\u21a9</a></p></li>\n<li><p><a href=\"https://petermolnar.superfeedr.com/backitude.prefs\">backitude.prefs</a><a href=\"https://petermolnar.superfeedr.com/#fnref7\">\u21a9</a></p></li>\n<li><p><a href=\"https://docs.syncthing.net/intro/getting-started.html\">https://docs.syncthing.net/intro/getting-started.html</a><a href=\"https://petermolnar.superfeedr.com/#fnref8\">\u21a9</a></p></li>\n<li><p><a href=\"https://msdn.microsoft.com/en-us/library/ff428642\">https://msdn.microsoft.com/en-us/library/ff428642</a><a href=\"https://petermolnar.superfeedr.com/#fnref9\">\u21a9</a></p></li>\n</ol>", "text": "Nearly all self-hosted location tracking Android applications are based on a server-client architecture: the app on the phone collects only a small number of points, if not just one, and sends them to a configured server. Traccar1, Owntracks2, etc.\nWhile this setup is useful, it doesn\u2019t fit my \u201cstatic, unless it hurts\u201d3 approach, and it needs data connectivity, which can be tricky during trips abroad. 
The rare occasions in rural Scotland and Wales taught me that data connectivity is not omnipresent at all.\nThere used to be a magnificent little location tracker which, besides the server-client approach, could store the location data in CSV and KML files locally: Backitude4. The program is gone from the Play store - I have no idea why - but I have a copy of its last APK5.\nMy flow is the following:\nBackitude saves the CSV files\nSyncthing6 syncs the phone and the laptop\nthe laptop has a Python script that imports the CSV into SQLite to eliminate duplicates\nthe same script queries Bing to get altitude information for missing altitudes\nas a final step, the script exports daily GPX files\non the laptop, GpsPrune helps me visualize and measure trips\nBackitude configuration\nThese are the modified setting properties:\nEnable backitude: yes\nSettings\nStandard Mode Settings\nTime Interval Selection: 1 minute\nLocation Polling Timeout: 5 minutes\nDisplay update message: no\n\nWifi Mode Settings\nWi-Fi Mode Enabled: yes\nTime Interval Options: 1 hour\nLocation Polling Timeout: 5 minutes\n\nUpdate Settings\nMinimum Change in Distance: 10 meters\n\nAccuracy Settings\nMinimum GPS accuracy: 12 meters\nMinimum Wi-Fi accuracy: 20 meters\n\nInternal Memory Storage Options\nKML and CSV\n\nDisplay Failure Notifications: no\n\nI have an exported preferences file available7.\nSyncthing\nThe Syncthing configuration is optional; it could simply be done by manual transfers from the phone. It\u2019s also not the simplest thing to do, so I\u2019ll let the Syncthing Documentation8 take care of describing the how-tos.\nPython script\nBefore jumping to the script, there are 3 Python modules it needs:\npip3 install --user arrow gpxpy requests\nAnd the script itself - please replace the INBASE, OUTBASE, and BINGKEY properties. 
To get a Bing key, visit Bing [9].\nimport os\nimport sqlite3\nimport csv\nimport glob\nimport arrow\nimport re\nimport gpxpy.gpx\nimport requests\n\nINBASE = "/path/to/your/syncthing/gps/files"\nOUTBASE = "/path/for/sqlite/and/gpx/output"\nBINGKEY = "get a bing maps key and insert it here"\n\ndef parse(row):\n    DATE = re.compile(\n        r'^(?P<year>[0-9]{4})-(?P<month>[0-9]{2})-(?P<day>[0-9]{2})T'\n        r'(?P<time>[0-9]{2}:[0-9]{2}:[0-9]{2})\\.(?P<subsec>[0-9]{3})Z$'\n    )\n\n    lat = row[0]\n    lon = row[1]\n    acc = row[2]\n    alt = row[3]\n    match = DATE.match(row[4])\n    # in theory, arrow should have been able to parse the date, but I couldn't get\n    # it working; note that HH is the 24-hour token (hh is 12-hour and rejects\n    # afternoon times), and that arrow 1.0+ renamed .timestamp to .timestamp()\n    epoch = arrow.get("%s-%s-%s %s %s" % (\n        match.group('year'),\n        match.group('month'),\n        match.group('day'),\n        match.group('time'),\n        match.group('subsec')\n    ), 'YYYY-MM-DD HH:mm:ss SSS').timestamp\n    return (epoch, lat, lon, alt, acc)\n\ndef exists(db, epoch, lat, lon):\n    return db.execute('''\n        SELECT\n            *\n        FROM\n            data\n        WHERE\n            epoch = ?\n        AND\n            latitude = ?\n        AND\n            longitude = ?\n    ''', (epoch, lat, lon)).fetchone()\n\ndef ins(db, epoch, lat, lon, alt, acc):\n    if exists(db, epoch, lat, lon):\n        return\n    print('inserting data point with epoch %d' % (epoch))\n    db.execute('''INSERT INTO data (epoch, latitude, longitude, altitude, accuracy) VALUES (?,?,?,?,?);''', (\n        epoch,\n        lat,\n        lon,\n        alt,\n        acc\n    ))\n\n\nif __name__ == '__main__':\n    db = sqlite3.connect(os.path.join(OUTBASE, 'location-log.sqlite'))\n    db.execute('PRAGMA auto_vacuum = INCREMENTAL;')\n    db.execute('PRAGMA journal_mode = MEMORY;')\n    db.execute('PRAGMA temp_store = MEMORY;')\n    db.execute('PRAGMA locking_mode = NORMAL;')\n    db.execute('PRAGMA synchronous = FULL;')\n    db.execute('PRAGMA encoding = "UTF-8";')\n    # the snippet assumed this table already existed; create it if needed\n    db.execute('''CREATE TABLE IF NOT EXISTS data (epoch INTEGER, latitude REAL, longitude REAL, altitude REAL, accuracy REAL);''')\n\n    files = glob.glob(os.path.join(INBASE, '*.csv'))\n    for logfile in files:\n        with open(logfile) as csvfile:\n            try:\n                reader = csv.reader(csvfile)\n            except Exception as e:\n                print('failed to open CSV reader for file: %s; %s' % (logfile, e))\n                continue\n            # skip the first row, that's headers\n            headers = next(reader, None)\n            for row in reader:\n                epoch, lat, lon, alt, acc = parse(row)\n                ins(db, epoch, lat, lon, alt, acc)\n            # there's no need to commit per line, per file should be safe enough\n            db.commit()\n\n    db.execute('PRAGMA auto_vacuum;')\n\n    results = db.execute('''\n        SELECT\n            *\n        FROM\n            data\n        ORDER BY epoch ASC''').fetchall()\n    prevdate = None\n    gpx = gpxpy.gpx.GPX()\n\n    for epoch, lat, lon, alt, acc in results:\n        # in case you know your altitude might actually be valid with negative\n        # values you may want to remove the -10\n        if alt == 'NULL' or alt < -10:\n            url = "http://dev.virtualearth.net/REST/v1/Elevation/List?points=%s,%s&key=%s" % (\n                lat,\n                lon,\n                BINGKEY\n            )\n            bing = requests.get(url).json()\n            # gotta love enterprise API endpoints\n            if not bing or \\\n                    'resourceSets' not in bing or \\\n                    not len(bing['resourceSets']) or \\\n                    'resources' not in bing['resourceSets'][0] or \\\n                    not len(bing['resourceSets'][0]['resources']) or \\\n                    'elevations' not in bing['resourceSets'][0]['resources'][0] or \\\n                    not bing['resourceSets'][0]['resources'][0]['elevations']:\n                alt = 0\n            else:\n                alt = float(bing['resourceSets'][0]['resources'][0]['elevations'][0])\n                print('got altitude from bing: %s for %s,%s' % (alt, lat, lon))\n            # no LIMIT 1 here: UPDATE ... LIMIT needs a specially compiled\n            # SQLite build, and the WHERE clause already pins down one row\n            db.execute('''\n                UPDATE\n                    data\n                SET\n                    altitude = ?\n                WHERE\n                    epoch = ?\n                AND\n                    latitude = ?\n                AND\n                    longitude = ?\n            ''', (alt, epoch, lat, lon))\n            db.commit()\n            del(bing)\n            del(url)\n        date = arrow.get(epoch).format('YYYY-MM-DD')\n        if not prevdate or prevdate != date:\n            # write the completed previous day out under its own date\n            if prevdate:\n                gpxfile = os.path.join(OUTBASE, "%s.gpx" % (prevdate))\n                with open(gpxfile, 'wt') as f:\n                    f.write(gpx.to_xml())\n                print('created file: %s' % gpxfile)\n\n            # create new\n            gpx = gpxpy.gpx.GPX()\n            prevdate = date\n\n            # Create first track in our GPX:\n            gpx_track = gpxpy.gpx.GPXTrack()\n            gpx.tracks.append(gpx_track)\n\n            # Create first segment in our GPX track:\n            gpx_segment = gpxpy.gpx.GPXTrackSegment()\n            gpx_track.segments.append(gpx_segment)\n\n        # Create points:\n        gpx_segment.points.append(\n            gpxpy.gpx.GPXTrackPoint(\n                lat,\n                lon,\n                elevation=alt,\n                time=arrow.get(epoch).datetime\n            )\n        )\n\n    # write the last day out as well; the loop above never reaches it\n    if prevdate:\n        gpxfile = os.path.join(OUTBASE, "%s.gpx" % (prevdate))\n        with open(gpxfile, 'wt') as f:\n            f.write(gpx.to_xml())\n        print('created file: %s' % gpxfile)\n\n    db.close()\n\nOnce this is done, the OUTBASE directory will be populated with .gpx files, one per day.\nGpsPrune\nGpsPrune is a desktop, Qt-based GPX track visualizer. It needs data connectivity to show nice maps in the background, but it can do a lot of funky things, including editing GPX tracks.\nsudo apt install gpsprune\nKeep in mind that the export script overwrites the GPX files, so the data needs to be fixed in the SQLite database itself.\nThis is an example screenshot of GpsPrune, about our two-day walk down from Mount Emei and its endless stairs:\n \n\nemei\nHappy tracking!\n\n\nhttps://www.traccar.org/\u21a9\nhttps://owntracks.org/\u21a9\nhttps://indieweb.org/manual_until_it_hurts\u21a9\nhttp://www.gpsies.com/backitude.do\u21a9\ngaugler.backitude.apk\u21a9\nhttps://syncthing.net/\u21a9\nbackitude.prefs\u21a9\nhttps://docs.syncthing.net/intro/getting-started.html\u21a9\nhttps://msdn.microsoft.com/en-us/library/ff428642\u21a9" }, "name": "GPS tracking without a server", "post-type": "article", "_id": "1124870", "_source": "268", "_is_read": true }
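A quick aside on that Bing response handling: the long guard chain over the `resourceSets` → `resources` → `elevations` nesting can be collapsed into a single try/except helper. A minimal sketch; the `extract_elevation` name and the `0.0` fallback are my own choices, mirroring the script's behaviour:

```python
def extract_elevation(bing, default=0.0):
    """Safely pull the first elevation out of a Bing Maps Elevation API
    response dict, falling back to `default` when any level of the
    resourceSets -> resources -> elevations nesting is missing or empty."""
    try:
        return float(bing["resourceSets"][0]["resources"][0]["elevations"][0])
    except (TypeError, KeyError, IndexError, ValueError):
        return default
```

With this, the whole if/else branch in the script reduces to `alt = extract_elevation(bing)`.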
{ "type": "entry", "published": "2018-10-02T20:42:46+00:00", "url": "https://cleverdevil.io/2018/a-huge-congratulations-to-getsource-who-i", "category": [ "IndieWeb" ], "syndication": [ "https://twitter.com/cleverdevil/status/1047225419232161793" ], "content": { "text": "A huge congratulations to @GetSource, who I had the absolute pleasure of working with for 6+ years. Great hire for @GoDaddy. I hope you get a chance to work on #IndieWeb features for @WordPress! Best of luck. I'm sure you'll knock it out of the park. \ud83d\ude00\n\nhttps://twitter.com/GetSource/status/1047223678218383362", "html": "A huge congratulations to @GetSource, who I had the absolute pleasure of working with for 6+ years. Great hire for @GoDaddy. I hope you get a chance to work on <a href=\"https://cleverdevil.io/tag/IndieWeb\" class=\"p-category\">#IndieWeb</a> features for @WordPress! Best of luck. I'm sure you'll knock it out of the park. \ud83d\ude00<br /><br /><a href=\"https://twitter.com/GetSource/status/1047223678218383362\">https://twitter.com/GetSource/status/1047223678218383362</a>" }, "author": { "type": "card", "name": "Jonathan LaCour", "url": "https://cleverdevil.io/profile/cleverdevil", "photo": "https://aperture-proxy.p3k.io/77e5d6e5871324c43aebf2e3e7a5553e14578f66/68747470733a2f2f636c65766572646576696c2e696f2f66696c652f66646263373639366135663733383634656131316138323863383631653133382f7468756d622e6a7067" }, "post-type": "note", "_id": "1121855", "_source": "71", "_is_read": true }
{ "type": "entry", "published": "2018-10-02T16:27:40-04:00", "rsvp": "yes", "url": "https://martymcgui.re/2018/10/02/162740/", "syndication": [ "https://twitter.com/schmarty/status/1047222938947260416", "https://www.facebook.com/marty.mcguire.54/posts/10212976492508999" ], "in-reply-to": [ "https://martymcgui.re/2018/09/20/124541/" ], "content": { "text": "I'm going!Looking forward to another Homebrew Website Club Baltimore, tomorrow!\nIt\u2019s an IndieWeb! Come learn some ways to free your content and your social sharing from the social networking silos!", "html": "I'm going!<p>Looking forward to another Homebrew Website Club Baltimore, tomorrow!</p>\n<p>It\u2019s an <a href=\"https://indieweb.org/\">IndieWeb</a>! Come learn some ways to free your content and your social sharing from the social networking silos!</p>" }, "author": { "type": "card", "name": "Marty McGuire", "url": false, "photo": "https://aperture-proxy.p3k.io/8275f85e3a389bd0ae69f209683436fc53d8bad9/68747470733a2f2f6d617274796d636775692e72652f696d616765732f6c6f676f2e6a7067" }, "post-type": "rsvp", "refs": { "https://martymcgui.re/2018/09/20/124541/": { "type": "entry", "published": "2018-09-20T12:45:41-04:00", "summary": "Please note: We are meeting on Wednesday this week at 7:30pm. Be sure to double-check your calendars! Join us for an evening of quiet writing, IndieWeb demos, and discussions! Create or update your personal web site! Finish that blog post you\u2019ve been writing, edit the wiki! Demos of recent IndieWeb breakthroughs, share what you\u2019ve gotten working! Join a community with...", "url": "https://martymcgui.re/2018/09/20/124541/", "name": "Homebrew Website Club Baltimore", "author": { "type": "card", "name": "martymcgui.re", "url": "http://martymcgui.re", "photo": null }, "post-type": "article" } }, "_id": "1121266", "_source": "175", "_is_read": true }
{ "type": "entry", "published": "2018-10-02T15:51:50-04:00", "url": "https://martymcgui.re/2018/10/02/155150/", "category": [ "IndieWeb", "IWC", "IWCNYC", "site-update", "projects" ], "syndication": [ "https://twitter.com/schmarty/status/1047214824470528000", "https://www.facebook.com/marty.mcguire.54/posts/10212976373146015" ], "name": "Quick thoughts on project ideas from IndieWebCamp NYC 2018", "content": { "text": "I attended IndieWebCamp NYC 2018 and it was a blast! Check the schedule for links to notes and videos from the awesome keynotes, discussion sessions, and build-day demos. I am so grateful to all the other organizers, to all the new and familiar faces that came out, to those that joined us remotely, to Pace University's Seidenberg School for hosting us, and of course to the sponsors that made it all possible.\n \n\nI have a lot of thoughts about all the discussions and projects that were talked about, I'm sure. But for now, I'd like to capture some of the TODOs and project ideas that I came away with after the event, and the post-event discussions over food and drink.\n\n A Micropub Media Endpoint built on Neocities for storage and Glitch for handling uploads and metadata. It would allow folks to store 1GB of media files like photos, audio, and video for their websites, for free. It would be usable with all kinds of posting tools, no matter what backend you use for your site.\n (Hilarious?) bonus: that content would be available peer-to-peer over IPFS.\n \n Improve the IndieWeb Web Ring (\ud83d\udd78\ud83d\udc8d.ws) to automatically check whether members' sites link back using Webmention. (I managed to make a small but often-asked-for update to the site during IWC)\n \n Improve how my website handles all these check-in posts which are made when someone else checks me in on Swarm. 
I would like to show who checked me in, at least, if not some of their photos, or maybe even an embedded version of the post from their site.\n \n\n Keep doing the This Week in the IndieWeb podcast! I had been feeling some burnout about this and falling behind. It was so great to talk with folks who listen to it and rely on it to keep up to date with the goings-on in the community!\n Offer a hand with aaronpk's new social monster catching game, built on IndieWeb building blocks.\n Offer a hand with jgmac1106's idea to issue educational course achievements (badges) via IndieWeb building blocks.\n Work on closing down Camura, a photo-sharing social network I helped build during the awkward age after the first \"camera phones\" and before Facebook introduced \"Mobile Uploads\". It has over 100k photos and 50k comments from around 400 folks. I'd like to let it down gently, make sure people have access to those photos, and maybe even preserve some of the best moments of human connection in a public place.\n\n More generally: I think there's a really cool future where IndieWeb building blocks are available on free services like Glitch and Neocities. New folks should be able to register a domain and plug them together in an afternoon, with no coding, and get a website that supports posting all kinds of content and social interactions. All for the cost of a domain! And all with the ability to download their content and take it with them if these services change or they outgrow them. I already built some of this as a goof. The big challenges are simplifying the UX and documenting all of the steps to show folks what they will get and how to get it.\n \n\nOther fun / ridiculous ideas discussed over the weekend:\n\n Support Facebook-style colored-background posts like aaronpk did at IWC. I love the simplicity of adding an RGB color as a hashtag.\n \n\n \n \"This American Bachelor\" (working title only) - a dating site as a podcast. Each episode (or season??) 
is an NPR-style deep dive into the life and longings of a single person looking for love. Alternate title: \"Single\". The cocktail-driven discussion that produced this idea was a joy.\n \n\n\n I am sure there are fun ideas that were discussed that I am leaving out. If you can think of any, let me know!", "html": "<p>\n I attended <a href=\"https://indieweb.org/2018/NYC\">IndieWebCamp NYC 2018</a> and it was a blast! Check the <a href=\"https://indieweb.org/2018/NYC#Schedule\">schedule</a> for links to notes and videos from the awesome keynotes, discussion sessions, and build-day demos. I am so grateful to all the other <a href=\"https://indieweb.org/2018/NYC#Organizers\">organizers</a>, to all the new and familiar faces that came out, to those that joined us remotely, to <a href=\"https://www.pace.edu/seidenberg/\">Pace University's Seidenberg School</a> for hosting us, and of course to the <a href=\"https://indieweb.org/2018/NYC#Sponsors\">sponsors</a> that made it all possible.\n <br /></p>\n<p>I have a lot of thoughts about all the discussions and projects that were talked about, I'm sure. But for now, I'd like to capture some of the TODOs and project ideas that I came away with after the event, and the post-event discussions over food and drink.</p>\n<ul><li>\n A <a href=\"https://indieweb.org/media_endpoint\">Micropub Media Endpoint</a> built on <a href=\"https://neocities.org/\">Neocities</a> for storage and <a href=\"https://glitch.com/\">Glitch</a> for handling uploads and metadata. It would allow folks to store 1GB of media files like photos, audio, and video for their websites, for free. It would be usable with all kinds of <a href=\"https://indieweb.org/Micropub/Clients\">posting tools</a>, no matter what <a href=\"https://indieweb.org/Micropub/Servers\">backend</a> you use for your site.\n <ul><li>(Hilarious?) 
bonus: that content would be available <a href=\"https://blog.neocities.org/blog/2015/09/08/its-time-for-the-distributed-web.html\">peer-to-peer over IPFS</a>.</li>\n </ul></li>\n <li>Improve the <a href=\"https://indieweb.org/indiewebring\">IndieWeb Web Ring</a> (<a href=\"https://xn--sr8hvo.ws/\">\ud83d\udd78\ud83d\udc8d.ws</a>) to automatically check whether members' sites link back using <a href=\"https://indieweb.org/Webmention\">Webmention</a>. (I managed to make a <a href=\"https://martymcgui.re/2018/09/29/114553/\">small but often-asked-for update</a> to the site during IWC)</li>\n <li>\n Improve how my website handles all <a href=\"https://martymcgui.re/2018/09/29/160439/\">these</a> <a href=\"https://martymcgui.re/2018/09/29/161113/\">check-in</a> <a href=\"https://martymcgui.re/2018/09/29/182249/\">posts</a> which are made when someone else checks me in on <a href=\"https://www.swarmapp.com/\">Swarm</a>. I would like to show who checked me in, at least, if not some of their photos, or maybe even an embedded version of the post from their site.\n <br /></li>\n <li>Keep doing the <a href=\"https://martymcgui.re/podcasts/indieweb/\">This Week in the IndieWeb podcast</a>! I had been feeling some burnout about this and falling behind. 
It was so great to talk with folks who listen to it and rely on it to keep up to date with the goings-on in the community!</li>\n <li>Offer a hand with <a href=\"https://aaronparecki.com/\">aaronpk's</a> new <a href=\"https://monstr.space/\">social monster catching game</a>, built on IndieWeb building blocks.</li>\n <li>Offer a hand with <a href=\"http://jgregorymcverry.com/\">jgmac1106's</a> <a href=\"http://jgregorymcverry.com/my-goals-for-indiewebcamp-nyc-openbadges-endorsement-at-the-dns-level/\">idea</a> to issue educational course achievements (<a href=\"http://jgregorymcverry.com/webmention-badges-discussion-across-networks-after-indiewebcamp-nyc-session/\">badges</a>) via IndieWeb building blocks.</li>\n <li>Work on closing down <a href=\"https://camura.com/\">Camura</a>, a photo-sharing social network I helped build during the awkward age after the first \"camera phones\" and before Facebook introduced \"Mobile Uploads\". It has over 100k photos and 50k comments from around 400 folks. I'd like to let it down gently, make sure people have access to those photos, and maybe even preserve some of the best moments of human connection in a public place.</li>\n</ul><p>\n More generally: I think there's a <i>really cool</i> future where <a href=\"https://indieweb.org/Category:building-blocks\">IndieWeb building blocks</a> are available on free services like Glitch and Neocities. New folks should be able to register a domain and plug them together in an afternoon, with no coding, and get a website that supports posting all kinds of content and social interactions. All for the cost of a domain! And all with the ability to download their content and take it with them if these services change or they outgrow them. I <a href=\"https://martymcgui.re/2018/03/12/130455/\">already built some of this</a> as a goof. 
The big challenges are simplifying the UX and documenting all of the steps to show folks what they will get and how to get it.\n <br /></p>\n<p>Other fun / ridiculous ideas discussed over the weekend:</p>\n<ul><li>\n Support Facebook-style colored-background posts <a href=\"https://martymcgui.re/2018/09/30/113226/\">like aaronpk did at IWC</a>. I love the simplicity of adding an RGB color as a hashtag.\n <br /></li>\n <li>\n \"This American Bachelor\" (working title only) - a dating site as a podcast. Each episode (or season??) is an NPR-style deep dive into the life and longings of a single person looking for love. Alternate title: \"Single\". The cocktail-driven discussion that produced this idea was a joy.\n <br /></li>\n</ul><p>\n I am sure there are fun ideas that were discussed that I am leaving out. If you can think of any, let me know!\n <br /></p>" }, "author": { "type": "card", "name": "Marty McGuire", "url": false, "photo": "https://aperture-proxy.p3k.io/8275f85e3a389bd0ae69f209683436fc53d8bad9/68747470733a2f2f6d617274796d636775692e72652f696d616765732f6c6f676f2e6a7067" }, "post-type": "article", "_id": "1121001", "_source": "175", "_is_read": true }
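One of the project ideas above — having the webring automatically check whether members' sites still link back — is easy to sketch. This is a hypothetical helper (not the webring's actual code) using only the standard library: it scans a fetched page's anchors for a link into the ring's domain:

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collect the href of every <a> tag on a page."""
    def __init__(self):
        super().__init__()
        self.hrefs = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.hrefs.append(value)

def links_back(page_html, ring_url="https://xn--sr8hvo.ws"):
    """Return True if the page links into the webring.
    xn--sr8hvo.ws is the punycode form of the ring's emoji domain."""
    collector = LinkCollector()
    collector.feed(page_html)
    return any(href.startswith(ring_url) for href in collector.hrefs)
```

A real check would also follow the member URL's redirects and might use a proper microformats2 parser, but this is enough to flag members whose links have disappeared.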
{ "type": "entry", "published": "2018-10-01T22:57:55-04:00", "summary": "Definitely! It\u2019s a great idea. In fact, a couple of us in the IndieWeb chat have actually done some brainstorming and two people have worked on some code for that stuff.", "url": "https://eddiehinkle.com/2018/10/01/26/reply/", "in-reply-to": [ "https://jj.isgeek.net/2018/10/02-123939-am/" ], "content": { "text": "Definitely! It\u2019s a great idea. In fact, a couple of us in the IndieWeb chat have actually done some brainstorming and two people have worked on some code for that stuff.", "html": "<p>Definitely! It\u2019s a great idea. In fact, a couple of us in the IndieWeb chat have actually done some brainstorming and two people have worked on some code for that stuff.</p>" }, "author": { "type": "card", "name": "Eddie Hinkle", "url": "https://eddiehinkle.com/", "photo": "https://aperture-proxy.p3k.io/cc9591b69c2c835fa2c6e23745b224db4b4b431f/68747470733a2f2f656464696568696e6b6c652e636f6d2f696d616765732f70726f66696c652e6a7067" }, "post-type": "reply", "refs": { "https://jj.isgeek.net/2018/10/02-123939-am/": { "type": "entry", "url": "https://jj.isgeek.net/2018/10/02-123939-am/", "name": "https://jj.isgeek.net/2018/10/02-123939-am/", "post-type": "article" } }, "_id": "1118300", "_source": "226", "_is_read": true }
{ "type": "entry", "published": "2018-10-01 18:12-0700", "url": "http://tantek.com/2018/274/t3/undo-indiewebcamp-open-design", "category": [ "Undo" ], "content": { "text": "This past Friday I led a session on #Undo @IndieWebCamp NYC.\n\nI\u2019ve wanted Undo in my posting UI (like Gmail undo send) since I started @Falcon in 2009. Decided it\u2019s time to open up all my design thinking.\nSession: https://indieweb.org/2018/NYC/undo\nDesign: https://indieweb.org/undo\n\nSketches and more to follow. Open sourcing my undo design work because I want to help enable it everywhere. I have a theory that \"Undo\" in posting UIs may help improve online conversation dynamics.", "html": "This past Friday I led a session on #<span class=\"p-category\">Undo</span> <a class=\"h-cassis-username\" href=\"https://twitter.com/IndieWebCamp\">@IndieWebCamp</a> NYC.<br /><br />I\u2019ve wanted Undo in my posting UI (like Gmail undo send) since I started <a class=\"h-cassis-username\" href=\"https://twitter.com/Falcon\">@Falcon</a> in 2009. Decided it\u2019s time to open up all my design thinking.<br />Session: <a href=\"https://indieweb.org/2018/NYC/undo\">https://indieweb.org/2018/NYC/undo</a><br />Design: <a href=\"https://indieweb.org/undo\">https://indieweb.org/undo</a><br /><br />Sketches and more to follow. Open sourcing my undo design work because I want to help enable it everywhere. I have a theory that \"Undo\" in posting UIs may help improve online conversation dynamics." }, "author": { "type": "card", "name": "Tantek \u00c7elik", "url": "http://tantek.com/", "photo": "https://aperture-media.p3k.io/tantek.com/acfddd7d8b2c8cf8aa163651432cc1ec7eb8ec2f881942dca963d305eeaaa6b8.jpg" }, "post-type": "note", "_id": "1116282", "_source": "1", "_is_read": true }
{ "type": "entry", "published": "2018-10-01 17:37-0700", "url": "http://tantek.com/2018/274/t2/vcard4-hcard-most-interop", "category": [ "PortableContacts", "vCard4", "hcard" ], "in-reply-to": [ "https://twitter.com/chrismessina/status/1046569740892688384" ], "content": { "text": "@chrismessina FOAF was unnecessary reinvention of vCard, still is.\n#PortableContacts bad news, now zombie site https://indieweb.org/Portable_Contacts\nXFN still here, mostly rel=me; Mastodon added support.\n#vCard4 #hcard have most interop across devices apps sites: http://microformats.org/wiki/h-card", "html": "<a class=\"h-cassis-username\" href=\"https://twitter.com/chrismessina\">@chrismessina</a> FOAF was unnecessary reinvention of vCard, still is.<br />#<span class=\"p-category\">PortableContacts</span> bad news, now zombie site <a href=\"https://indieweb.org/Portable_Contacts\">https://indieweb.org/Portable_Contacts</a><br />XFN still here, mostly rel=me; Mastodon added support.<br />#<span class=\"p-category\">vCard4</span> #<span class=\"p-category\">hcard</span> have most interop across devices apps sites: <a href=\"http://microformats.org/wiki/h-card\">http://microformats.org/wiki/h-card</a>" }, "author": { "type": "card", "name": "Tantek \u00c7elik", "url": "http://tantek.com/", "photo": "https://aperture-media.p3k.io/tantek.com/acfddd7d8b2c8cf8aa163651432cc1ec7eb8ec2f881942dca963d305eeaaa6b8.jpg" }, "post-type": "reply", "refs": { "https://twitter.com/chrismessina/status/1046569740892688384": { "type": "entry", "url": "https://twitter.com/chrismessina/status/1046569740892688384", "name": "@chrismessina\u2019s tweet", "post-type": "article" } }, "_id": "1116283", "_source": "1", "_is_read": true }
{ "type": "entry", "published": "2018-10-01 15:47-0700", "url": "http://tantek.com/2018/274/t1/indiewebcamp-nyc-photos-notes-posted", "category": [ "undo", "readers", "notifications", "learntobuild", "dataportability", "buildingblocks", "badges", "activitypub:" ], "content": { "text": "Good times @IndieWebCamp NYC! Huge thanks to host @PaceUniversity & organizers @jgmac1106 @schmarty @dshanske!\nPhotos etc: https://indieweb.org/2018/NYC\nSession notes posted: #undo #readers #notifications #learntobuild #dataportability #buildingblocks #badges #activitypub: https://indieweb.org/2018/NYC/Sessions", "html": "Good times <a class=\"h-cassis-username\" href=\"https://twitter.com/IndieWebCamp\">@IndieWebCamp</a> NYC! Huge thanks to host <a class=\"h-cassis-username\" href=\"https://twitter.com/PaceUniversity\">@PaceUniversity</a> & organizers <a class=\"h-cassis-username\" href=\"https://twitter.com/jgmac1106\">@jgmac1106</a> <a class=\"h-cassis-username\" href=\"https://twitter.com/schmarty\">@schmarty</a> <a class=\"h-cassis-username\" href=\"https://twitter.com/dshanske\">@dshanske</a>!<br />Photos etc: <a href=\"https://indieweb.org/2018/NYC\">https://indieweb.org/2018/NYC</a><br />Session notes posted: #<span class=\"p-category\">undo</span> #<span class=\"p-category\">readers</span> #<span class=\"p-category\">notifications</span> #<span class=\"p-category\">learntobuild</span> #<span class=\"p-category\">dataportability</span> #<span class=\"p-category\">buildingblocks</span> #<span class=\"p-category\">badges</span> #<span class=\"p-category\">activitypub:</span> <a href=\"https://indieweb.org/2018/NYC/Sessions\">https://indieweb.org/2018/NYC/Sessions</a>" }, "author": { "type": "card", "name": "Tantek \u00c7elik", "url": "http://tantek.com/", "photo": "https://aperture-media.p3k.io/tantek.com/acfddd7d8b2c8cf8aa163651432cc1ec7eb8ec2f881942dca963d305eeaaa6b8.jpg" }, "post-type": "note", "_id": "1116284", "_source": "1", "_is_read": true }
{ "type": "entry", "author": { "name": null, "url": "https://www.manton.org/", "photo": null }, "url": "https://www.manton.org/2018/10/01/185056.html", "content": { "html": "<p>Homebrew Website Club is this Wednesday, 6:30pm at Mozart\u2019s Coffee. If the weather\u2019s nice we\u2019ll meet outside. I\u2019m catching up on videos from IndieWebCamp NYC so I can summarize that event for the Austin group.</p>", "text": "Homebrew Website Club is this Wednesday, 6:30pm at Mozart\u2019s Coffee. If the weather\u2019s nice we\u2019ll meet outside. I\u2019m catching up on videos from IndieWebCamp NYC so I can summarize that event for the Austin group." }, "published": "2018-10-01T13:50:56-05:00", "post-type": "note", "_id": "1115796", "_source": "12", "_is_read": true }
{ "type": "entry", "published": "2018-10-01T19:44:52-04:00", "url": "https://martymcgui.re/2018/10/01/194452/", "category": [ "podcast", "IndieWeb" ], "audio": [ "https://aperture-proxy.p3k.io/0bf739ddb7a5eb104a3facffd82adac0a1b9ce5c/68747470733a2f2f6d656469612e6d617274796d636775692e72652f66342f61362f66332f36352f38383637653637386537313566343736306433323734323239303336303630393333373664636533303461333963303834633730363664612e6d7033" ], "syndication": [ "https://huffduffer.com/schmarty/504740", "https://twitter.com/schmarty/status/1046909525926842368", "https://www.facebook.com/marty.mcguire.54/posts/10212971376261096" ], "name": "This Week in the IndieWeb Audio Edition \u2022 September 15th - 21st, 2018", "content": { "text": "Show/Hide Transcript \n \n Another late one but a great one. Mastodon adds rel-me, geocaching with WordPress, and Path ends their incredible journey. It\u2019s the audio edition for This Week in the IndieWeb for September 15th - 21st, 2018.\n\nYou can find all of my audio editions and subscribe with your favorite podcast app here: martymcgui.re/podcasts/indieweb/.\n\nMusic from Aaron Parecki\u2019s 100DaysOfMusic project: Day 85 - Suit, Day 48 - Glitch, Day 49 - Floating, Day 9, and Day 11\n\nThanks to everyone in the IndieWeb chat for their feedback and suggestions. Please drop me a note if there are any changes you\u2019d like to see for this audio edition!", "html": "Show/Hide Transcript \n \n <p>Another late one but a great one. Mastodon adds rel-me, geocaching with WordPress, and Path ends their incredible journey. 
It\u2019s the audio edition for <a href=\"https://indieweb.org/this-week/2018-09-21.html\">This Week in the IndieWeb for September 15th - 21st, 2018</a>.</p>\n\n<p>You can find all of my audio editions and subscribe with your favorite podcast app here: <a href=\"https://martymcgui.re/podcasts/indieweb/\">martymcgui.re/podcasts/indieweb/</a>.</p>\n\n<p>Music from <a href=\"https://aaronparecki.com/\">Aaron Parecki</a>\u2019s <a href=\"https://100.aaronparecki.com/\">100DaysOfMusic project</a>: <a href=\"https://aaronparecki.com/2017/03/15/14/day85\">Day 85 - Suit</a>, <a href=\"https://aaronparecki.com/2017/02/06/7/day48\">Day 48 - Glitch</a>, <a href=\"https://aaronparecki.com/2017/02/07/4/day49\">Day 49 - Floating</a>, <a href=\"https://aaronparecki.com/2016/12/29/21/day-9\">Day 9</a>, and <a href=\"https://aaronparecki.com/2016/12/31/15/\">Day 11</a></p>\n\n<p>Thanks to everyone in the <a href=\"https://chat.indieweb.org/\">IndieWeb chat</a> for their feedback and suggestions. Please drop me a note if there are any changes you\u2019d like to see for this audio edition!</p>" }, "author": { "type": "card", "name": "Marty McGuire", "url": false, "photo": "https://aperture-proxy.p3k.io/8275f85e3a389bd0ae69f209683436fc53d8bad9/68747470733a2f2f6d617274796d636775692e72652f696d616765732f6c6f676f2e6a7067" }, "post-type": "audio", "_id": "1115634", "_source": "175", "_is_read": true }
{ "type": "entry", "published": "2018-10-01T17:28:24-04:00", "url": "https://martymcgui.re/2018/10/01/172824/", "category": [ "podcast", "IndieWeb", "this-week-indieweb-podcast" ], "audio": [ "https://aperture-proxy.p3k.io/ce01ca4ce7c36e202a19e36cb6198df661841769/68747470733a2f2f6d656469612e6d617274796d636775692e72652f39642f33362f33622f36382f65393332356361623430383137333737663438663163353932393733353565636435643732616661613133393431333038393233386166342e6d7033" ], "syndication": [ "https://huffduffer.com/schmarty/504728", "https://twitter.com/schmarty/status/1046875366869147648", "https://www.facebook.com/marty.mcguire.54/posts/10212970840327698" ], "name": "This Week in the IndieWeb Audio Edition \u2022 September 8th - 14th, 2018", "content": { "text": "Show/Hide Transcript \n \n Two weeks late but better than never! Pronoun buttons, a class on IndieWeb, and a Google takeover of the web. It\u2019s the audio edition for This Week in the IndieWeb for September 8th - 14th, 2018.\n\nYou can find all of my audio editions and subscribe with your favorite podcast app here: martymcgui.re/podcasts/indieweb/.\n\nMusic from Aaron Parecki\u2019s 100DaysOfMusic project: Day 85 - Suit, Day 48 - Glitch, Day 49 - Floating, Day 9, and Day 11\n\nThanks to everyone in the IndieWeb chat for their feedback and suggestions. Please drop me a note if there are any changes you\u2019d like to see for this audio edition!", "html": "Show/Hide Transcript \n \n <p>Two weeks late but better than never! Pronoun buttons, a class on IndieWeb, and a Google takeover of the web. 
It\u2019s the audio edition for <a href=\"https://indieweb.org/this-week/2018-09-14.html\">This Week in the IndieWeb for September 8th - 14th, 2018</a>.</p>\n\n<p>You can find all of my audio editions and subscribe with your favorite podcast app here: <a href=\"https://martymcgui.re/podcasts/indieweb/\">martymcgui.re/podcasts/indieweb/</a>.</p>\n\n<p>Music from <a href=\"https://aaronparecki.com/\">Aaron Parecki</a>\u2019s <a href=\"https://100.aaronparecki.com/\">100DaysOfMusic project</a>: <a href=\"https://aaronparecki.com/2017/03/15/14/day85\">Day 85 - Suit</a>, <a href=\"https://aaronparecki.com/2017/02/06/7/day48\">Day 48 - Glitch</a>, <a href=\"https://aaronparecki.com/2017/02/07/4/day49\">Day 49 - Floating</a>, <a href=\"https://aaronparecki.com/2016/12/29/21/day-9\">Day 9</a>, and <a href=\"https://aaronparecki.com/2016/12/31/15/\">Day 11</a></p>\n\n<p>Thanks to everyone in the <a href=\"https://chat.indieweb.org/\">IndieWeb chat</a> for their feedback and suggestions. Please drop me a note if there are any changes you\u2019d like to see for this audio edition!</p>" }, "author": { "type": "card", "name": "Marty McGuire", "url": false, "photo": "https://aperture-proxy.p3k.io/8275f85e3a389bd0ae69f209683436fc53d8bad9/68747470733a2f2f6d617274796d636775692e72652f696d616765732f6c6f676f2e6a7067" }, "post-type": "audio", "_id": "1114950", "_source": "175", "_is_read": true }
{ "type": "entry", "published": "2018-09-30T17:15:24-04:00", "rsvp": "yes", "url": "https://martymcgui.re/2018/09/30/171524/", "category": [ "IndieWeb", "IWC", "Berlin", "IWCBerlin" ], "in-reply-to": [ "https://indieweb.org/2018/Berlin" ], "content": { "text": "I'm going!My first IndieWebCamp outside the U.S.! Very excited to meet some new faces.\n\u2026 and to be making my first trip to Germany!", "html": "I'm going!<p>My first IndieWebCamp outside the U.S.! Very excited to meet some new faces.</p>\n<p>\u2026 and to be making my first trip to Germany!</p>" }, "author": { "type": "card", "name": "Marty McGuire", "url": false, "photo": "https://aperture-proxy.p3k.io/8275f85e3a389bd0ae69f209683436fc53d8bad9/68747470733a2f2f6d617274796d636775692e72652f696d616765732f6c6f676f2e6a7067" }, "post-type": "rsvp", "refs": { "https://indieweb.org/2018/Berlin": { "type": "entry", "summary": "", "url": "https://indieweb.org/2018/Berlin", "name": "IndieWebCamp Berlin 2018", "author": { "type": "card", "name": "indieweb.org", "url": "http://indieweb.org", "photo": null }, "post-type": "article" } }, "_id": "1108626", "_source": "175", "_is_read": true }
{ "type": "entry", "published": "2018-09-30T15:46:56+10:00", "url": "https://unicyclic.com/mal/2018-09-30-A_Programmable_IndieWeb", "category": [ "IndieWeb" ], "name": "A Programmable IndieWeb", "content": { "text": "It's been a long time since we've had any new writing from Aaron Swartz, but a draft of a book has been released that he had worked on, called A Programmable Web.\n\n\nThere's a fair bit of sadness and nostalgia in reading this work, as Aaron had a characteristic writing style that many of us still miss. However it's also a fascinating read, as he introduces concepts of working with the web, building one idea on top of the next.\n\n\nIt also feels a bit like a snapshot of the time when he was writing. I wonder if he would still favor the particular technologies and development styles he writes about? Regardless of his personal development choices, I still think he would have seen the IndieWeb as having the hacker spirit he identified with. In fact, I think this book paints an alternate vision for what we would like to see the IndieWeb achieve.\n\n\nThe final page of the draft reads, \"the Semantic Web is based on a bet, a bet that giving the world tools to easily collaborate and communicate will lead to possibilities so wonderful we can scarcely even imagine them right now. Sure, it sounds a little bit crazy. But it paid off the last time they made that gamble: we ended up with a little thing called the World Wide Web. Let's see if they can do that again.\"\n\n\nIt's a beautiful picture, but I wish Aaron had written we, instead of they, here. He wasn't the sort of guy who waited for others to get things done when it was within his own abilities. 
The Programmable Web will be built, and the tools for collaboration are being built using the process of collaboration itself.", "html": "It's been a long time since we've had any new writing from <a href=\"http://www.aaronsw.com/\">Aaron Swartz</a>, but a draft of a book has been released that he had worked on, called <a href=\"https://archive.org/details/AaronSwartzAProgrammableWeb/page/n0\">A Programmable Web</a>.<br /><br />\nThere's a fair bit of sadness and nostalgia in reading this work, as Aaron had a characteristic writing style that many of us still miss. However it's also a fascinating read, as he introduces concepts of working with the web, building one idea on top of the next.<br /><br />\nIt also feels a bit like a snapshot of the time when he was writing. I wonder if he would still favor the particular technologies and development styles he writes about? Regardless of his personal development choices, I still think he would have seen the <a href=\"https://indieweb.org/\">IndieWeb</a> as having the hacker spirit he identified with. In fact, I think this book paints an alternate vision for what we would like to see the IndieWeb achieve.<br /><br />\nThe final page of the draft reads, <em>\"the Semantic Web is based on a bet, a bet that giving the world tools to easily collaborate and communicate will lead to possibilities so wonderful we can scarcely even imagine them right now. Sure, it sounds a little bit crazy. But it paid off the last time they made that gamble: we ended up with a little thing called the World Wide Web. Let's see if they can do that again.\"</em><br /><br />\nIt's a beautiful picture, but I wish Aaron had written <strong>we</strong>, instead of <strong>they</strong>, here. He wasn't the sort of guy who waited for others to get things done when it was within his own abilities. The Programmable Web will be built, and the tools for collaboration are being built using the process of collaboration itself." 
}, "author": { "type": "card", "name": "Malcolm Blaney", "url": "https://unicyclic.com/mal", "photo": "https://aperture-proxy.p3k.io/4f46272c0027449ced0d7cf8de31ea1bec37210e/68747470733a2f2f756e696379636c69632e636f6d2f6d616c2f7075626c69632f70726f66696c655f736d616c6c5f7468756d622e706e67" }, "post-type": "article", "_id": "1106025", "_source": "243", "_is_read": true }
{ "type": "entry", "published": "2018-09-29T15:01:40-04:00", "url": "https://aaronparecki.com/2018/09/29/13/", "category": [ "ff4411" ], "content": { "text": "This is a demo for IndieWebCamp NYC! If you look at this post on my website, it will have a colored background!" }, "author": { "type": "card", "name": "Aaron Parecki", "url": "https://aaronparecki.com/", "photo": "https://aperture-media.p3k.io/aaronparecki.com/2b8e1668dcd9cfa6a170b3724df740695f73a15c2a825962fd0a0967ec11ecdc.jpg" }, "post-type": "note", "_id": "1103099", "_source": "16", "_is_read": true }
{ "type": "entry", "published": "2018-09-29T11:45:53-04:00", "url": "https://martymcgui.re/2018/09/29/114553/", "category": [ "\ud83d\udd78\ud83d\udc8d", "webring", "indieweb", "update" ], "content": { "text": "Are you a member of the \ud83d\udd78\ud83d\udc8d IndieWeb Webring? Today I made an update!\nAll members of the webring get a unique emoji ID when they first sign in. Previously, those emoji might have included the flag of a country or state, and not everyone wants to be associated with a random country or state!\nFrom now on, new emoji IDs will not include country flags.\nIf you\u2019re a member of the webring already, and would like a new emoji ID, feel free to drop me a line in the #indieweb chat (I\u2019m schmarty there). I\u2019ll reset your account and you\u2019ll get a new emoji ID. You\u2019ll also have to update the webring links code on your page to make sure they point to your new ID!", "html": "<p>Are you a member of the <a href=\"https://xn--sr8hvo.ws\">\ud83d\udd78\ud83d\udc8d</a> IndieWeb Webring? Today I made an update!</p>\n<p>All members of the webring get a unique emoji ID when they first sign in. Previously, those emoji might have included the flag of a country or state, and not everyone wants to be associated with a random country or state!</p>\n<p>From now on, new emoji IDs will not include country flags.</p>\n<p>If you\u2019re a member of the webring already, and would like a new emoji ID, feel free to drop me a line in the <a href=\"https://chat.indieweb.org/\">#indieweb chat</a> (I\u2019m <code>schmarty</code> there). I\u2019ll reset your account and you\u2019ll get a new emoji ID. 
You\u2019ll also have to update the webring links code on your page to make sure they point to your new ID!</p>" }, "author": { "type": "card", "name": "Marty McGuire", "url": false, "photo": "https://aperture-proxy.p3k.io/8275f85e3a389bd0ae69f209683436fc53d8bad9/68747470733a2f2f6d617274796d636775692e72652f696d616765732f6c6f676f2e6a7067" }, "post-type": "note", "_id": "1102129", "_source": "175", "_is_read": true }
{ "type": "entry", "published": "2018-09-28T15:12:33-04:00", "url": "https://martymcgui.re/2018/09/28/151233/", "photo": [ "https://martymcgui.re/imageproxy/960,fit,sCf_co3HWAIZ4fBP7CHdvkr3u8vCz64k1USJq5CitASc=/https://media.martymcgui.re/66/d8/28/6f/71f30adaed0f3c6caf935cc3822e799a0bf375e53d74462aff6f3ff5.jpg" ], "name": "Notes for event attendance tracker using webmentions from the #badges session at IWC NYC.", "content": { "text": "Notes for event attendance tracker using webmentions from the #badges session at IWC NYC.", "html": "<a href=\"https://media.martymcgui.re/66/d8/28/6f/71f30adaed0f3c6caf935cc3822e799a0bf375e53d74462aff6f3ff5.jpg\"></a> \n \n \n\n \n <p>Notes for event attendance tracker using webmentions from the #badges session at IWC NYC.</p>" }, "author": { "type": "card", "name": "Marty McGuire", "url": false, "photo": "https://aperture-proxy.p3k.io/8275f85e3a389bd0ae69f209683436fc53d8bad9/68747470733a2f2f6d617274796d636775692e72652f696d616765732f6c6f676f2e6a7067" }, "post-type": "photo", "_id": "1098462", "_source": "175", "_is_read": true }
{ "type": "entry", "published": "2018-09-28T14:53:52-04:00", "url": "https://david.shanske.com/2018/09/28/thoughts-about-assertion-workflows/", "name": "Thoughts About Assertion Workflows", "content": { "text": "This is a preliminary technical workflow proposal for assertions, which would be needed for badges, endorsements, and other ideas. It is based on thoughts I had while listening to the badges session at IndieWebCamp NYC 2018.\nScenario 1: An individual creates criteria and wants to assert that another individual has achieved said criteria. Example: a professor wants to certify that students completed coursework.\nProfessor posts criteria for each achievement as a unique page (A).\nStudent completes the assignment as a post (B).\nProfessor posts a Badge/Assertion/Endorsement post on their website as an h-review, with a p-item property pointing to the student\u2019s URL (B). This would need a new or existing property to represent the relationship to the original assertion (A). Suggest u-assert and u-assert-of?\nScenario 2: An individual creates an assertion post and solicits others to endorse that statement as factual.\nThe individual makes a post to their site (h-resume for references on a resume; it is not clear what to use to request endorsement of a statement, perhaps p-assert with a nested h-item?) and invites other individuals (using the existing invitee property used for RSVPs?) to endorse or assert it. Criteria might be included for achievement.\nOthers create \u2018assertion\u2019 posts on their site (assert-of) and send webmentions, which would cause the post to be updated to note that it had been achieved.\nExisting microformats for h-resume and h-review seem to allow additional context:\nEducation\nExperience\nSkill\nRating\nBest\nWorst", "html": "This is a preliminary technical workflow proposal for assertions, which would be needed for badges, endorsements, and other ideas. It is based on thoughts I had while listening to the badges session at IndieWebCamp NYC 2018.\n<p>Scenario 1: An individual creates criteria and wants to assert that another individual has achieved said criteria. Example: a professor wants to certify that students completed coursework.</p>\n<ul><li>Professor posts criteria for each achievement as a unique page (A).</li>\n<li>Student completes the assignment as a post (B).</li>\n<li>Professor posts a Badge/Assertion/Endorsement post on their website as an h-review, with a p-item property pointing to the student\u2019s URL (B). This would need a new or existing property to represent the relationship to the original assertion (A). Suggest u-assert and u-assert-of?</li>\n</ul><p>Scenario 2: An individual creates an assertion post and solicits others to endorse that statement as factual.</p>\n<ul><li>The individual makes a post to their site (h-resume for references on a resume; it is not clear what to use to request endorsement of a statement, perhaps p-assert with a nested h-item?) and invites other individuals (using the existing invitee property used for RSVPs?) to endorse or assert it. Criteria might be included for achievement.</li>\n<li>Others create \u2018assertion\u2019 posts on their site (assert-of) and send webmentions, which would cause the post to be updated to note that it had been achieved.</li>\n</ul><p>Existing microformats for h-resume and h-review seem to allow additional context:</p>\n<ul><li>Education</li>\n<li>Experience</li>\n<li>Skill</li>\n<li>Rating</li>\n<li>Best</li>\n<li>Worst</li>\n</ul>" }, "author": { "type": "card", "name": "David Shanske", "url": "https://david.shanske.com/", "photo": "https://david.shanske.com/wp-content/uploads/avatar-privacy/cache/gravatar/2/c/2cb1f8afd9c8d3b646b4071c5ed887c970d81d625eeed87e447706940e2c403d-125.png" }, "post-type": "article", "_id": "1097936", "_source": "5", "_is_read": true }
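The Scenario 1 workflow in the entry above can be sketched as markup plus a minimal consumer. Note the hedges: `u-assert-of` is the property the post merely *suggests* (it is not a standardized microformats2 property), the URLs are hypothetical placeholders, and the scanner below is a toy class-attribute walker using only the Python standard library, not a full microformats2 parser.

```python
from html.parser import HTMLParser

# Hypothetical markup for the professor's assertion post (Scenario 1):
# an h-review whose p-item points at the student's post (B) and whose
# proposed u-assert-of property points at the criteria page (A).
ASSERTION_POST = """
<article class="h-review">
  <a class="p-item u-url" href="https://student.example/assignment">
    Completed assignment</a>
  <a class="u-assert-of" href="https://professor.example/criteria/unit-1">
    Criteria for Unit 1</a>
</article>
"""

class PropertyScanner(HTMLParser):
    """Collect href values of elements carrying a given mf2-style class.

    A toy consumer for illustration only -- real consumers should use a
    proper microformats2 parser that handles nesting and implied values.
    """

    def __init__(self, prop):
        super().__init__()
        self.prop = prop
        self.urls = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        classes = (attrs.get("class") or "").split()
        if self.prop in classes and "href" in attrs:
            self.urls.append(attrs["href"])

def property_urls(html, prop):
    """Return all hrefs marked with the given microformats-style class."""
    scanner = PropertyScanner(prop)
    scanner.feed(html)
    return scanner.urls

print(property_urls(ASSERTION_POST, "p-item"))       # the student's post (B)
print(property_urls(ASSERTION_POST, "u-assert-of"))  # the criteria page (A)
```

A webmention receiver on the student's or criteria page could run a check like this on the professor's post to confirm the assertion link before updating the target post.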