{ "type": "entry", "published": "2018-10-04T21:43:28Z", "url": "https://adactio.com/links/14399", "category": [ "bots", "twitter", "apis", "hci", "human", "computer", "interaction", "boxes", "ai", "machines", "indieweb" ], "bookmark-of": [ "https://infovore.org/archives/2018/10/02/pouring-one-out-for-the-boxmakers/" ], "content": { "text": "Infovore \u00bb Pouring one out for the Boxmakers\n\n\n\nThis is a rather beautiful piece of writing by Tom (especially the William Gibson bit at the end). This got me right in the feels:\n\n\n Web 2.0 really, truly, is over. The public APIs, feeds to be consumed in a platform of your choice, services that had value beyond their own walls, mashups that merged content and services into new things\u2026 have all been replaced with heavyweight websites to ensure a consistent, single experience, no out-of-context content, and maximising the views of advertising. That\u2019s it: back to single-serving websites for single-serving use cases.\n \n A shame. A thing I had always loved about the internet was its juxtapositions, the way it supported so many use-cases all at once. At its heart, a fundamental one: it was a medium which you could both read and write to. From that flow others: it\u2019s not only work and play that coexisted on it, but the real and the fictional; the useful and the useless; the human and the machine.", "html": "<h3>\n<a class=\"p-name u-bookmark-of\" href=\"https://infovore.org/archives/2018/10/02/pouring-one-out-for-the-boxmakers/\">\nInfovore \u00bb Pouring one out for the Boxmakers\n</a>\n</h3>\n\n<p>This is a rather beautiful piece of writing by Tom (especially the William Gibson bit at the end). This got me right in the feels:</p>\n\n<blockquote>\n <p>Web 2.0 really, truly, is over. 
The public APIs, feeds to be consumed in a platform of your choice, services that had value beyond their own walls, mashups that merged content and services into new things\u2026 have all been replaced with heavyweight websites to ensure a consistent, single experience, no out-of-context content, and maximising the views of advertising. That\u2019s it: back to single-serving websites for single-serving use cases.</p>\n \n <p>A shame. A thing I had always loved about the internet was its juxtapositions, the way it supported so many use-cases all at once. At its heart, a fundamental one: it was a medium which you could both read and write to. From that flow others: it\u2019s not only work and play that coexisted on it, but the real and the fictional; the useful and the useless; the human and the machine.</p>\n</blockquote>" }, "post-type": "bookmark", "_id": "1135347", "_source": "2", "_is_read": true }
{ "type": "entry", "published": "2018-10-03 18:38-0700", "url": "http://tantek.com/2018/276/t1/implemented-delay-atom-feed-undo-strategy", "content": { "text": "At Homebrew Website Club @MozSF, just implemented a 10 minute delay on my Atom feed, as part of implementing an undo strategy per https://indieweb.org/undo.\n\nLet\u2019s see if it works (with this post).", "html": "At Homebrew Website Club <a class=\"h-cassis-username\" href=\"https://twitter.com/MozSF\">@MozSF</a>, just implemented a 10 minute delay on my Atom feed, as part of implementing an undo strategy per <a href=\"https://indieweb.org/undo\">https://indieweb.org/undo</a>.<br /><br />Let\u2019s see if it works (with this post)." }, "author": { "type": "card", "name": "Tantek \u00c7elik", "url": "http://tantek.com/", "photo": "https://aperture-media.p3k.io/tantek.com/acfddd7d8b2c8cf8aa163651432cc1ec7eb8ec2f881942dca963d305eeaaa6b8.jpg" }, "post-type": "note", "_id": "1129510", "_source": "1", "_is_read": true }
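The delayed-feed undo strategy in the entry above amounts to a publish-time filter: hold any post back from the feed until it is at least 10 minutes old, leaving a window to delete or edit it before readers fetch it. A minimal sketch in Python, assuming entries arrive as (published, entry) pairs with timezone-aware datetimes; the function name and structure are illustrative, not Tantek's actual implementation:

```python
from datetime import datetime, timedelta, timezone

DELAY = timedelta(minutes=10)

def delayed_entries(entries, now=None):
    """Return only entries published at least DELAY ago.

    `entries` is an iterable of (published, entry) pairs where
    `published` is a timezone-aware datetime. Withholding fresh
    posts gives the author a window to undo a post before any
    feed reader sees it.
    """
    now = now or datetime.now(timezone.utc)
    return [entry for published, entry in entries if now - published >= DELAY]
```

The same check works for any feed format; the Atom generator would simply skip entries the filter rejects when rendering the XML.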
{ "type": "entry", "published": "2018-10-03T20:28:05-04:00", "url": "https://martymcgui.re/2018/10/03/202805/", "featured": "https://media.martymcgui.re/f6/d0/b8/e8/c68ff2255c35600a450b0229039356d4222fcc80359e3078601da431.jpg", "category": [ "HWC", "IndieWeb", "Baltimore", "wrap-up" ], "syndication": [ "https://twitter.com/schmarty/status/1047645195888553984", "https://www.facebook.com/events/2227947090772675/permalink/2234654760101908/" ], "name": "HWC Baltimore 2018-10-03 Wrap-Up", "content": { "text": "Baltimore's first Homebrew Website Club of October met at the Digital Harbor Foundation Tech Center on October 3rd.\nHere are some notes from the \"broadcast\" portion of the meetup:\n\n jonathanprozzi.net \u2014 No updates since last time for his personal site. Was burned out after a lot of frustration with Gatsby + WordPress headless. Got to the point of feeling helpless and like he couldn't figure out how to progress. Started porting to Next.js and has some renewed energy because he is making progress and enjoying it.\n \n\n\n martymcgui.re \u2014 Went to IndieWebCamp NYC! Didn't do anything new for his site, but did manage to write a post full of project ideas that he came away with. Really interested in using free services like Glitch (for server-side processing), Neocities (static file hosting), and Cloudinary (make thumbnail images from those giant originals) to make IndieWeb building blocks that people without coding experience could combine to make their own sites. Currently thinking of building a Micropub Media Endpoint that handles dynamic image resizing to see how this approach works.\n \n\nOther discussion:\nIdeas from the IndieWebCamp NYC Organizer's meeting about \"messaging\" for Homebrew Website Clubs. The name is often confusing or off-putting for newcomers, and oddly self-selecting for those that \"get it\". 
Thinking about rebranding to \"Indie Web Meetup\" \u2013 evokes \"independent web\" and a meetup is more inviting than a \"club\" which might have membership requirements.\n Talked a lot about how many folks who come to HWC are WordPress users and how we might do better by them if we made it more clear that we are here to help them power up their WordPress sites.\n Or even go a step further and simply offer to get people started with or improve their existing personal site. If we can get people to the meetups, no matter the skill level, we can get them started: sign-up for micro.blog to get going right away, put a simple static HTML page up on glitch or GitHub, dive into WordPress, etc.\n First steps may be to think of it more like an open hours help desk. The \"price\" of getting help can be to write up all the steps followed, as a way of documenting steps for future folks who might need it. The goal would be to have folks come and actually leave with something accomplished.\n \n Also talked about difficulties with estimating the limitations of tools before you invest a bunch of time in learning to use it and understand it deeply. Jonathan's experience with Gatsby was a prime example \u2013 ultimately he needed it to do something it didn't support well, but it took a lot of frustration to find out that the issue was with the tool being a bad fit for the problem.\n \n\nLeft-to-right: martymcgui.re, jonathanprozzi.net\n This ended up being a very organizer-y meta meeting. It was nice to be able to check in with ourselves about this meetup and its future. We are excited to continue to evolve! 
We look forward to seeing you at our next meetup on Tuesday, October 16th at 7:30pm!", "html": "<p>Baltimore's <a href=\"https://indieweb.org/events/2018-10-03-homebrew-website-club\">first Homebrew Website Club of October</a> met at the <a href=\"https://www.digitalharbor.org/\">Digital Harbor Foundation Tech Center</a> on October 3rd.</p>\n<p>Here are some notes from the \"broadcast\" portion of the meetup:</p>\n<p>\n <a href=\"https://jonathanprozzi.net/\">jonathanprozzi.net</a> \u2014 No updates since last time for his personal site. Was burned out after a lot of frustration with Gatsby + WordPress headless. Got to the point of feeling helpless and like he couldn't figure out how to progress. Started porting to <a href=\"https://zeit.co/blog/next\">Next.js</a> and has some renewed energy because he is making progress and enjoying it.\n <br /></p>\n<p>\n <a href=\"https://martymcgui.re/\">martymcgui.re</a> \u2014 Went to <a href=\"https://indieweb.org/2018/NYC\">IndieWebCamp NYC</a>! Didn't do anything new for his site, but did manage to write a <a href=\"https://martymcgui.re/2018/10/02/155150/\">post full of project ideas</a> that he came away with. Really interested in using free services like <a href=\"https://glitch.com/\">Glitch</a> (for server-side processing), <a href=\"https://neocities.org/\">Neocities</a> (static file hosting), and <a href=\"https://cloudinary.com/\">Cloudinary</a> (make thumbnail images from those giant originals) to make IndieWeb building blocks that people without coding experience could combine to make their own sites. Currently thinking of building a <a href=\"https://indieweb.org/media_endpoint\">Micropub Media Endpoint</a> that handles dynamic image resizing to see how this approach works.\n <br /></p>\n<p>Other discussion:</p>\n<ul><li>Ideas from the IndieWebCamp NYC Organizer's meeting about \"messaging\" for Homebrew Website Clubs. 
The name is often confusing or off-putting for newcomers, and oddly self-selecting for those that \"get it\". Thinking about rebranding to \"Indie Web Meetup\" \u2013 evokes \"independent web\" and a meetup is more inviting than a \"club\" which might have membership requirements.</li>\n <li>Talked a lot about how many folks who come to HWC are WordPress users and how we might do better by them if we made it more clear that we are here to help them power up their WordPress sites.</li>\n <li>Or even go a step further and simply offer to get people started with or improve their existing personal site. If we can get people to the meetups, no matter the skill level, we can get them started: sign-up for micro.blog to get going right away, put a simple static HTML page up on glitch or GitHub, dive into WordPress, etc.</li>\n <li>First steps may be to think of it more like an open hours help desk. The \"price\" of getting help can be to write up all the steps followed, as a way of documenting steps for future folks who might need it. The goal would be to have folks come and actually leave with something accomplished.</li>\n <li>\n Also talked about difficulties with estimating the limitations of tools before you invest a bunch of time in learning to use it and understand it deeply. Jonathan's experience with Gatsby was a prime example \u2013 ultimately he needed it to do something it didn't support well, but it took a lot of frustration to find out that the issue was with the tool being a bad fit for the problem.\n <br /></li>\n</ul><img class=\"u-featured\" src=\"https://aperture-proxy.p3k.io/549b5a8e566e9fd2aba5e1297e097c82d7f4b969/68747470733a2f2f6d656469612e6d617274796d636775692e72652f66362f64302f62382f65382f63363866663232353563333536303061343530623032323930333933353664343232326663633830333539653330373836303164613433312e6a7067\" alt=\"\" />Left-to-right: martymcgui.re, jonathanprozzi.net<p>\n This ended up being a very organizer-y meta meeting. 
It was nice to be able to check in with ourselves about this meetup and its future. We are excited to continue to evolve! We look forward to seeing you at our next meetup on <b>Tuesday</b>, October 16th at <b>7:30pm</b>!\n <br /></p>" }, "author": { "type": "card", "name": "Marty McGuire", "url": false, "photo": "https://aperture-proxy.p3k.io/8275f85e3a389bd0ae69f209683436fc53d8bad9/68747470733a2f2f6d617274796d636775692e72652f696d616765732f6c6f676f2e6a7067" }, "post-type": "article", "_id": "1129031", "_source": "175", "_is_read": true }
{ "type": "entry", "published": "2018-10-03T14:11:13-04:00", "url": "https://martymcgui.re/2018/10/03/141113/", "category": [ "podcast", "IndieWeb", "this-week-indieweb-podcast" ], "audio": [ "https://aperture-proxy.p3k.io/d529248b21856d8a40bae127ebeb851f2df148f7/68747470733a2f2f6d656469612e6d617274796d636775692e72652f65652f66632f31612f62632f64313266313162386639346438653337376266356439353236306436626432353736363236633932313633643830303465363531316330312e6d7033" ], "syndication": [ "https://huffduffer.com/schmarty/505078", "https://twitter.com/schmarty/status/1047550387568484353", "https://www.facebook.com/marty.mcguire.54/posts/10212982635582572" ], "name": "This Week in the IndieWeb Audio Edition \u2022 September 22nd - 28th, 2018", "content": { "text": "Aging memes, a case against ActivityPub, and updates from IndieWebCamps! It\u2019s the audio edition for This Week in the IndieWeb for September 22nd - 28th, 2018.\n\nYou can find all of my audio editions and subscribe with your favorite podcast app here: martymcgui.re/podcasts/indieweb/.\n\nMusic from Aaron Parecki\u2019s 100DaysOfMusic project: Day 85 - Suit, Day 48 - Glitch, Day 49 - Floating, Day 9, and Day 11\n\nThanks to everyone in the IndieWeb chat for their feedback and suggestions. Please drop me a note if there are any changes you\u2019d like to see for this audio edition!", "html": "<p>Aging memes, a case against ActivityPub, and updates from IndieWebCamps! 
It\u2019s the audio edition for <a href=\"https://indieweb.org/this-week/2018-09-28.html\">This Week in the IndieWeb for September 22nd - 28th, 2018</a>.</p>\n\n<p>You can find all of my audio editions and subscribe with your favorite podcast app here: <a href=\"https://martymcgui.re/podcasts/indieweb/\">martymcgui.re/podcasts/indieweb/</a>.</p>\n\n<p>Music from <a href=\"https://aaronparecki.com/\">Aaron Parecki</a>\u2019s <a href=\"https://100.aaronparecki.com/\">100DaysOfMusic project</a>: <a href=\"https://aaronparecki.com/2017/03/15/14/day85\">Day 85 - Suit</a>, <a href=\"https://aaronparecki.com/2017/02/06/7/day48\">Day 48 - Glitch</a>, <a href=\"https://aaronparecki.com/2017/02/07/4/day49\">Day 49 - Floating</a>, <a href=\"https://aaronparecki.com/2016/12/29/21/day-9\">Day 9</a>, and <a href=\"https://aaronparecki.com/2016/12/31/15/\">Day 11</a></p>\n\n<p>Thanks to everyone in the <a href=\"https://chat.indieweb.org/\">IndieWeb chat</a> for their feedback and suggestions. Please drop me a note if there are any changes you\u2019d like to see for this audio edition!</p>" }, "author": { "type": "card", "name": "Marty McGuire", "url": false, "photo": "https://aperture-proxy.p3k.io/8275f85e3a389bd0ae69f209683436fc53d8bad9/68747470733a2f2f6d617274796d636775692e72652f696d616765732f6c6f676f2e6a7067" }, "post-type": "audio", "_id": "1126820", "_source": "175", "_is_read": true }
{ "type": "entry", "published": "2018-10-03T08:32:59-07:00", "url": "https://aaronparecki.com/2018/10/03/11/lobsters", "category": [ "webmention", "indieweb" ], "syndication": [ "https://twitter.com/aaronpk/status/1047509843492265984", "https://news.indieweb.org/en/aaronparecki.com/2018/10/03/11/lobsters" ], "content": { "text": "Well this is exciting, https://lobste.rs now supports sending webmentions! \ud83e\udd90 If someone submits one of your links, now you'll be immediately notified! Congrats on shipping! \ud83c\udf89 \n\nhttps://github.com/lobsters/lobsters/pull/535", "html": "Well this is exciting, <a href=\"https://lobste.rs\"><span>https://</span>lobste.rs</a> now supports sending webmentions! <a href=\"https://aaronparecki.com/emoji/%F0%9F%A6%90\">\ud83e\udd90</a> If someone submits one of your links, now you'll be immediately notified! Congrats on shipping! <a href=\"https://aaronparecki.com/emoji/%F0%9F%8E%89\">\ud83c\udf89</a> <br /><br /><a href=\"https://github.com/lobsters/lobsters/pull/535\"><span>https://</span>github.com/lobsters/lobsters/pull/535</a>" }, "author": { "type": "card", "name": "Aaron Parecki", "url": "https://aaronparecki.com/", "photo": "https://aperture-media.p3k.io/aaronparecki.com/2b8e1668dcd9cfa6a170b3724df740695f73a15c2a825962fd0a0967ec11ecdc.jpg" }, "post-type": "note", "_id": "1125692", "_source": "16", "_is_read": true }
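The webmention flow lobste.rs now implements has two steps: discover the target page's webmention endpoint (advertised via an HTTP `Link` header or an HTML `<link>`/`<a>` with `rel="webmention"`), then POST a form-encoded `source` and `target` to that endpoint. A minimal discovery sketch in Python covering only the HTML case; a full client must also check the `Link` header, which takes precedence. Names here are illustrative:

```python
from html.parser import HTMLParser
from urllib.parse import urljoin

class _EndpointFinder(HTMLParser):
    """Record the first <link> or <a> carrying rel="webmention"."""
    def __init__(self):
        super().__init__()
        self.endpoint = None

    def handle_starttag(self, tag, attrs):
        if self.endpoint is not None or tag not in ("link", "a"):
            return
        attrs = dict(attrs)
        rels = (attrs.get("rel") or "").split()
        if "webmention" in rels and attrs.get("href") is not None:
            self.endpoint = attrs["href"]

def discover_webmention_endpoint(html, base_url):
    """Return the page's webmention endpoint as an absolute URL, or None.

    Relative endpoint URLs are resolved against the page URL,
    as the spec requires.
    """
    finder = _EndpointFinder()
    finder.feed(html)
    if finder.endpoint is None:
        return None
    return urljoin(base_url, finder.endpoint)
```

Sending the mention itself is then a single form-encoded POST of `source` (your page linking to them) and `target` (their page) to the discovered endpoint, e.g. with `urllib.request`.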
{ "type": "entry", "author": { "name": "mail@petermolnar.net (Peter Molnar)", "url": "https://petermolnar.superfeedr.com/", "photo": null }, "url": "https://petermolnar.net/making-things-private/", "published": "2017-10-28T15:00:00+00:00", "content": { "html": "<p><strong>Have you ever reached the point when you started questioning why you\u2019re doing something? </strong> <strong>I have, but never before with my website.</strong></p>\n<img src=\"https://aperture-proxy.p3k.io/dd33a8d20f1887b86e18873edb03841546d83318/68747470733a2f2f70657465726d6f6c6e61722e6e65742f6d616b696e672d7468696e67732d707269766174652f776861745f69735f6d795f707572706f73652e676966\" title=\"what_is_my_purpose\" alt=\"\" />\nWhat is my purpose? The unfortunate, sentient robot Rick created for the sole purpose of passing the butter.\n<p>The precursor to petermolnar.net started existing for a very simple reason: I wanted an online home and I wanted to put \u201cinteresting\u201d things on it. It was in 1999, before chronological ordering took over the internet.<a href=\"https://petermolnar.superfeedr.com/#fn1\">1</a> Soon it got a blog-ish stream, then a portfolio for my photos, later tech howtos and long journal entries, but one thing was consistent for a very long time: the majority of the content was made by me.</p>\n<p>After encountering the indieweb movement<a href=\"https://petermolnar.superfeedr.com/#fn2\">2</a> I started developing the idea of centralising one\u2019s self. I wrote about it not once<a href=\"https://petermolnar.superfeedr.com/#fn3\">3</a> but twice<a href=\"https://petermolnar.superfeedr.com/#fn4\">4</a>, but going through with importing bookmarks and favourites had an unexpected outcome: they heavily outweighed my original content.</p>\n<p><strong>Do you know what happens when your own website doesn\u2019t have your own content? It starts feeling distant and unfamiliar. When you get here, you either leave the whole thing behind or reboot it somehow. 
I couldn\u2019t imagine not having a website, so I rebooted.</strong></p>\n<p>I kept long journal entries; notes, for replies to other websites and for short entries; photos; and tech articles - the rest needs to continue its life either archived privately or forgotten for good.</p>\n<h2>Outsourcing bookmarks</h2>\n<p>The indieweb wiki entry on <code>bookmark</code> says<a href=\"https://petermolnar.superfeedr.com/#fn5\">5</a>:</p>\n<blockquote>\n<p>Why should you post bookmark posts? Good question. People seem to have reasons for doing so. (please feel free to replace this rhetorical question with actual reasoning)</p>\n</blockquote>\n<p>Since that didn\u2019t help, I stepped back one step further: why do I bookmark?</p>\n<p>Usually it\u2019s because I found them interesting and/or useful. What I ended up having was a date of bookmarking, a title, a URL, and some badly applied tags. In this form, bookmarks on my site were completely useless: I didn\u2019t have the content that made them interesting nor a way to search them properly.</p>\n<p>To solve the first problem, the missing content, my initial idea was to leave everything in place and pull an extract of the content to have something to search in. It didn\u2019t go well. There\u2019s a plethora of js;dr<a href=\"https://petermolnar.superfeedr.com/#fn6\">6</a> sites these days, which don\u2019t, any more, offer a working, plain HTML output without executing JavaScript. For archival purposes, archive.org introduced an arcane file format, WARC<a href=\"https://petermolnar.superfeedr.com/#fn7\">7</a>: it saves everything about the site, but there is no way to simply open it for view. Saving pages with crawlers including media files generated a silly amount of data on my system and soon became unsustainable.</p>\n<p>Soon I realised I\u2019m trying to solve a problem others worked on for years, if not decades, so I decided to look into existing bookmark managers. 
I tried two paid services, Pinboard<a href=\"https://petermolnar.superfeedr.com/#fn8\">8</a> and Pocket<a href=\"https://petermolnar.superfeedr.com/#fn9\">9</a> first. Pocket would be unbeatable, even though it\u2019s not self hosted, if the article extracts they make were available through their API. They are not. Unfortunately Pinboard wasn\u2019t giving me much over my existing crawler solutions.</p>\n<p><strong>The winner was Wallabag</strong><a href=\"https://petermolnar.superfeedr.com/#fn10\">10</a>: it\u2019s self-hosted, which is great, painful to install and set up, which is not, but it\u2019s completely self-sustaining, runs on SQLite and good enough for me.</p>\n<p>There was only one problem: none of these offered archival copies of images, and some of the bookmarks I made were solely for the photos on the sites. I found a format, called MHTML<a href=\"https://petermolnar.superfeedr.com/#fn11\">11</a>, also known as <code>.eml</code>, which is perfect for single-file archives of HTML pages: it inlines all images as base64 encoded data.</p>\n<p>However, <strong>no browser offers a save-as-mhtml in headless mode, so to get your archives, you\u2019ll need to revisit your bookmarks. All of them.</strong> I enabled<a href=\"https://petermolnar.superfeedr.com/#fn12\">12</a> save as MHTML in Chrome (Firefox doesn\u2019t know this format), installed the Wayback Machine<a href=\"https://petermolnar.superfeedr.com/#fn13\">13</a> extension and saved GBs of websites. I also added them into Wallabag. 
It\u2019s an interesting, though very long journey, but you\u2019ll rediscover a lot of things for sure.</p>\n<p>When this was done, I dropped thousands of bookmark entries from my site.</p>\n<p><strong>If I do want to share a site, I\u2019ll write a note about it, but bookmarks, without context, belong to my archives.</strong></p>\n<h2>(Some) microblog imports should never have happened</h2>\n<p>I had iterations of imports, so after bookmarks it seemed reasonable to check what else may simply be noise on my site.</p>\n<p><em>Back in the days</em> people mostly wrote much lengthier entries: journal-like diary pages, thoughts, and it was, nearly always, anonymous. It all happened under pseudonyms.</p>\n<p>Parallel to this there were the oldschool instant messengers, like ICQ and MSN Messenger. In many cases, though you all had handles, or numbers, or usernames, you knew exactly who you were talking to. Most of these programs had a feature called status message - looking back at it they may have been precursors to microblogging, but there was a huge difference: they were ephemeral.</p>\n<p>With the rise of Twitter and Facebook status message also came (forced?) real identities, and tools letting us post from anywhere, within seconds. The content that earlier landed in status messages - <em>XY is listening to\u2026.</em>, <em>Feels like\u2026</em>, etc - suddenly became readable at any time, sometimes to anyone.</p>\n<p>I had content like this and I am, as well, guilty of posting short, meaningless, out-of-context entries. Imported burps of private life; useless shares of music pointing to long dead links; one-liner jokes, linking to bash.org; tiny replies and notes that should have been sent privately, either via email or some other mechanism.</p>\n<p><strong>Some things are meant to be ephemeral</strong>, no matter how loud the librarian is screaming deep inside me. 
<strong>Others belong in logs, and probably not on the public internet</strong>.</p>\n<p>I deleted most of them and placed a <code>HTTP 410 Gone</code> message for their URLs.</p>\n<h2>Reposts are messy</h2>\n<p>For a few months I\u2019ve been silently populating a category that I didn\u2019t promote openly: <code>favorite</code>s. At that page, I basically had a lot of <code>repost</code>s: images and galleries, with complete content, but with big fat URLs over them, linking to the original content.</p>\n<p>By using a silo you usually give permission to the silo to use your work there. Due to the effects of <code>vote</code>s and <code>like</code>s (see later) you do, in fact, boost the visibility of the artist. <em>Note that usually these permissions are much broader than you imagine: a lawyer reworded the policy of Instagram to let everyone understand, that by using the service, you allow them to do more or less anything they want to with your work<a href=\"https://petermolnar.superfeedr.com/#fn14\">14</a></em>.</p>\n<p>But what if you take content out of a silo? <strong>The majority of images and works are not licensed in any special way, meaning you need to assume full copyright protection</strong>. Copyright prohibits publishing works without the author\u2019s explicit consent, <strong>so when you repost</strong> something that doesn\u2019t indicate it\u2019s OK with it - Creative Commons, Public Domain, etc -, <strong>what you do is illegal</strong>.</p>\n<p>Also: for me, it feels like reposts, without notifying the creator, even though the licence allows it, are somewhat unfair - which is exactly what I was doing with these. 
Webmentions<a href=\"https://petermolnar.superfeedr.com/#fn15\">15</a> would like to address this by having an option to send notifications and delete requests, but silos are not there yet to send or to receive any of these.</p>\n<p><strong>There is a very simple solution: avoid reposting anything without being sure its licence allows it.</strong> Save it in a private, offline copy, if you really want to. Cweiske had a nice idea about adding source URLs into JPG XMP metadata <a href=\"https://petermolnar.superfeedr.com/#fn16\">16</a>, so you know where it\u2019s from.</p>\n<h2>Silo reactions only make sense within the silo</h2>\n<p>When I started writing this entry, I differentiated 3 non-comment reaction types in silos:</p>\n<p>A <code>reaction</code> <strong>is a social interaction, essentially a templated comment</strong>. \u201cWell done\u201d, \u201cI disagree\u201d, \u201cbuu\u201d, \u201cacknowledged\u201d, \u2764, \ud83d\udc4d, \u2605, and so on. <em>I asked my wife what she thinks about likes, why she uses them, and I got an unexpected answer: because, unlike with regular, text comments, others will not be able to react to it - so no trolling or abuse is possible.</em></p>\n<p>A <code>vote</code> <strong>has direct effect on ranking</strong>: think reddit up- and downvotes. Ideally it\u2019s anonymous: list of voters should not be displayed, not even for the owner of the entry.</p>\n<p>A <code>bookmark</code> <strong>is solely for one\u2019s self: save this entry because I value it and I want to be able to find it again</strong>. They should have no social implications or boosting effect at all.</p>\n<p>In many of the silos these are mixed - a Twitter fav used to range from an appreciation to a sarcastic meh<a href=\"https://petermolnar.superfeedr.com/#fn17\">17</a>. 
With a range of reactions available this may get simpler to differentiate, but a <code>like</code> in Facebook still counts as both a <code>vote</code> and a <code>reaction</code>.</p>\n<p>I thought a lot about reactions and I came to the conclusion that I should not have them on my site. The first problem is they will be linking into a walled garden, without context, maybe pointing at a private(ish) post, available to a limited audience. <strong>If the content is that good, bookmark it as well. If it\u2019s a reaction for the sake of being social, it\u2019s ephemeral.</strong></p>\n<h2>Conclusions</h2>\n<p>Don\u2019t let your ideas take over the things you enjoy. Some ideas can be beneficial, others are passing experiments.</p>\n<p>There\u2019s a lot of data worth collecting: scrobbles, location data, etc., but these are logs, and most of them, in my opinion, should be private. If I\u2019m getting paranoid about how much services know about me, I shouldn\u2019t publish the same information publicly either.</p>\n<p>And finally: keep things simple. I\u2019m finding myself throwing out my filter coffee machine and replacing it with a pot that has a paper filter slot - it makes an even better coffee and I have to care about one less electrical thing. 
The same should apply for my web presence: the simpler is usually better.</p>\n\n\n<ol><li><p><a href=\"https://stackingthebricks.com/how-blogs-broke-the-web/\">https://stackingthebricks.com/how-blogs-broke-the-web/</a><a href=\"https://petermolnar.superfeedr.com/#fnref1\">\u21a9</a></p></li>\n<li><p><a href=\"https://indieweb.org/\">https://indieweb.org/</a><a href=\"https://petermolnar.superfeedr.com/#fnref2\">\u21a9</a></p></li>\n<li><p><a href=\"https://petermolnar.net/indieweb-decentralize-web-centralizing-ourselves/\">https://petermolnar.net/indieweb-decentralize-web-centralizing-ourselves/</a><a href=\"https://petermolnar.superfeedr.com/#fnref3\">\u21a9</a></p></li>\n<li><p><a href=\"https://petermolnar.net/personal-website-as-archiving-vault/\">https://petermolnar.net/personal-website-as-archiving-vault/</a><a href=\"https://petermolnar.superfeedr.com/#fnref4\">\u21a9</a></p></li>\n<li><p><a href=\"https://indieweb.org/bookmark\">https://indieweb.org/bookmark</a><a href=\"https://petermolnar.superfeedr.com/#fnref5\">\u21a9</a></p></li>\n<li><p><a href=\"http://tantek.com/2015/069/t1/js-dr-javascript-required-dead\">http://tantek.com/2015/069/t1/js-dr-javascript-required-dead</a><a href=\"https://petermolnar.superfeedr.com/#fnref6\">\u21a9</a></p></li>\n<li><p><a href=\"http://www.archiveteam.org/index.php?title=Wget_with_WARC_output\">http://www.archiveteam.org/index.php?title=Wget_with_WARC_output</a><a href=\"https://petermolnar.superfeedr.com/#fnref7\">\u21a9</a></p></li>\n<li><p><a href=\"http://pinboard.in/\">http://pinboard.in/</a><a href=\"https://petermolnar.superfeedr.com/#fnref8\">\u21a9</a></p></li>\n<li><p><a href=\"http://getpocket.com/\">http://getpocket.com/</a><a href=\"https://petermolnar.superfeedr.com/#fnref9\">\u21a9</a></p></li>\n<li><p><a href=\"https://wallabag.org/en\">https://wallabag.org/en</a><a href=\"https://petermolnar.superfeedr.com/#fnref10\">\u21a9</a></p></li>\n<li><p><a 
href=\"https://en.wikipedia.org/wiki/MHTML\">https://en.wikipedia.org/wiki/MHTML</a><a href=\"https://petermolnar.superfeedr.com/#fnref11\">\u21a9</a></p></li>\n<li><p><a href=\"https://superuser.com/a/445988\">https://superuser.com/a/445988</a><a href=\"https://petermolnar.superfeedr.com/#fnref12\">\u21a9</a></p></li>\n<li><p><a href=\"https://chrome.google.com/webstore/detail/waybackmachine/gofnhkhaadkoabedkchceagnjjicaihi\">https://chrome.google.com/webstore/detail/waybackmachine/gofnhkhaadkoabedkchceagnjjicaihi</a><a href=\"https://petermolnar.superfeedr.com/#fnref13\">\u21a9</a></p></li>\n<li><p><a href=\"https://qz.com/878790/a-lawyer-rewrote-instagrams-terms-of-service-for-kids-now-you-can-understand-all-of-the-private-data-you-and-your-teen-are-giving-up-to-social-media/\">https://qz.com/878790/a-lawyer-rewrote-instagrams-terms-of-service-for-kids-now-you-can-understand-all-of-the-private-data-you-and-your-teen-are-giving-up-to-social-media/</a><a href=\"https://petermolnar.superfeedr.com/#fnref14\">\u21a9</a></p></li>\n<li><p><a href=\"https://webmention.net/draft/#sending-webmentions-for-deleted-posts\">https://webmention.net/draft/#sending-webmentions-for-deleted-posts</a><a href=\"https://petermolnar.superfeedr.com/#fnref15\">\u21a9</a></p></li>\n<li><p><a href=\"http://cweiske.de/tagebuch/exif-url.htm\">http://cweiske.de/tagebuch/exif-url.htm</a><a href=\"https://petermolnar.superfeedr.com/#fnref16\">\u21a9</a></p></li>\n<li><p><a href=\"http://time.com/4336/a-simple-guide-to-twitter-favs/\">http://time.com/4336/a-simple-guide-to-twitter-favs/</a><a href=\"https://petermolnar.superfeedr.com/#fnref17\">\u21a9</a></p></li>\n</ol>", "text": "Have you ever reached the point when you started questioning why you\u2019re doing something? I have, but never before with my website.\n\nWhat is my purpose? 
The unfortunate, sentient robot Rick created for the sole purpose of passing the butter.\nThe precursor to petermolnar.net started existing for a very simple reason: I wanted an online home and I wanted to put \u201cinteresting\u201d things on it. It was in 1999, before chronological ordering took over the internet.1 Soon it got a blog-ish stream, then a portfolio for my photos, later tech howtos and long journal entries, but one thing was consistent for a very long time: the majority of the content was made by me.\nAfter encountering the indieweb movement2 I started developing the idea of centralising one\u2019s self. I wrote about it not once3 but twice4, but going through with importing bookmarks and favourites had an unexpected outcome: they heavily outweighed my original content.\nDo you know what happens when your own website doesn\u2019t have your own content? It starts feeling distant and unfamiliar. When you get there, you either leave the whole thing behind or reboot it somehow. I couldn\u2019t imagine not having a website, so I rebooted.\nI kept long journal entries; notes, for replies to other websites and for short entries; photos; and tech articles - the rest needs to continue its life either archived privately or forgotten for good.\nOutsourcing bookmarks\nThe indieweb wiki entry on bookmark says5:\n\nWhy should you post bookmark posts? Good question. People seem to have reasons for doing so. (please feel free to replace this rhetorical question with actual reasoning)\n\nSince that didn\u2019t help, I stepped back one more step: why do I bookmark?\nUsually it\u2019s because I found them interesting and/or useful. What I ended up having was a date of bookmarking, a title, a URL, and some badly applied tags. 
In this form, bookmarks on my site were completely useless: I didn\u2019t have the content that made them interesting nor a way to search them properly.\nTo solve the first problem, the missing content, my initial idea was to leave everything in place and pull an extract of the content to have something to search in. It didn\u2019t go well. There\u2019s a plethora of js;dr6 sites these days, which no longer offer a working, plain HTML output without executing JavaScript. For archival purposes, archive.org introduced an arcane file format, WARC7: it saves everything about the site, but there is no way to simply open it for viewing. Saving pages, including media files, with crawlers generated a silly amount of data on my system and soon became unsustainable.\nSoon I realised I was trying to solve a problem others had worked on for years, if not decades, so I decided to look into existing bookmark managers. I tried two paid services, Pinboard8 and Pocket9, first. Pocket would be unbeatable, even though it\u2019s not self hosted, if the article extracts they make were available through their API. They are not. Unfortunately Pinboard wasn\u2019t giving me much over my existing crawler solutions.\nThe winner was Wallabag10: it\u2019s self-hosted, which is great, painful to install and set up, which is not, but it\u2019s completely self-sustaining, runs on SQLite, and is good enough for me.\nThere was only one problem: none of these offered archival copies of images, and some of the bookmarks I made were solely for the photos on the sites. I found a format, called MHTML11, also known as .eml, which is perfect for single-file archives of HTML pages: it inlines all images as base64 encoded data.\nHowever, no browser offers save-as-MHTML in headless mode, so to get your archives, you\u2019ll need to revisit your bookmarks. All of them. 
I enabled12 save as MHTML in Chrome (Firefox doesn\u2019t know this format), installed the Wayback Machine13 extension and saved GBs of websites. I also added them into Wallabag. It was an interesting, though very long, journey, but you\u2019ll rediscover a lot of things for sure.\nWhen this was done, I dropped thousands of bookmark entries from my site.\nIf I do want to share a site, I\u2019ll write a note about it, but bookmarks, without context, belong to my archives.\n(Some) microblog imports should never have happened\nI had iterations of imports, so after bookmarks it seemed reasonable to check what else may simply be noise on my site.\nBack in the day, people mostly wrote much lengthier entries: journal-like diary pages, thoughts, and it was, nearly always, anonymous. It all happened under pseudonyms.\nParallel to this there were the oldschool instant messengers, like ICQ and MSN Messenger. In many cases, though you all had handles, or numbers, or usernames, you knew exactly who you were talking to. Most of these programs had a feature called status message - looking back at it they may have been precursors to microblogging, but there was a huge difference: they were ephemeral.\nWith the rise of Twitter and Facebook, status messages also came with (forced?) real identities, and tools letting us post from anywhere, within seconds. The content that earlier landed in status messages - XY is listening to\u2026., Feels like\u2026, etc - suddenly became readable at any time, sometimes to anyone.\nI had content like this and I am, as well, guilty of posting short, meaningless, out-of-context entries. Imported burps of private life; useless shares of music pointing to long dead links; one-liner jokes, linking to bash.org; tiny replies and notes that should have been sent privately, either via email or some other mechanism.\nSome things are meant to be ephemeral, no matter how loud the librarian is screaming deep inside me. 
Others belong in logs, and probably not on the public internet.\nI deleted most of them and placed an HTTP 410 Gone message at their URLs.\nReposts are messy\nFor a few months I\u2019d been silently populating a category that I didn\u2019t promote openly: favorites. On that page, I basically had a lot of reposts: images and galleries, with complete content, but with big fat URLs over them, linking to the original content.\nBy using a silo you usually give the silo permission to use your work there. Due to the effects of votes and likes (see later) you do, in fact, boost the visibility of the artist. Note that usually these permissions are much broader than you imagine: a lawyer reworded the policy of Instagram to let everyone understand that by using the service, you allow them to do more or less anything they want to with your work14.\nBut what if you take content out of a silo? The majority of images and works are not licensed in any special way, meaning you need to assume full copyright protection. Copyright prohibits publishing works without the author\u2019s explicit consent, so when you repost something that doesn\u2019t indicate it\u2019s OK with it - Creative Commons, Public Domain, etc. -, what you do is illegal.\nAlso: for me, reposts without notifying the creator, even when the licence allows them, feel somewhat unfair - which is exactly what I was doing with these. Webmentions15 would like to address this by having an option to send notifications and delete requests, but silos are not there yet to send or to receive any of these.\nThere is a very simple solution: avoid reposting anything without being sure its licence allows it. Save a private, offline copy, if you really want to. 
Cweiske had a nice idea about adding source URLs into JPG XMP metadata16, so you know where it\u2019s from.\nSilo reactions only make sense within the silo\nWhen I started writing this entry, I differentiated three non-comment reaction types in silos:\nA reaction is a social interaction, essentially a templated comment. \u201cWell done\u201d, \u201cI disagree\u201d, \u201cbuu\u201d, \u201cacknowledged\u201d, \u2764, \ud83d\udc4d, \u2605, and so on. I asked my wife what she thinks about likes, why she uses them, and I got an unexpected answer: because, unlike with regular, text comments, others will not be able to react to it - so no trolling or abuse is possible.\nA vote has a direct effect on ranking: think reddit up- and downvotes. Ideally it\u2019s anonymous: the list of voters should not be displayed, not even to the owner of the entry.\nA bookmark is solely for one\u2019s self: save this entry because I value it and I want to be able to find it again. They should have no social implications or boosting effect at all.\nIn many of the silos these are mixed - a Twitter fav used to range from an appreciation to a sarcastic meh17. With a range of reactions available this may get simpler to differentiate, but a like on Facebook still counts as both a vote and a reaction.\nI thought a lot about reactions and I came to the conclusion that I should not have them on my site. The first problem is they will be linking into a walled garden, without context, maybe pointing at a private(ish) post, available to a limited audience. If the content is that good, bookmark it as well. If it\u2019s a reaction for the sake of being social, it\u2019s ephemeral.\nConclusions\nDon\u2019t let your ideas take over the things you enjoy. Some ideas can be beneficial, others are passing experiments.\nThere\u2019s a lot of data worth collecting: scrobbles, location data, etc., but these are logs, and most of them, in my opinion, should be private. 
If I\u2019m getting paranoid about how much services know about me, I shouldn\u2019t publish the same information publicly either.\nAnd finally: keep things simple. I\u2019m finding myself throwing out my filter coffee machine and replacing it with a pot that has a paper filter slot - it makes an even better coffee and I have to care about one less electrical thing. The same should apply for my web presence: the simpler is usually better.\n\n\nhttps://stackingthebricks.com/how-blogs-broke-the-web/\u21a9\nhttps://indieweb.org/\u21a9\nhttps://petermolnar.net/indieweb-decentralize-web-centralizing-ourselves/\u21a9\nhttps://petermolnar.net/personal-website-as-archiving-vault/\u21a9\nhttps://indieweb.org/bookmark\u21a9\nhttp://tantek.com/2015/069/t1/js-dr-javascript-required-dead\u21a9\nhttp://www.archiveteam.org/index.php?title=Wget_with_WARC_output\u21a9\nhttp://pinboard.in/\u21a9\nhttp://getpocket.com/\u21a9\nhttps://wallabag.org/en\u21a9\nhttps://en.wikipedia.org/wiki/MHTML\u21a9\nhttps://superuser.com/a/445988\u21a9\nhttps://chrome.google.com/webstore/detail/waybackmachine/gofnhkhaadkoabedkchceagnjjicaihi\u21a9\nhttps://qz.com/878790/a-lawyer-rewrote-instagrams-terms-of-service-for-kids-now-you-can-understand-all-of-the-private-data-you-and-your-teen-are-giving-up-to-social-media/\u21a9\nhttps://webmention.net/draft/#sending-webmentions-for-deleted-posts\u21a9\nhttp://cweiske.de/tagebuch/exif-url.htm\u21a9\nhttp://time.com/4336/a-simple-guide-to-twitter-favs/\u21a9" }, "name": "Content, bloat, privacy, archives", "post-type": "article", "_id": "1124829", "_source": "268", "_is_read": true }
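The entry above mentions an abandoned crawl-and-extract approach: pull a searchable text extract from each bookmarked page. A minimal sketch of that idea, using only the Python standard library, is shown below. This is a hypothetical illustration, not the author's actual tooling (which ended up being Wallabag); the `Extractor` class and `extract` helper are invented for this example.

```python
from html.parser import HTMLParser


class Extractor(HTMLParser):
    """Hypothetical helper: collects the <title> and the visible text
    of a page, skipping script/style content, so bookmarks can be
    indexed for full-text search."""

    SKIP = {"script", "style"}
    VOID = {"br", "img", "meta", "link", "hr", "input"}  # never pushed

    def __init__(self):
        super().__init__()
        self.title = ""
        self.chunks = []
        self._stack = []  # open-tag stack, to know where data belongs

    def handle_starttag(self, tag, attrs):
        if tag not in self.VOID:
            self._stack.append(tag)

    def handle_endtag(self, tag):
        if self._stack and self._stack[-1] == tag:
            self._stack.pop()

    def handle_data(self, data):
        top = self._stack[-1] if self._stack else None
        if top == "title":
            self.title += data.strip()
        elif top not in self.SKIP:
            text = data.strip()
            if text:
                self.chunks.append(text)


def extract(html):
    """Return (title, searchable_text) for one page."""
    parser = Extractor()
    parser.feed(html)
    return parser.title, " ".join(parser.chunks)
```

A real crawler would still face the js;dr problem the entry describes: pages that render no usable HTML without executing JavaScript yield an empty extract here.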
{ "type": "entry", "author": { "name": "mail@petermolnar.net (Peter Molnar)", "url": "https://petermolnar.superfeedr.com/", "photo": null }, "url": "https://petermolnar.net/linkedin-public-settings-ignored/", "published": "2018-01-14T12:00:00+00:00", "content": { "html": "<p>A few days ago, on the #indieweb Freenode channel<a href=\"https://petermolnar.superfeedr.com/#fn1\">1</a> one of the users asked if we knew an indieweb-friendly way of getting data out of LinkedIn. I wasn\u2019t paying attention to any recent news related to LinkedIn, though I\u2019ve heard a few things, such as they are struggling to prevent data scraping: the note mentioned that they believe it\u2019s a problem that employers keep an eye on changes in LinkedIn profiles via 3rd party. This, indeed, can be an issue, but there are ways to manage this within LinkedIn: your public profile settings<a href=\"https://petermolnar.superfeedr.com/#fn2\">2</a>.</p>\n<p>In my case, this was set to visible to everyone for years, and by the time I had to set it up (again: years), it was working as intended. But a few days ago, for my surprise, visiting my profile while logged out resulted in this:</p>\n<img src=\"https://aperture-proxy.p3k.io/1978efe54035ccfd218296557b7da601149aa18f/68747470733a2f2f70657465726d6f6c6e61722e6e65742f6c696e6b6564696e2d7075626c69632d73657474696e67732d69676e6f7265642f6c696e6b6564696e2d7075626c69632d70726f66696c652d6973737565732d6175746877616c6c2e706e67\" title=\"linkedin-public-profile-issues-authwall\" alt=\"\" />\nLinkedIn showing a paywall-like \u2018authwall\u2019 for profiles set explicitly to public for everyone\n<p>and this:</p>\n<pre><code>$ wget -O- https://www.linkedin.com/in/petermolnareu\n--2018-01-14 10:26:12-- https://www.linkedin.com/in/petermolnareu\nResolving www.linkedin.com (www.linkedin.com)... 91.225.248.129, 2620:109:c00c:104::b93f:9001\nConnecting to www.linkedin.com (www.linkedin.com)|91.225.248.129|:443... 
connected.\nHTTP request sent, awaiting response... 999 Request denied\n2018-01-14 10:26:12 ERROR 999: Request denied.</code></pre>\n<p>or this:</p>\n<pre><code>$ curl https://www.linkedin.com/in/petermolnareu\n<html><head>\n<script type=\"text/javascript\">\nwindow.onload = function() {\n // Parse the tracking code from cookies.\n var trk = \"bf\";\n var trkInfo = \"bf\";\n var cookies = document.cookie.split(\"; \");\n for (var i = 0; i < cookies.length; ++i) {\n if ((cookies[i].indexOf(\"trkCode=\") == 0) && (cookies[i].length > 8)) {\n trk = cookies[i].substring(8);\n }\n else if ((cookies[i].indexOf(\"trkInfo=\") == 0) && (cookies[i].length > 8)) {\n trkInfo = cookies[i].substring(8);\n }\n }\n\n if (window.location.protocol == \"http:\") {\n // If \"sl\" cookie is set, redirect to https.\n for (var i = 0; i < cookies.length; ++i) {\n if ((cookies[i].indexOf(\"sl=\") == 0) && (cookies[i].length > 3)) {\n window.location.href = \"https:\" + window.location.href.substring(window.location.protocol.length);\n return;\n }\n }\n }\n\n // Get the new domain. For international domains such as\n // fr.linkedin.com, we convert it to www.linkedin.com\n var domain = \"www.linkedin.com\";\n if (domain != location.host) {\n var subdomainIndex = location.host.indexOf(\".linkedin\");\n if (subdomainIndex != -1) {\n domain = \"www\" + location.host.substring(subdomainIndex);\n }\n }\n\n window.location.href = \"https://\" + domain + \"/authwall?trk=\" + trk + \"&trkInfo=\" + trkInfo +\n \"&originalReferer=\" + document.referrer.substr(0, 200) +\n \"&sessionRedirect=\" + encodeURIComponent(window.location.href);\n}\n</script>\n</head></html></code></pre>\nSo I started digging. According to the LinkedIn FAQ<a href=\"https://petermolnar.superfeedr.com/#fn3\">3</a> there is a page where you can set your profile\u2019s public visibility. 
Those settings, for me, were still set to:\n<img src=\"https://aperture-proxy.p3k.io/7fc1cefd271d676ae70f9dbb4c79d45ace61788f/68747470733a2f2f70657465726d6f6c6e61722e6e65742f6c696e6b6564696e2d7075626c69632d73657474696e67732d69676e6f7265642f6c696e6b6564696e2d7075626c69632d70726f66696c652d6973737565732d73657474696e67732e706e67\" title=\"linkedin-public-profile-issues-settings\" alt=\"\" />\nLinkedIn public profile settings\n<p>Despite the settings, there is no public profile for logged out users.</p>\n<p>I\u2019d like to understand what is going on, because so far, this looks like a fat lie from LinkedIn. Hopefully just a bug.</p>\n<h2>UPDATE</h2>\n<p><del>I tried setting referrers and user agents, used different IP addresses, still nothing.</del> I can\u2019t type today and managed to mistype <code>https://google.com</code> - the referrer ended up as <code>https:/google.com</code>. So, following the notes on HN, setting a referrer to Google sometimes works. After a few failures it will lock you out again, referrer or not. 
This is even uglier if it was a proper authwall for everyone.</p>\n<pre><code>curl 'https://www.linkedin.com/in/petermolnareu' \\\n-e 'https://google.com/' \\\n-H 'accept-encoding: text' -H \\\n'accept-language: en-US,en;q=0.9,' \\\n-H 'user-agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.132 Safari/537.36'</code></pre>\n<pre><code><!DOCTYPE html>...</code></pre>\n\n\n<ol><li><p><a href=\"https://chat.indieweb.org/\">https://chat.indieweb.org</a><a href=\"https://petermolnar.superfeedr.com/#fnref1\">\u21a9</a></p></li>\n<li><p><a href=\"https://www.linkedin.com/public-profile/settings\">https://www.linkedin.com/public-profile/settings</a><a href=\"https://petermolnar.superfeedr.com/#fnref2\">\u21a9</a></p></li>\n<li><p><a href=\"https://www.linkedin.com/help/linkedin/answer/83?query=public\">https://www.linkedin.com/help/linkedin/answer/83?query=public</a><a href=\"https://petermolnar.superfeedr.com/#fnref3\">\u21a9</a></p></li>\n</ol>", "text": "A few days ago, on the #indieweb Freenode channel1 one of the users asked if we knew an indieweb-friendly way of getting data out of LinkedIn. I wasn\u2019t paying attention to any recent news related to LinkedIn, though I\u2019ve heard a few things, such as they are struggling to prevent data scraping: the note mentioned that they believe it\u2019s a problem that employers keep an eye on changes in LinkedIn profiles via 3rd party. This, indeed, can be an issue, but there are ways to manage this within LinkedIn: your public profile settings2.\nIn my case, this was set to visible to everyone for years, and by the time I had to set it up (again: years), it was working as intended. 
But a few days ago, for my surprise, visiting my profile while logged out resulted in this:\n\nLinkedIn showing a paywall-like \u2018authwall\u2019 for profiles set explicitly to public for everyone\nand this:\n$ wget -O- https://www.linkedin.com/in/petermolnareu\n--2018-01-14 10:26:12-- https://www.linkedin.com/in/petermolnareu\nResolving www.linkedin.com (www.linkedin.com)... 91.225.248.129, 2620:109:c00c:104::b93f:9001\nConnecting to www.linkedin.com (www.linkedin.com)|91.225.248.129|:443... connected.\nHTTP request sent, awaiting response... 999 Request denied\n2018-01-14 10:26:12 ERROR 999: Request denied.\nor this:\n$ curl https://www.linkedin.com/in/petermolnareu\n<html><head>\n<script type=\"text/javascript\">\nwindow.onload = function() {\n // Parse the tracking code from cookies.\n var trk = \"bf\";\n var trkInfo = \"bf\";\n var cookies = document.cookie.split(\"; \");\n for (var i = 0; i < cookies.length; ++i) {\n if ((cookies[i].indexOf(\"trkCode=\") == 0) && (cookies[i].length > 8)) {\n trk = cookies[i].substring(8);\n }\n else if ((cookies[i].indexOf(\"trkInfo=\") == 0) && (cookies[i].length > 8)) {\n trkInfo = cookies[i].substring(8);\n }\n }\n\n if (window.location.protocol == \"http:\") {\n // If \"sl\" cookie is set, redirect to https.\n for (var i = 0; i < cookies.length; ++i) {\n if ((cookies[i].indexOf(\"sl=\") == 0) && (cookies[i].length > 3)) {\n window.location.href = \"https:\" + window.location.href.substring(window.location.protocol.length);\n return;\n }\n }\n }\n\n // Get the new domain. 
For international domains such as\n // fr.linkedin.com, we convert it to www.linkedin.com\n var domain = \"www.linkedin.com\";\n if (domain != location.host) {\n var subdomainIndex = location.host.indexOf(\".linkedin\");\n if (subdomainIndex != -1) {\n domain = \"www\" + location.host.substring(subdomainIndex);\n }\n }\n\n window.location.href = \"https://\" + domain + \"/authwall?trk=\" + trk + \"&trkInfo=\" + trkInfo +\n \"&originalReferer=\" + document.referrer.substr(0, 200) +\n \"&sessionRedirect=\" + encodeURIComponent(window.location.href);\n}\n</script>\n</head></html>\nSo I started digging. According to the LinkedIn FAQ3 there is a page where you can set your profile\u2019s public visibility. Those settings, for me, were still set to:\n\nLinkedIn public profile settings\nDespite the settings, there is no public profile for logged out users.\nI\u2019d like to understand what it going on, because so far, this looks like a fat lie from LinkedIn. Hopefully just a bug.\nUPDATE\nI tried setting referrers and user agents, used different IP addresses, still nothing. I can\u2019t type today and managed to mistype https://google.com - the referrer ended up as https:/google.com. So, following the notes on HN, setting a referrer to Google sometimes works. After a few failures it will lock you out again, referrer or not. This is even uglier if it was a proper authwall for everyone.\ncurl 'https://www.linkedin.com/in/petermolnareu' \\\n-e 'https://google.com/' \\\n-H 'accept-encoding: text' -H \\\n'accept-language: en-US,en;q=0.9,' \\\n-H 'user-agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.132 Safari/537.36'\n<!DOCTYPE html>...\n\n\nhttps://chat.indieweb.org\u21a9\nhttps://www.linkedin.com/public-profile/settings\u21a9\nhttps://www.linkedin.com/help/linkedin/answer/83?query=public\u21a9" }, "name": "LinkedIn is ignoring user settings", "post-type": "article", "_id": "1124853", "_source": "268", "_is_read": true }
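The curl command in the entry above sets a Google `Referer` and a browser `User-Agent` to get past the authwall. The same request can be sketched with Python's `urllib`; the snippet below only builds the request (header values copied from the post) and leaves the actual fetch commented out, since LinkedIn's 2018-era behaviour has almost certainly changed and a live call may be blocked either way. The `build_profile_request` helper is invented for this illustration.

```python
import urllib.request


def build_profile_request(url):
    """Build a GET request mimicking the post's curl invocation:
    Google referrer plus a desktop-Chrome user agent."""
    return urllib.request.Request(url, headers={
        "Referer": "https://google.com/",
        "Accept-Language": "en-US,en;q=0.9",
        "User-Agent": ("Mozilla/5.0 (X11; Linux x86_64) "
                       "AppleWebKit/537.36 (KHTML, like Gecko) "
                       "Chrome/63.0.3239.132 Safari/537.36"),
    })


req = build_profile_request("https://www.linkedin.com/in/petermolnareu")
# urllib.request.urlopen(req)  # may return profile HTML, an authwall
#                              # redirect, or HTTP 999, depending on mood
```

As the entry notes, even with these headers the trick only worked intermittently: after a few requests the authwall returned regardless of the referrer.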
{ "type": "entry", "author": { "name": "mail@petermolnar.net (Peter Molnar)", "url": "https://petermolnar.superfeedr.com/", "photo": null }, "url": "https://petermolnar.net/internet-emotional-core/", "published": "2018-03-25T22:20:00+01:00", "content": { "html": "<p>There is a video out there, titled The Fall of The Simpsons: How it Happened<a href=\"https://petermolnar.superfeedr.com/#fn1\">1</a>. It starts by introducing a mediocre show that airs every night, called \u201cThe Simpsons\u201d, and compares it to a genius cartoon, that used to air in the early 90s, called \u201cThe Simpsons\u201d. <em>Watch the video, because it\u2019s good, and I\u2019m about to use it\u2019s conclusion</em>.</p>\n<p>It reckons that the tremendous difference is due to shrinking layers in jokes, and, more importantly, in the characters after season 7. I believe something similar happened online, which made the Internet become the internet.</p>\n<p>Many moons ago, while still living in London, the pedal of our flatmate\u2019s sewing machine broke down, and I started digging for replacement parts for her. I stumbled upon a detailed website about ancient capacitors<a href=\"https://petermolnar.superfeedr.com/#fn2\">2</a>. It resembled other, gorgeous sources of knowledge: one of my all time favourite is leofoo\u2019s site on historical Nikon equipment<a href=\"https://petermolnar.superfeedr.com/#fn3\">3</a>. All decades old sites, containing specialist level knowledge on topics only used to be found in books in dusty corners of forgotten libraries.</p>\n<p>There\u2019s an interesting article about how chronological ordering destroyed the original way of curating content<a href=\"https://petermolnar.superfeedr.com/#fn4\">4</a> during the early online era, and I think the article got many things right. Try to imagine a slow web: slow connection, slow updates, slow everything. Take away social networks - no Twitter, no Facebook. 
Forget news aggregators: no more Hacker News or Reddit, not even Technorati. Grab your laptop and put in down on a desk, preferably in a corner - you\u2019re not allowed to move it. Use the HTML version of DuckDuckGo<a href=\"https://petermolnar.superfeedr.com/#fn5\">5</a> to search, and navigate with links from one site to another. That\u2019s how it was like; surfing on the <em>information highway</em>, and if you really want to experience it, UbuWeb<a href=\"https://petermolnar.superfeedr.com/#fn6\">6</a> will allow you to do so.</p>\n<p>Most of the content was hand crafted, arranged to be readable, not searchable; it was human first, not machine first. Nearly everything online had a lot of effort put into it, even if the result was eye-blowing red text on blue background<a href=\"https://petermolnar.superfeedr.com/#fn7\">7</a>; somebody worked a lot on it. If you wanted it out there you learnt HTML, how to use FTP, how to link, how to format your page.</p>\n<p>We used to have homepages. Homes on the Internet. <em>Not profiles, no; profile is something the authorities make about you in dossier.</em></p>\n<p>6 years ago Anil Dash released a video, \u201cThe web we lost\u201d<a href=\"https://petermolnar.superfeedr.com/#fn8\">8</a> and lamented the web 2.0 - <em>I despise this phrase; a horrible buzzword everyone used to label anything with; if you put \u2018cloud\u2019 and \u2018blockchain\u2019 together, you\u2019ll get the level of buzz that was \u2018web 2.0\u2019</em> -, that fall short to social media, but make no mistake: the Internet, the carefully laboured web 1.0, had already went underground when tools made it simple for anyone to publish with just a few clicks.</p>\n<p>The social web lost against social media, because it didn\u2019t (couldn\u2019t?) keep up with making things even simpler. Always on, always instant, always present. 
It served the purpose of a disposable web perfectly, where the most common goal is to seek fame, attention, to follow trends, to gain followers.</p>\n<p>There are people who never gave up, and are still tirelessly building tools, protocols, ideas, to lead people out of social media. The IndieWeb<a href=\"https://petermolnar.superfeedr.com/#fn9\">9</a>\u2019s goals are simple: own your data, have an online home, and connect with others through this. And so it\u2019s completely reasonable to hear:</p>\n<blockquote>\n<p>I want blogging to be as easy as tweeting.<a href=\"https://petermolnar.superfeedr.com/#fn10\">10</a></p>\n</blockquote>\n<p>But\u2026 what will this really achieve? This may sound rude and elitist, but the more I think about it the more I believe: the true way out of the swamp of social media is for things to require a little effort.</p>\n<p>To make people think about what they produce, to make them connect to their online content. It\u2019s like IKEA<a href=\"https://petermolnar.superfeedr.com/#fn11\">11</a>: once you put time, and a minor amount of sweat - or swearing - into it, it\u2019ll feel more yours, than something comfortably delivered.</p>\n<p>The Internet is still present, but it\u2019s shrinking. Content people really care about, customised looking homepages, carefully curated photo galleries are all diminishing. It would be fantastic to return to a world of personal websites, but that needs the love and work that used to be put into them, just like 20 years ago.</p>\n<p>At this point in time, most people don\u2019t seem to relate to their online content. It\u2019s expendable. 
We need to make them care about it, and simpler tooling, on it\u2019s own, will not help with the lack of emotional connection.</p>\n\n\n<ol><li><p><a href=\"https://www.youtube.com/watch?v=KqFNbCcyFkk\">https://www.youtube.com/watch?v=KqFNbCcyFkk</a><a href=\"https://petermolnar.superfeedr.com/#fnref1\">\u21a9</a></p></li>\n<li><p><a href=\"http://www.vintage-radio.com/repair-restore-information/valve_capacitors.html\">http://www.vintage-radio.com/repair-restore-information/valve_capacitors.html</a><a href=\"https://petermolnar.superfeedr.com/#fnref2\">\u21a9</a></p></li>\n<li><p><a href=\"http://www.mir.com.my/rb/photography/\">http://www.mir.com.my/rb/photography/</a><a href=\"https://petermolnar.superfeedr.com/#fnref3\">\u21a9</a></p></li>\n<li><p><a href=\"https://stackingthebricks.com/how-blogs-broke-the-web/\">https://stackingthebricks.com/how-blogs-broke-the-web/</a><a href=\"https://petermolnar.superfeedr.com/#fnref4\">\u21a9</a></p></li>\n<li><p><a href=\"https://duckduckgo.com/html/\">https://duckduckgo.com/html/</a><a href=\"https://petermolnar.superfeedr.com/#fnref5\">\u21a9</a></p></li>\n<li><p><a href=\"http://www.slate.com/articles/technology/future_tense/2016/12/ubuweb_the_20_year_old_website_that_collects_the_forgotten_and_the_unfamiliar.html\">http://www.slate.com/articles/technology/future_tense/2016/12/ubuweb_the_20_year_old_website_that_collects_the_forgotten_and_the_unfamiliar.html</a><a href=\"https://petermolnar.superfeedr.com/#fnref6\">\u21a9</a></p></li>\n<li><p><a href=\"http://code.divshot.com/geo-bootstrap/\">http://code.divshot.com/geo-bootstrap/</a><a href=\"https://petermolnar.superfeedr.com/#fnref7\">\u21a9</a></p></li>\n<li><p><a href=\"http://anildash.com/2012/12/the-web-we-lost.html\">http://anildash.com/2012/12/the-web-we-lost.html</a><a href=\"https://petermolnar.superfeedr.com/#fnref8\">\u21a9</a></p></li>\n<li><p><a href=\"https://indieweb.org/\">https://indieweb.org</a><a 
href=\"https://petermolnar.superfeedr.com/#fnref9\">\u21a9</a></p></li>\n<li><p><a href=\"http://www.manton.org/2018/03/indieweb-generation-4-and-hosted-domains.html\">http://www.manton.org/2018/03/indieweb-generation-4-and-hosted-domains.html</a><a href=\"https://petermolnar.superfeedr.com/#fnref10\">\u21a9</a></p></li>\n<li><p><a href=\"https://en.wikipedia.org/wiki/IKEA_effect\">https://en.wikipedia.org/wiki/IKEA_effect</a><a href=\"https://petermolnar.superfeedr.com/#fnref11\">\u21a9</a></p></li>\n</ol>", "text": "There is a video out there, titled The Fall of The Simpsons: How it Happened1. It starts by introducing a mediocre show that airs every night, called \u201cThe Simpsons\u201d, and compares it to a genius cartoon, that used to air in the early 90s, called \u201cThe Simpsons\u201d. Watch the video, because it\u2019s good, and I\u2019m about to use its conclusion.\nIt reckons that the tremendous difference is due to shrinking layers in jokes, and, more importantly, in the characters after season 7. I believe something similar happened online, which made the Internet become the internet.\nMany moons ago, while still living in London, the pedal of our flatmate\u2019s sewing machine broke down, and I started digging for replacement parts for her. I stumbled upon a detailed website about ancient capacitors2. It resembled other, gorgeous sources of knowledge: one of my all-time favourites is leofoo\u2019s site on historical Nikon equipment3. All decades-old sites, containing specialist level knowledge on topics that used to be found only in books in dusty corners of forgotten libraries.\nThere\u2019s an interesting article about how chronological ordering destroyed the original way of curating content4 during the early online era, and I think the article got many things right. Try to imagine a slow web: slow connection, slow updates, slow everything. Take away social networks - no Twitter, no Facebook. 
Forget news aggregators: no more Hacker News or Reddit, not even Technorati. Grab your laptop and put it down on a desk, preferably in a corner - you\u2019re not allowed to move it. Use the HTML version of DuckDuckGo5 to search, and navigate with links from one site to another. That\u2019s what it was like; surfing on the information highway, and if you really want to experience it, UbuWeb6 will allow you to do so.\nMost of the content was hand crafted, arranged to be readable, not searchable; it was human first, not machine first. Nearly everything online had a lot of effort put into it, even if the result was eye-blowing red text on blue background7; somebody worked a lot on it. If you wanted it out there you learnt HTML, how to use FTP, how to link, how to format your page.\nWe used to have homepages. Homes on the Internet. Not profiles, no; a profile is something the authorities make about you in a dossier.\n6 years ago Anil Dash released a video, \u201cThe web we lost\u201d8 and lamented the web 2.0 - I despise this phrase; a horrible buzzword everyone used to label anything with; if you put \u2018cloud\u2019 and \u2018blockchain\u2019 together, you\u2019ll get the level of buzz that was \u2018web 2.0\u2019 -, that fell to social media, but make no mistake: the Internet, the carefully laboured web 1.0, had already gone underground when tools made it simple for anyone to publish with just a few clicks.\nThe social web lost against social media, because it didn\u2019t (couldn\u2019t?) keep up with making things even simpler. Always on, always instant, always present. It served the purpose of a disposable web perfectly, where the most common goal is to seek fame, attention, to follow trends, to gain followers.\nThere are people who never gave up, and are still tirelessly building tools, protocols, ideas, to lead people out of social media. The IndieWeb9\u2019s goals are simple: own your data, have an online home, and connect with others through this. 
And so it\u2019s completely reasonable to hear:\n\nI want blogging to be as easy as tweeting.10\n\nBut\u2026 what will this really achieve? This may sound rude and elitist, but the more I think about it the more I believe: the true way out of the swamp of social media is for things to require a little effort.\nTo make people think about what they produce, to make them connect to their online content. It\u2019s like IKEA11: once you put time, and a minor amount of sweat - or swearing - into it, it\u2019ll feel more yours than something comfortably delivered.\nThe Internet is still present, but it\u2019s shrinking. Content people really care about, customised-looking homepages, carefully curated photo galleries are all diminishing. It would be fantastic to return to a world of personal websites, but that needs the love and work that used to be put into them, just like 20 years ago.\nAt this point in time, most people don\u2019t seem to relate to their online content. It\u2019s expendable. We need to make them care about it, and simpler tooling, on its own, will not help with the lack of emotional connection.\n\n\nhttps://www.youtube.com/watch?v=KqFNbCcyFkk\u21a9\nhttp://www.vintage-radio.com/repair-restore-information/valve_capacitors.html\u21a9\nhttp://www.mir.com.my/rb/photography/\u21a9\nhttps://stackingthebricks.com/how-blogs-broke-the-web/\u21a9\nhttps://duckduckgo.com/html/\u21a9\nhttp://www.slate.com/articles/technology/future_tense/2016/12/ubuweb_the_20_year_old_website_that_collects_the_forgotten_and_the_unfamiliar.html\u21a9\nhttp://code.divshot.com/geo-bootstrap/\u21a9\nhttp://anildash.com/2012/12/the-web-we-lost.html\u21a9\nhttps://indieweb.org\u21a9\nhttp://www.manton.org/2018/03/indieweb-generation-4-and-hosted-domains.html\u21a9\nhttps://en.wikipedia.org/wiki/IKEA_effect\u21a9" }, "name": "The internet that took over the Internet", "post-type": "article", "_id": "1124858", "_source": "268", "_is_read": true }
{ "type": "entry", "author": { "name": "mail@petermolnar.net (Peter Molnar)", "url": "https://petermolnar.superfeedr.com/", "photo": null }, "url": "https://petermolnar.net/running-a-static-indieweb-site/", "published": "2018-08-07T18:33:00+01:00", "content": { "html": "<p>In 2016, I decided to leave WordPress behind. Some of their philosophy, mostly the \u201cdecisions, not options\u201d approach, started to leave the trail I thought to be the right one, but on its own, that wouldn\u2019t have been enough: I had a painful experience with media handling hooks, which were respected on the frontend, and not on the backend, at which point, after staring at the backend code for days, I made up my mind: let\u2019s write a static generator.</p>\n<p>This was strictly scratching my own itches<a href=\"https://petermolnar.superfeedr.com/#fn1\">1</a>: I wanted to learn Python, but keep using tools like exiftool and Pandoc, so instead of getting an off-the-shelf solution, I did actually write my own \u201cstatic generator\u201d - in the end, it\u2019s a glorified script.</p>\n<p>Since the initial idea, I rewrote that script nearly 4 times, mainly to try out language features, async workers for processing, etc, and I\u2019ve learnt a few things in the process. It is called NASG - short for \u2018not another static generator\u2019, and it lives on GitHub<a href=\"https://petermolnar.superfeedr.com/#fn2\">2</a>, if anyone wants to see it.</p>\n<p>Here are my learnings.</p>\n<h2>Learning to embrace \u201cbuying in\u201d</h2>\n<h3>webmentions</h3>\n<p>I made a small Python daemon to handle certain requests; one of these routings was to handle incoming webmentions<a href=\"https://petermolnar.superfeedr.com/#fn3\">3</a>. It merely put the requests in a queue - apart from some initial sanity checks on the POST request itself - but it still needed a dynamic part.</p>\n<p>This approach also required parsing the source websites on build. 
After countless iterations - changing parsing libraries, first within Python, then using XRay<a href=\"https://petermolnar.superfeedr.com/#fn4\">4</a> - I had a completely unrelated talk with a fellow sysadmin on how bad we are when it comes to \u201cbuying into\u201d a solution. Basically, if we feel like we can do it ourselves, it\u2019s rather hard for us to pay someone - instead we tend to learn it and just do it, be it piping in the house or sensor automation.</p>\n<p>None of these - webmentions, syndication, websub - are vital for my site. Do I really need to handle all of them myself? If I make sure I can replace them should the service go out of business, why not use them?</p>\n<p>With that in mind, I decided to use webmention.io<a href=\"https://petermolnar.superfeedr.com/#fn5\">5</a> as my incoming webmention (<em>it even gave pingback support back</em>) handler. I ask the service for any new comments on build, and save them as YAML + Markdown, so the next time I only need to parse the new ones.</p>\n<p>To send webmentions, Telegraph<a href=\"https://petermolnar.superfeedr.com/#fn6\">6</a> is a nice, simple service that offers API access, so you don\u2019t have to deal with webmention endpoint discovery. I put down a text file with slugified names of the source and target URLs, to prevent re-sending the mention every time.</p>\n<h3>websub</h3>\n<p>In the case of websub<a href=\"https://petermolnar.superfeedr.com/#fn7\">7</a>, superfeedr<a href=\"https://petermolnar.superfeedr.com/#fn8\">8</a> does the job quite well.</p>\n<h3>syndication</h3>\n<p>For syndication, I decided to go with IFTTT<a href=\"https://petermolnar.superfeedr.com/#fn9\">9</a> and brid.gy publish<a href=\"https://petermolnar.superfeedr.com/#fn10\">10</a>. 
IFTTT reads my RSS feed(s) and either creates link-only posts on WordPress<a href=\"https://petermolnar.superfeedr.com/#fn11\">11</a> and Tumblr<a href=\"https://petermolnar.superfeedr.com/#fn12\">12</a>, or sends webmentions to brid.gy to publish links to Twitter<a href=\"https://petermolnar.superfeedr.com/#fn13\">13</a> and complete photos to Flickr<a href=\"https://petermolnar.superfeedr.com/#fn14\">14</a>.</p>\n<p>I ended up outsourcing my newsletter as well. Years ago I sent a mail around to friends to ask them if they wanted updates from my site in mail; a few of them did. Unfortunately Google started putting these in either Spam or Promotions, so they never reached people; the very same happened with Blogtrottr<a href=\"https://petermolnar.superfeedr.com/#fn15\">15</a> mails. To overcome this, I set up a Google Group, where only my Gmail account can post, but anyone can subscribe, and another IFTTT hook<a href=\"https://petermolnar.superfeedr.com/#fn16\">16</a> that sends mails to that group with the contents of anything new in my RSS feed.</p>\n<h2>Search: keep it server side</h2>\n<p>I spent days looking for a way to integrate JavaScript-based search (lunr.js or elasticlunr.js) in my site. I went as far as embedding JS in Python to pre-populate a search index - but to my horror, that index was 7.8MB at its smallest size.</p>\n<p>It turns out that the simplest solution is what I already had: SQLite, but it needed some alterations.</p>\n<p>The initial solution required a small Python daemon to run in the background and spit extremely simple results back for a query. Besides the trouble of running another daemon, it needed a copy of the nasg git tree for the templates, a virtualenv for sanic (the HTTP server engine I used), and Jinja2 (templating), and a few other bits.</p>\n<p>However, there is a simpler, yet uglier solution. 
Nearly every webserver out in the wild has PHP support these days, including mine, because I\u2019m still running WordPress for friends and family.</p>\n<p>To overcome the problem, I made a Jinja2 template that creates a PHP file, which reads - read-only - the SQLite file I pre-populate with the search corpus during build. Unfortunately it\u2019s PHP 7.0, so instead of the FTS5 engine, I had to step back and use FTS4 - still good enough. Apart from a plain, dead simple PHP engine that has SQLite support, there is no need for anything else, and because the SQLite file is read-only, there\u2019s no lock-collision issue either.</p>\n<h2>About those markup languages\u2026</h2>\n<h3>YAML can get messy</h3>\n<p>I went with the most common post format for static sites: YAML metadata + Markdown. Soon I started seeing weird errors with \u2019 and \" characters, so I dug into the YAML specification - don\u2019t do it, it\u2019s a hell dimension. There is a subset of YAML, titled StrictYAML<a href=\"https://petermolnar.superfeedr.com/#fn17\">17</a>, to address some of these problems, but the short summary is: YAML or not, try to use as simple a markup as possible, and be consistent.</p>\n<pre><code>title: post title\nsummary: single-line long summary\npublished: 2018-08-07T10:00:00+00:00\ntags:\n- indieweb\nsyndicate:\n- https://something.com/xyz</code></pre>\n<p>If one decides to use lists by newline and <code>-</code>, stick to that. No inline <code>[]</code> lists, no spaced <code>-</code> prefix; be consistent.</p>\n<p>The same applies to dates and times. While I thought the \u201ccorrect\u201d date format was ISO 8601, that turned out to be a subset of it, named RFC 3339<a href=\"https://petermolnar.superfeedr.com/#fn18\">18</a>. 
Unfortunately I started using the <code>+0000</code> format instead of <code>+00:00</code> from the beginning, so I\u2019ll stick to that.</p>\n<h3>Markdown can also get messy</h3>\n<p>There are valid arguments against Markdown<a href=\"https://petermolnar.superfeedr.com/#fn19\">19</a>, so before choosing that as my main format, I tested as many as I could<a href=\"https://petermolnar.superfeedr.com/#fn20\">20</a> - in the end, I decided to stick to an extended version of Markdown, because that is still the closest-to-plain-text for my eyes. I also found Typora, which is a very nice Markdown WYSIWYG editor<a href=\"https://petermolnar.superfeedr.com/#fn21\">21</a>. <em>Yes, unfortunately, it\u2019s Electron-based. I\u2019ll swallow this frog for now.</em></p>\n<p>The \u201cextensions\u201d I use with Markdown:</p>\n<ul><li>footnotes - <em>my links are footnotes, so they can be printed</em></li>\n<li>pipe_tables</li>\n<li>strikeout - <em><del>cause it\u2019s useful for snarky lines</del></em></li>\n<li>raw_html</li>\n<li>definition_lists - <em>they are useful, and they were also present on the very first website ever</em></li>\n<li>backtick_code_blocks - <em>``` type code blocks</em></li>\n<li>fenced_code_attributes - <em>language tag for code blocks</em></li>\n<li>lists_without_preceding_blankline</li>\n<li>autolink_bare_uris - <em>otherwise my URLs in the footnotes are mere text</em></li>\n</ul><p>I\u2019ve tried using the Python Markdown module; the end result was utterly broken HTML when I had code blocks with regexes that collided with the regexes Python Markdown was using. I tried the Python markdown2 module - it worked better, but didn\u2019t support language tags for code blocks.</p>\n<p>In the end, I went back to where I started: Pandoc<a href=\"https://petermolnar.superfeedr.com/#fn22\">22</a>. 
The regeneration of the whole site is ~60 seconds instead of ~20s with markdown2, but it doesn\u2019t really matter - it\u2019s still fast.</p>\n<pre><code>pandoc --to=html5 --quiet --no-highlight --from=markdown+footnotes+pipe_tables+strikeout+raw_html+definition_lists+backtick_code_blocks+fenced_code_attributes+lists_without_preceding_blankline+autolink_bare_uris</code></pre>\n<p>The takeaway is the same as with YAML: make your own ruleset and stick to it; don\u2019t mix other flavours in.</p>\n<h3>Syntax highlighting is really messy</h3>\n<p>Pandoc has a built-in syntax highlighting method; so does the Python Markdown module (via Codehilite).</p>\n<p>I have some entries that can break both, and break them badly.</p>\n<p>Besides being broken, Codehilite is VERBOSE. At a certain point, it managed to add 60KB of HTML markup to my text.</p>\n<p>A long while ago I tried to completely eliminate JavaScript from my site, because I\u2019m tired of the current trends. However, JS has its place, especially as a progressive enhancement<a href=\"https://petermolnar.superfeedr.com/#fn23\">23</a>.</p>\n<p>With that in mind, I went back to the solution that worked the best so far: prism.js<a href=\"https://petermolnar.superfeedr.com/#fn24\">24</a>. The difference this time is that I only add it when there is a code block with a language property, and I inline the whole JS block in the code - the \u2018developer\u2019 version, supporting a lot of languages, weighs around 58KB, which is a lot, but it works very nicely, and it\u2019s very fast.</p>\n<p>No JS only means no syntax highlighting, but at least my HTML code is readable, unlike with CodeHilite.</p>\n<h2>Summary</h2>\n<p>Static sites come with compromises when it comes to interactions, be that webmentions, search, or pubsub. They need either external services, or some simple, dynamic parts.</p>\n<p>If you do go with dynamic, try to keep it as simple as possible. 
If the webserver has PHP support, avoid adding a Python daemon and use that PHP instead.</p>\n<p>There are very good, completely free services out there, run by <del>mad scientists</del> enthusiasts, like webmention.io and brid.gy. It\u2019s perfectly fine to use them.</p>\n<p>Keep your markup consistent and don\u2019t deviate from the feature set you really need.</p>\n<p>JavaScript has its place, and prism.js is potentially the nicest syntax highlighter currently available for the web.</p>\n\n\n<ol><li><p><a href=\"https://indieweb.org/scratch_your_own_itch\">https://indieweb.org/scratch_your_own_itch</a><a href=\"https://petermolnar.superfeedr.com/#fnref1\">\u21a9</a></p></li>\n<li><p><a href=\"https://github.com/petermolnar/nasg/\">https://github.com/petermolnar/nasg/</a><a href=\"https://petermolnar.superfeedr.com/#fnref2\">\u21a9</a></p></li>\n<li><p><a href=\"http://indieweb.org/webmention\">http://indieweb.org/webmention</a><a href=\"https://petermolnar.superfeedr.com/#fnref3\">\u21a9</a></p></li>\n<li><p><a href=\"https://github.com/aaronpk/xray\">https://github.com/aaronpk/xray</a><a href=\"https://petermolnar.superfeedr.com/#fnref4\">\u21a9</a></p></li>\n<li><p><a href=\"https://webmention.io/\">https://webmention.io/</a><a href=\"https://petermolnar.superfeedr.com/#fnref5\">\u21a9</a></p></li>\n<li><p><a href=\"http://telegraph.p3k.io/\">http://telegraph.p3k.io/</a><a href=\"https://petermolnar.superfeedr.com/#fnref6\">\u21a9</a></p></li>\n<li><p><a href=\"https://indieweb.org/websub\">https://indieweb.org/websub</a><a href=\"https://petermolnar.superfeedr.com/#fnref7\">\u21a9</a></p></li>\n<li><p><a href=\"https://superfeedr.com/\">https://superfeedr.com/</a><a href=\"https://petermolnar.superfeedr.com/#fnref8\">\u21a9</a></p></li>\n<li><p><a href=\"http://ifttt.com/\">http://ifttt.com/</a><a href=\"https://petermolnar.superfeedr.com/#fnref9\">\u21a9</a></p></li>\n<li><p><a 
href=\"https://brid.gy/about#publishing\">https://brid.gy/about#publishing</a><a href=\"https://petermolnar.superfeedr.com/#fnref10\">\u21a9</a></p></li>\n<li><p><a href=\"https://ifttt.com/applets/83096071d-syndicate-to-wordpress-com\">https://ifttt.com/applets/83096071d-syndicate-to-wordpress-com</a><a href=\"https://petermolnar.superfeedr.com/#fnref11\">\u21a9</a></p></li>\n<li><p><a href=\"https://ifttt.com/applets/83095945d-syndicate-to-tumblr\">https://ifttt.com/applets/83095945d-syndicate-to-tumblr</a><a href=\"https://petermolnar.superfeedr.com/#fnref12\">\u21a9</a></p></li>\n<li><p><a href=\"https://ifttt.com/applets/83095698d-syndicate-to-brid-gy-twitter-publish\">https://ifttt.com/applets/83095698d-syndicate-to-brid-gy-twitter-publish</a><a href=\"https://petermolnar.superfeedr.com/#fnref13\">\u21a9</a></p></li>\n<li><p><a href=\"https://ifttt.com/applets/83095735d-syndicate-to-brid-gy-publish-flickr\">https://ifttt.com/applets/83095735d-syndicate-to-brid-gy-publish-flickr</a><a href=\"https://petermolnar.superfeedr.com/#fnref14\">\u21a9</a></p></li>\n<li><p><a href=\"https://blogtrottr.com/\">https://blogtrottr.com/</a><a href=\"https://petermolnar.superfeedr.com/#fnref15\">\u21a9</a></p></li>\n<li><p><a href=\"https://ifttt.com/applets/83095496d-syndicate-to-petermolnarnet-googlegroups-com\">https://ifttt.com/applets/83095496d-syndicate-to-petermolnarnet-googlegroups-com</a><a href=\"https://petermolnar.superfeedr.com/#fnref16\">\u21a9</a></p></li>\n<li><p><a href=\"http://hitchdev.com/strictyaml/features-removed/\">http://hitchdev.com/strictyaml/features-removed/</a><a href=\"https://petermolnar.superfeedr.com/#fnref17\">\u21a9</a></p></li>\n<li><p><a href=\"https://en.wikipedia.org/wiki/RFC_3339\">https://en.wikipedia.org/wiki/RFC_3339</a><a href=\"https://petermolnar.superfeedr.com/#fnref18\">\u21a9</a></p></li>\n<li><p><a href=\"https://indieweb.org/markdown#Criticism\">https://indieweb.org/markdown#Criticism</a><a 
href=\"https://petermolnar.superfeedr.com/#fnref19\">\u21a9</a></p></li>\n<li><p><a href=\"https://en.wikipedia.org/wiki/List_of_lightweight_markup_languages\">https://en.wikipedia.org/wiki/List_of_lightweight_markup_languages</a><a href=\"https://petermolnar.superfeedr.com/#fnref20\">\u21a9</a></p></li>\n<li><p><a href=\"http://typora.io/\">http://typora.io/</a><a href=\"https://petermolnar.superfeedr.com/#fnref21\">\u21a9</a></p></li>\n<li><p><a href=\"http://pandoc.org/MANUAL.html#pandocs-markdown\">http://pandoc.org/MANUAL.html#pandocs-markdown</a><a href=\"https://petermolnar.superfeedr.com/#fnref22\">\u21a9</a></p></li>\n<li><p><a href=\"https://en.wikipedia.org/wiki/Progressive_enhancement\">https://en.wikipedia.org/wiki/Progressive_enhancement</a><a href=\"https://petermolnar.superfeedr.com/#fnref23\">\u21a9</a></p></li>\n<li><p><a href=\"https://prismjs.com/\">https://prismjs.com/</a><a href=\"https://petermolnar.superfeedr.com/#fnref24\">\u21a9</a></p></li>\n</ol>", "text": "In 2016, I decided to leave WordPress behind. Some of their philosophy, mostly the \u201cdecisions, not options\u201d approach, started to leave the trail I thought to be the right one, but on its own, that wouldn\u2019t have been enough: I had a painful experience with media handling hooks, which were respected on the frontend, and not on the backend, at which point, after staring at the backend code for days, I made up my mind: let\u2019s write a static generator.\nThis was strictly scratching my own itches1: I wanted to learn Python, but keep using tools like exiftool and Pandoc, so instead of getting an off-the-shelf solution, I did actually write my own \u201cstatic generator\u201d - in the end, it\u2019s a glorified script.\nSince the initial idea, I rewrote that script nearly 4 times, mainly to try out language features, async workers for processing, etc, and I\u2019ve learnt a few things in the process. 
It is called NASG - short for \u2018not another static generator\u2019, and it lives on GitHub2, if anyone wants to see it.\nHere are my learnings.\nLearning to embrace \u201cbuying in\u201d\nwebmentions\nI made a small Python daemon to handle certain requests; one of these routings was to handle incoming webmentions3. It merely put the requests in a queue - apart from some initial sanity checks on the POST request itself - but it still needed a dynamic part.\nThis approach also required parsing the source websites on build. After countless iterations - changing parsing libraries, first within Python, then using XRay4 - I had a completely unrelated talk with a fellow sysadmin on how bad we are when it comes to \u201cbuying into\u201d a solution. Basically, if we feel like we can do it ourselves, it\u2019s rather hard for us to pay someone - instead we tend to learn it and just do it, be it piping in the house or sensor automation.\nNone of these - webmentions, syndication, websub - are vital for my site. Do I really need to handle all of them myself? If I make sure I can replace them should the service go out of business, why not use them?\nWith that in mind, I decided to use webmention.io5 as my incoming webmention (it even gave pingback support back) handler. I ask the service for any new comments on build, and save them as YAML + Markdown, so the next time I only need to parse the new ones.\nTo send webmentions, Telegraph6 is a nice, simple service that offers API access, so you don\u2019t have to deal with webmention endpoint discovery. I put down a text file with slugified names of the source and target URLs, to prevent re-sending the mention every time.\nwebsub\nIn the case of websub7, superfeedr8 does the job quite well.\nsyndication\nFor syndication, I decided to go with IFTTT9 and brid.gy publish10. 
IFTTT reads my RSS feed(s) and either creates link-only posts on WordPress11 and Tumblr12, or sends webmentions to brid.gy to publish links to Twitter13 and complete photos to Flickr14.\nI ended up outsourcing my newsletter as well. Years ago I sent a mail around to friends to ask them if they wanted updates from my site in mail; a few of them did. Unfortunately Google started putting these in either Spam or Promotions, so they never reached people; the very same happened with Blogtrottr15 mails. To overcome this, I set up a Google Group, where only my Gmail account can post, but anyone can subscribe, and another IFTTT hook16 that sends mails to that group with the contents of anything new in my RSS feed.\nSearch: keep it server side\nI spent days looking for a way to integrate JavaScript-based search (lunr.js or elasticlunr.js) in my site. I went as far as embedding JS in Python to pre-populate a search index - but to my horror, that index was 7.8MB at its smallest size.\nIt turns out that the simplest solution is what I already had: SQLite, but it needed some alterations.\nThe initial solution required a small Python daemon to run in the background and spit extremely simple results back for a query. Besides the trouble of running another daemon, it needed a copy of the nasg git tree for the templates, a virtualenv for sanic (the HTTP server engine I used), and Jinja2 (templating), and a few other bits.\nHowever, there is a simpler, yet uglier solution. Nearly every webserver out in the wild has PHP support these days, including mine, because I\u2019m still running WordPress for friends and family.\nTo overcome the problem, I made a Jinja2 template that creates a PHP file, which reads - read-only - the SQLite file I pre-populate with the search corpus during build. Unfortunately it\u2019s PHP 7.0, so instead of the FTS5 engine, I had to step back and use FTS4 - still good enough. 
Apart from a plain, dead simple PHP engine that has SQLite support, there is no need for anything else, and because the SQLite file is read-only, there\u2019s no lock-collision issue either.\nAbout those markup languages\u2026\nYAML can get messy\nI went with the most common post format for static sites: YAML metadata + Markdown. Soon I started seeing weird errors with \u2019 and \" characters, so I dug into the YAML specification - don\u2019t do it, it\u2019s a hell dimension. There is a subset of YAML, titled StrictYAML17, to address some of these problems, but the short summary is: YAML or not, try to use as simple a markup as possible, and be consistent.\ntitle: post title\nsummary: single-line long summary\npublished: 2018-08-07T10:00:00+00:00\ntags:\n- indieweb\nsyndicate:\n- https://something.com/xyz\nIf one decides to use lists by newline and -, stick to that. No inline [] lists, no spaced - prefix; be consistent.\nThe same applies to dates and times. While I thought the \u201ccorrect\u201d date format was ISO 8601, that turned out to be a subset of it, named RFC 333918. Unfortunately I started using the +0000 format instead of +00:00 from the beginning, so I\u2019ll stick to that.\nMarkdown can also get messy\nThere are valid arguments against Markdown19, so before choosing that as my main format, I tested as many as I could20 - in the end, I decided to stick to an extended version of Markdown, because that is still the closest-to-plain-text for my eyes. I also found Typora, which is a very nice Markdown WYSIWYG editor21. Yes, unfortunately, it\u2019s Electron-based. 
I\u2019ll swallow this frog for now.\nThe \u201cextensions\u201d I use with Markdown:\nfootnotes - my links are footnotes, so they can be printed\npipe_tables\nstrikeout - cause it\u2019s useful for snarky lines\nraw_html\ndefinition_lists - they are useful, and they were also present on the very first website ever\nbacktick_code_blocks - ``` type code blocks\nfenced_code_attributes - language tag for code blocks\nlists_without_preceding_blankline\nautolink_bare_uris - otherwise my URLs in the footnotes are mere text\nI\u2019ve tried using the Python Markdown module; the end result was utterly broken HTML when I had code blocks with regexes that collided with the regexes Python Markdown was using. I tried the Python markdown2 module - it worked better, but didn\u2019t support language tags for code blocks.\nIn the end, I went back to where I started: Pandoc22. The regeneration of the whole site is ~60 seconds instead of ~20s with markdown2, but it doesn\u2019t really matter - it\u2019s still fast.\npandoc --to=html5 --quiet --no-highlight --from=markdown+footnotes+pipe_tables+strikeout+raw_html+definition_lists+backtick_code_blocks+fenced_code_attributes+lists_without_preceding_blankline+autolink_bare_uris\nThe takeaway is the same as with YAML: make your own ruleset and stick to it; don\u2019t mix other flavours in.\nSyntax highlighting is really messy\nPandoc has a built-in syntax highlighting method; so does the Python Markdown module (via Codehilite).\nI have some entries that can break both, and break them badly.\nBesides being broken, Codehilite is VERBOSE. At a certain point, it managed to add 60KB of HTML markup to my text.\nA long while ago I tried to completely eliminate JavaScript from my site, because I\u2019m tired of the current trends. 
However, JS has its place, especially as a progressive enhancement23.\nWith that in mind, I went back to the solution that worked the best so far: prism.js24. The difference this time is that I only add it when there is a code block with a language property, and I inline the whole JS block in the code - the \u2018developer\u2019 version, supporting a lot of languages, weighs around 58KB, which is a lot, but it works very nicely, and it\u2019s very fast.\nNo JS only means no syntax highlighting, but at least my HTML code is readable, unlike with CodeHilite.\nSummary\nStatic sites come with compromises when it comes to interactions, be that webmentions, search, or pubsub. They need either external services, or some simple, dynamic parts.\nIf you do go with dynamic, try to keep it as simple as possible. If the webserver has PHP support, avoid adding a Python daemon and use that PHP instead.\nThere are very good, completely free services out there, run by mad scientists enthusiasts, like webmention.io and brid.gy. 
It\u2019s perfectly fine to use them.\nKeep your markup consistent and don\u2019t deviate from the feature set you really need.\nJavaScript has its place, and prism.js is potentially the nicest syntax highlighter currently available for the web.\n\n\nhttps://indieweb.org/scratch_your_own_itch\u21a9\nhttps://github.com/petermolnar/nasg/\u21a9\nhttp://indieweb.org/webmention\u21a9\nhttps://github.com/aaronpk/xray\u21a9\nhttps://webmention.io/\u21a9\nhttp://telegraph.p3k.io/\u21a9\nhttps://indieweb.org/websub\u21a9\nhttps://superfeedr.com/\u21a9\nhttp://ifttt.com/\u21a9\nhttps://brid.gy/about#publishing\u21a9\nhttps://ifttt.com/applets/83096071d-syndicate-to-wordpress-com\u21a9\nhttps://ifttt.com/applets/83095945d-syndicate-to-tumblr\u21a9\nhttps://ifttt.com/applets/83095698d-syndicate-to-brid-gy-twitter-publish\u21a9\nhttps://ifttt.com/applets/83095735d-syndicate-to-brid-gy-publish-flickr\u21a9\nhttps://blogtrottr.com/\u21a9\nhttps://ifttt.com/applets/83095496d-syndicate-to-petermolnarnet-googlegroups-com\u21a9\nhttp://hitchdev.com/strictyaml/features-removed/\u21a9\nhttps://en.wikipedia.org/wiki/RFC_3339\u21a9\nhttps://indieweb.org/markdown#Criticism\u21a9\nhttps://en.wikipedia.org/wiki/List_of_lightweight_markup_languages\u21a9\nhttp://typora.io/\u21a9\nhttp://pandoc.org/MANUAL.html#pandocs-markdown\u21a9\nhttps://en.wikipedia.org/wiki/Progressive_enhancement\u21a9\nhttps://prismjs.com/\u21a9" }, "name": "Lessons of running a (semi) static, Indieweb-friendly site for 2 years", "post-type": "article", "_id": "1124868", "_source": "268", "_is_read": true }
{ "type": "entry", "author": { "name": "mail@petermolnar.net (Peter Molnar)", "url": "https://petermolnar.superfeedr.com/", "photo": null }, "url": "https://petermolnar.net/location-tracking-without-server/", "published": "2018-09-27T11:05:00+01:00", "content": { "html": "<p>Nearly all self-hosted location tracking Android applications are based on a server-client architecture: the one on the phone collects only a few points, if not just one, and sends them to a configured server. Traccar<a href=\"https://petermolnar.superfeedr.com/#fn1\">1</a>, Owntracks<a href=\"https://petermolnar.superfeedr.com/#fn2\">2</a>, etc.</p>\n<p>While this setup is useful, it doesn\u2019t fit in my static, unless it hurts<a href=\"https://petermolnar.superfeedr.com/#fn3\">3</a> approach, and it needs data connectivity, which can be tricky during trips abroad. The rare occasions in rural Scotland and Wales taught me that data connectivity is not omnipresent at all.</p>\n<p>There used to be a magnificent little location tracker, which, besides the server-client approach, could store the location data in CSV and KML files locally: Backitude<a href=\"https://petermolnar.superfeedr.com/#fn4\">4</a>. 
The program is gone from the Play store - I have no idea why - but I have a copy of the last APK of it<a href=\"https://petermolnar.superfeedr.com/#fn5\">5</a>.</p>\n<p>My flow is the following:</p>\n<ul><li>Backitude saves the CSV files</li>\n<li>Syncthing<a href=\"https://petermolnar.superfeedr.com/#fn6\">6</a> syncs the phone and the laptop</li>\n<li>the laptop has a Python script that imports the CSV into SQLite to eliminate duplicates</li>\n<li>the same script queries against Bing to get altitude information for missing altitudes</li>\n<li>as a final step, the script exports daily GPX files</li>\n<li>on the laptop, GpsPrune helps me visualize and measure trips</li>\n</ul><h2>Backitude configuration</h2>\n<p>These are the modified setting properties:</p>\n<ul><li>Enable backitude: yes</li>\n<li>Settings\n<ul><li>Standard Mode Settings\n<ul><li>Time Interval Selection: 1 minute</li>\n<li>Location Polling Timeout: 5 minutes</li>\n<li>Display update message: no</li>\n</ul></li>\n<li>Wifi Mode Settings\n<ul><li>Wi-Fi Mode Enabled: yes</li>\n<li>Time Interval Options: 1 hour</li>\n<li>Location Polling Timeout: 5 minutes</li>\n</ul></li>\n<li>Update Settings\n<ul><li>Minimum Change in Distance: 10 meters</li>\n</ul></li>\n<li>Accuracy Settings\n<ul><li>Minimum GPS accuracy: 12 meters</li>\n<li>Minimum Wi-Fi accuracy: 20 meters</li>\n</ul></li>\n<li>Internal Memory Storage Options\n<ul><li>KML and CSV</li>\n</ul></li>\n<li>Display Failure Notifications: no</li>\n</ul></li>\n</ul><p>I have an exported preferences file available<a href=\"https://petermolnar.superfeedr.com/#fn7\">7</a>.</p>\n<h2>Syncthing</h2>\n<p>The syncthing configuration is optional; it could simply be done with manual transfers from the phone. 
It\u2019s also not the simplest thing to do, so I\u2019ll let the Syncthing Documentation<a href=\"https://petermolnar.superfeedr.com/#fn8\">8</a> take care of describing the how-tos.</p>\n<h2>Python script</h2>\n<p>Before jumping to the script, there are 3 Python modules it needs:</p>\n<pre><code>pip3 install --user arrow gpxpy requests</code></pre>\n<p>And the script itself - please replace the <code>INBASE</code>, <code>OUTBASE</code>, and <code>BINGKEY</code> properties. To get a Bing key, visit Bing<a href=\"https://petermolnar.superfeedr.com/#fn9\">9</a>.</p>\n<pre><code>import os\nimport sqlite3\nimport csv\nimport glob\nimport arrow\nimport re\nimport gpxpy.gpx\nimport requests\n\nINBASE=\"/path/to/your/syncthing/gps/files\"\nOUTBASE=\"/path/for/sqlite/and/gpx/output\"\nBINGKEY=\"get a bing maps key and insert it here\"\n\ndef parse(row):\n DATE = re.compile(\n r'^(?P<year>[0-9]{4})-(?P<month>[0-9]{2})-(?P<day>[0-9]{2})T'\n r'(?P<time>[0-9]{2}:[0-9]{2}:[0-9]{2})\\.(?P<subsec>[0-9]{3})Z$'\n )\n\n lat = row[0]\n lon = row[1]\n acc = row[2]\n alt = row[3]\n match = DATE.match(row[4])\n # in theory, arrow should have been able to parse the date, but I couldn't get\n # it working\n epoch = arrow.get(\"%s-%s-%s %s %s\" % (\n match.group('year'),\n match.group('month'),\n match.group('day'),\n match.group('time'),\n match.group('subsec')\n ), 'YYYY-MM-DD hh:mm:ss SSS').timestamp\n return(epoch,lat,lon,alt,acc)\n\ndef exists(db, epoch, lat, lon):\n return db.execute('''\n SELECT\n *\n FROM\n data\n WHERE\n epoch = ?\n AND\n latitude = ?\n AND\n longitude = ?\n ''', (epoch, lat, lon)).fetchone()\n\ndef ins(db, epoch,lat,lon,alt,acc):\n if exists(db, epoch, lat, lon):\n return\n print('inserting data point with epoch %d' % (epoch))\n db.execute('''INSERT INTO data (epoch, latitude, longitude, altitude, accuracy) VALUES (?,?,?,?,?);''', (\n epoch,\n lat,\n lon,\n alt,\n acc\n ))\n\n\nif __name__ == '__main__':\n db = sqlite3.connect(os.path.join(OUTBASE, 
'location-log.sqlite'))\n db.execute('PRAGMA auto_vacuum = INCREMENTAL;')\n db.execute('PRAGMA journal_mode = MEMORY;')\n db.execute('PRAGMA temp_store = MEMORY;')\n db.execute('PRAGMA locking_mode = NORMAL;')\n db.execute('PRAGMA synchronous = FULL;')\n db.execute('PRAGMA encoding = \"UTF-8\";')\n\n files = glob.glob(os.path.join(INBASE, '*.csv'))\n for logfile in files:\n with open(logfile) as csvfile:\n try:\n reader = csv.reader(csvfile)\n except Exception as e:\n print('failed to open CSV reader for file: %s; %s' % (logfile, e))\n continue\n # skip the first row, that's headers\n headers = next(reader, None)\n for row in reader:\n epoch,lat,lon,alt,acc = parse(row)\n ins(db,epoch,lat,lon,alt,acc)\n # there's no need to commit per line, per file should be safe enough\n db.commit()\n\n db.execute('PRAGMA auto_vacuum;')\n\n results = db.execute('''\n SELECT\n *\n FROM\n data\n ORDER BY epoch ASC''').fetchall()\n prevdate = None\n gpx = gpxpy.gpx.GPX()\n\n for epoch, lat, lon, alt, acc in results:\n # in case you know your altitude might actually be valid with negative\n # values you may want to remove the -10\n if alt == 'NULL' or alt < -10:\n url = \"http://dev.virtualearth.net/REST/v1/Elevation/List?points=%s,%s&key=%s\" % (\n lat,\n lon,\n BINGKEY\n )\n bing = requests.get(url).json()\n # gotta love enterprise API endpoints\n if not bing or \\\n 'resourceSets' not in bing or \\\n not len(bing['resourceSets']) or \\\n 'resources' not in bing['resourceSets'][0] or \\\n not len(bing['resourceSets'][0]) or \\\n 'elevations' not in bing['resourceSets'][0]['resources'][0] or \\\n not bing['resourceSets'][0]['resources'][0]['elevations']:\n alt = 0\n else:\n alt = float(bing['resourceSets'][0]['resources'][0]['elevations'][0])\n print('got altitude from bing: %s for %s,%s' % (alt,lat,lon))\n db.execute('''\n UPDATE\n data\n SET\n altitude = ?\n WHERE\n epoch = ?\n AND\n latitude = ?\n AND\n longitude = ?\n LIMIT 1\n ''',(alt, epoch, lat, lon))\n db.commit()\n 
del(bing)\n del(url)\n date = arrow.get(epoch).format('YYYY-MM-DD')\n if not prevdate or prevdate != date:\n # write previous out\n gpxfile = os.path.join(OUTBASE, \"%s.gpx\" % (date))\n with open(gpxfile, 'wt') as f:\n f.write(gpx.to_xml())\n print('created file: %s' % gpxfile)\n\n # create new\n gpx = gpxpy.gpx.GPX()\n prevdate = date\n\n # Create first track in our GPX:\n gpx_track = gpxpy.gpx.GPXTrack()\n gpx.tracks.append(gpx_track)\n\n # Create first segment in our GPX track:\n gpx_segment = gpxpy.gpx.GPXTrackSegment()\n gpx_track.segments.append(gpx_segment)\n\n # Create points:\n gpx_segment.points.append(\n gpxpy.gpx.GPXTrackPoint(\n lat,\n lon,\n elevation=alt,\n time=arrow.get(epoch).datetime\n )\n )\n\n db.close()\n</code></pre>\n<p>Once this is done, the <code>OUTBASE</code> directory will be populated with <code>.gpx</code> files, one per day.</p>\n<h2>GpsPrune</h2>\n<p>GpsPrune is a desktop, Qt-based GPX track visualizer. It needs data connectivity to have nice maps in the background, but it can do a lot of funky things, including editing GPX tracks.</p>\n<pre><code>sudo apt install gpsprune</code></pre>\n<p><strong>Keep in mind that the export script overwrites the GPX files, so the data needs to be fixed in the SQLite database.</strong></p>\n<p>This is an example screenshot of GpsPrune, showing our 2-day walk down from Mount Emei and its endless stairs:</p>\n<a href=\"https://petermolnar.net/location-tracking-without-server/emei_b.jpg\"> <img src=\"https://aperture-proxy.p3k.io/fb0eec6f7ce86a08fc973871c604e30bbf8b8c69/68747470733a2f2f70657465726d6f6c6e61722e6e65742f6c6f636174696f6e2d747261636b696e672d776974686f75742d7365727665722f656d65692e6a7067\" title=\"emei\" alt=\"\" /></a>\n\nemei\n<p>Happy tracking!</p>\n\n\n<ol><li><p><a href=\"https://www.traccar.org/\">https://www.traccar.org/</a><a href=\"https://petermolnar.superfeedr.com/#fnref1\">\u21a9</a></p></li>\n<li><p><a href=\"https://owntracks.org/\">https://owntracks.org/</a><a 
href=\"https://petermolnar.superfeedr.com/#fnref2\">\u21a9</a></p></li>\n<li><p><a href=\"https://indieweb.org/manual_until_it_hurts\">https://indieweb.org/manual_until_it_hurts</a><a href=\"https://petermolnar.superfeedr.com/#fnref3\">\u21a9</a></p></li>\n<li><p><a href=\"http://www.gpsies.com/backitude.do\">http://www.gpsies.com/backitude.do</a><a href=\"https://petermolnar.superfeedr.com/#fnref4\">\u21a9</a></p></li>\n<li><p><a href=\"https://petermolnar.superfeedr.com/gaugler.backitude.apk\">gaugler.backitude.apk</a><a href=\"https://petermolnar.superfeedr.com/#fnref5\">\u21a9</a></p></li>\n<li><p><a href=\"https://syncthing.net/\">https://syncthing.net/</a><a href=\"https://petermolnar.superfeedr.com/#fnref6\">\u21a9</a></p></li>\n<li><p><a href=\"https://petermolnar.superfeedr.com/backitude.prefs\">backitude.prefs</a><a href=\"https://petermolnar.superfeedr.com/#fnref7\">\u21a9</a></p></li>\n<li><p><a href=\"https://docs.syncthing.net/intro/getting-started.html\">https://docs.syncthing.net/intro/getting-started.html</a><a href=\"https://petermolnar.superfeedr.com/#fnref8\">\u21a9</a></p></li>\n<li><p><a href=\"https://msdn.microsoft.com/en-us/library/ff428642\">https://msdn.microsoft.com/en-us/library/ff428642</a><a href=\"https://petermolnar.superfeedr.com/#fnref9\">\u21a9</a></p></li>\n</ol>", "text": "Nearly all self-hosted location tracking Android applications are based on server-client architecture: the one on the phone collects only a small set of points, if not just one, and sends it to a configured server. Traccar1, Owntracks2, etc.\nWhile this setup is useful, it doesn\u2019t fit in my static, unless it hurts3 approach, and it needs data connectivity, which can be tricky during trips abroad. 
The rare occasions in rural Scotland and Wales taught me that data connectivity is not omnipresent at all.\nThere used to be a magnificent little location tracker, which, besides the server-client approach, could store the location data in CSV and KML files locally: Backitude4. The program is gone from the Play store, I have no idea why, but I have a copy of the last APK of it5.\nMy flow is the following:\nBackitude saves the CSV files\nSyncthing6 syncs the phone and the laptop\nthe laptop has a Python script that imports the CSV into SQLite to eliminate duplicates\nthe same script queries against Bing to get altitude information for missing altitudes\nas a final step, the script exports daily GPX files\non the laptop, GpsPrune helps me visualize and measure trips\nBackitude configuration\nThese are the modified setting properties:\nEnable backitude: yes\nSettings\nStandard Mode Settings\nTime Interval Selection: 1 minute\nLocation Polling Timeout: 5 minutes\nDisplay update message: no\n\nWifi Mode Settings\nWi-Fi Mode Enabled: yes\nTime Interval Options: 1 hour\nLocation Polling Timeout: 5 minutes\n\nUpdate Settings\nMinimum Change in Distance: 10 meters\n\nAccuracy Settings\nMinimum GPS accuracy: 12 meters\nMinimum Wi-Fi accuracy: 20 meters\n\nInternal Memory Storage Options\nKML and CSV\n\nDisplay Failure Notifications: no\n\nI have an exported preferences file available7.\nSyncthing\nThe Syncthing configuration is optional; it could simply be done with manual transfers from the phone. It\u2019s also not the simplest thing to do, so I\u2019ll let the Syncthing Documentation8 take care of describing the how-tos.\nPython script\nBefore jumping to the script, there are 3 Python modules it needs:\npip3 install --user arrow gpxpy requests\nAnd the script itself - please replace the INBASE, OUTBASE, and BINGKEY properties. 
To get a Bing key, visit Bing9.\nimport os\nimport sqlite3\nimport csv\nimport glob\nimport arrow\nimport re\nimport gpxpy.gpx\nimport requests\n\nINBASE=\"/path/to/your/syncthing/gps/files\"\nOUTBASE=\"/path/for/sqlite/and/gpx/output\"\nBINGKEY=\"get a bing maps key and insert it here\"\n\ndef parse(row):\n DATE = re.compile(\n r'^(?P<year>[0-9]{4})-(?P<month>[0-9]{2})-(?P<day>[0-9]{2})T'\n r'(?P<time>[0-9]{2}:[0-9]{2}:[0-9]{2})\\.(?P<subsec>[0-9]{3})Z$'\n )\n\n lat = row[0]\n lon = row[1]\n acc = row[2]\n alt = row[3]\n match = DATE.match(row[4])\n # in theory, arrow should have been able to parse the date, but I couldn't get\n # it working\n epoch = arrow.get(\"%s-%s-%s %s %s\" % (\n match.group('year'),\n match.group('month'),\n match.group('day'),\n match.group('time'),\n match.group('subsec')\n ), 'YYYY-MM-DD hh:mm:ss SSS').timestamp\n return(epoch,lat,lon,alt,acc)\n\ndef exists(db, epoch, lat, lon):\n return db.execute('''\n SELECT\n *\n FROM\n data\n WHERE\n epoch = ?\n AND\n latitude = ?\n AND\n longitude = ?\n ''', (epoch, lat, lon)).fetchone()\n\ndef ins(db, epoch,lat,lon,alt,acc):\n if exists(db, epoch, lat, lon):\n return\n print('inserting data point with epoch %d' % (epoch))\n db.execute('''INSERT INTO data (epoch, latitude, longitude, altitude, accuracy) VALUES (?,?,?,?,?);''', (\n epoch,\n lat,\n lon,\n alt,\n acc\n ))\n\n\nif __name__ == '__main__':\n db = sqlite3.connect(os.path.join(OUTBASE, 'location-log.sqlite'))\n db.execute('PRAGMA auto_vacuum = INCREMENTAL;')\n db.execute('PRAGMA journal_mode = MEMORY;')\n db.execute('PRAGMA temp_store = MEMORY;')\n db.execute('PRAGMA locking_mode = NORMAL;')\n db.execute('PRAGMA synchronous = FULL;')\n db.execute('PRAGMA encoding = \"UTF-8\";')\n\n files = glob.glob(os.path.join(INBASE, '*.csv'))\n for logfile in files:\n with open(logfile) as csvfile:\n try:\n reader = csv.reader(csvfile)\n except Exception as e:\n print('failed to open CSV reader for file: %s; %s' % (logfile, e))\n continue\n # skip the 
first row, that's headers\n headers = next(reader, None)\n for row in reader:\n epoch,lat,lon,alt,acc = parse(row)\n ins(db,epoch,lat,lon,alt,acc)\n # there's no need to commit per line, per file should be safe enough\n db.commit()\n\n db.execute('PRAGMA auto_vacuum;')\n\n results = db.execute('''\n SELECT\n *\n FROM\n data\n ORDER BY epoch ASC''').fetchall()\n prevdate = None\n gpx = gpxpy.gpx.GPX()\n\n for epoch, lat, lon, alt, acc in results:\n # in case you know your altitude might actually be valid with negative\n # values you may want to remove the -10\n if alt == 'NULL' or alt < -10:\n url = \"http://dev.virtualearth.net/REST/v1/Elevation/List?points=%s,%s&key=%s\" % (\n lat,\n lon,\n BINGKEY\n )\n bing = requests.get(url).json()\n # gotta love enterprise API endpoints\n if not bing or \\\n 'resourceSets' not in bing or \\\n not len(bing['resourceSets']) or \\\n 'resources' not in bing['resourceSets'][0] or \\\n not len(bing['resourceSets'][0]) or \\\n 'elevations' not in bing['resourceSets'][0]['resources'][0] or \\\n not bing['resourceSets'][0]['resources'][0]['elevations']:\n alt = 0\n else:\n alt = float(bing['resourceSets'][0]['resources'][0]['elevations'][0])\n print('got altitude from bing: %s for %s,%s' % (alt,lat,lon))\n db.execute('''\n UPDATE\n data\n SET\n altitude = ?\n WHERE\n epoch = ?\n AND\n latitude = ?\n AND\n longitude = ?\n LIMIT 1\n ''',(alt, epoch, lat, lon))\n db.commit()\n del(bing)\n del(url)\n date = arrow.get(epoch).format('YYYY-MM-DD')\n if not prevdate or prevdate != date:\n # write previous out\n gpxfile = os.path.join(OUTBASE, \"%s.gpx\" % (date))\n with open(gpxfile, 'wt') as f:\n f.write(gpx.to_xml())\n print('created file: %s' % gpxfile)\n\n # create new\n gpx = gpxpy.gpx.GPX()\n prevdate = date\n\n # Create first track in our GPX:\n gpx_track = gpxpy.gpx.GPXTrack()\n gpx.tracks.append(gpx_track)\n\n # Create first segment in our GPX track:\n gpx_segment = gpxpy.gpx.GPXTrackSegment()\n 
gpx_track.segments.append(gpx_segment)\n\n # Create points:\n gpx_segment.points.append(\n gpxpy.gpx.GPXTrackPoint(\n lat,\n lon,\n elevation=alt,\n time=arrow.get(epoch).datetime\n )\n )\n\n db.close()\n\nOnce this is done, the OUTBASE directory will be populated with .gpx files, one per day.\nGpsPrune\nGpsPrune is a desktop, Qt-based GPX track visualizer. It needs data connectivity to have nice maps in the background, but it can do a lot of funky things, including editing GPX tracks.\nsudo apt install gpsprune\nKeep in mind that the export script overwrites the GPX files, so the data needs to be fixed in the SQLite database.\nThis is an example screenshot of GpsPrune, showing our 2-day walk down from Mount Emei and its endless stairs:\n \n\nemei\nHappy tracking!\n\n\nhttps://www.traccar.org/\u21a9\nhttps://owntracks.org/\u21a9\nhttps://indieweb.org/manual_until_it_hurts\u21a9\nhttp://www.gpsies.com/backitude.do\u21a9\ngaugler.backitude.apk\u21a9\nhttps://syncthing.net/\u21a9\nbackitude.prefs\u21a9\nhttps://docs.syncthing.net/intro/getting-started.html\u21a9\nhttps://msdn.microsoft.com/en-us/library/ff428642\u21a9" }, "name": "GPS tracking without a server", "post-type": "article", "_id": "1124870", "_source": "268", "_is_read": true }
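One note on the script quoted above: the comment in parse() says arrow couldn't parse Backitude's timestamp directly, which is why it falls back to a regex and manual re-assembly. For reference, Python's standard-library datetime handles that exact format on its own — a minimal sketch (the function name backitude_epoch is illustrative, not part of the original script):

```python
from datetime import datetime, timezone

def backitude_epoch(stamp):
    # Backitude stamps look like "2018-10-04T21:43:28.123Z" (UTC).
    # %f consumes the fractional-seconds part and Z is matched literally,
    # so no regex is needed.
    dt = datetime.strptime(stamp, "%Y-%m-%dT%H:%M:%S.%fZ")
    return int(dt.replace(tzinfo=timezone.utc).timestamp())

print(backitude_epoch("2018-10-04T21:43:28.123Z"))  # 1538689408
```

Dropping into stdlib parsing like this would remove both the DATE regex and the arrow format string from parse(), while returning the same integer epoch the rest of the script expects.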
{ "type": "entry", "published": "2018-10-02T20:42:46+00:00", "url": "https://cleverdevil.io/2018/a-huge-congratulations-to-getsource-who-i", "category": [ "IndieWeb" ], "syndication": [ "https://twitter.com/cleverdevil/status/1047225419232161793" ], "content": { "text": "A huge congratulations to @GetSource, who I had the absolute pleasure of working with for 6+ years. Great hire for @GoDaddy. I hope you get a chance to work on #IndieWeb features for @WordPress! Best of luck. I'm sure you'll knock it out of the park. \ud83d\ude00\n\nhttps://twitter.com/GetSource/status/1047223678218383362", "html": "A huge congratulations to @GetSource, who I had the absolute pleasure of working with for 6+ years. Great hire for @GoDaddy. I hope you get a chance to work on <a href=\"https://cleverdevil.io/tag/IndieWeb\" class=\"p-category\">#IndieWeb</a> features for @WordPress! Best of luck. I'm sure you'll knock it out of the park. \ud83d\ude00<br /><br /><a href=\"https://twitter.com/GetSource/status/1047223678218383362\">https://twitter.com/GetSource/status/1047223678218383362</a>" }, "author": { "type": "card", "name": "Jonathan LaCour", "url": "https://cleverdevil.io/profile/cleverdevil", "photo": "https://aperture-proxy.p3k.io/77e5d6e5871324c43aebf2e3e7a5553e14578f66/68747470733a2f2f636c65766572646576696c2e696f2f66696c652f66646263373639366135663733383634656131316138323863383631653133382f7468756d622e6a7067" }, "post-type": "note", "_id": "1121855", "_source": "71", "_is_read": true }
Looking forward to another Homebrew Website Club Baltimore, tomorrow!
It’s an IndieWeb! Come learn some ways to free your content and your social sharing from the social networking silos!
{ "type": "entry", "published": "2018-10-02T16:27:40-04:00", "rsvp": "yes", "url": "https://martymcgui.re/2018/10/02/162740/", "syndication": [ "https://twitter.com/schmarty/status/1047222938947260416", "https://www.facebook.com/marty.mcguire.54/posts/10212976492508999" ], "in-reply-to": [ "https://martymcgui.re/2018/09/20/124541/" ], "content": { "text": "I'm going!Looking forward to another Homebrew Website Club Baltimore, tomorrow!\nIt\u2019s an IndieWeb! Come learn some ways to free your content and your social sharing from the social networking silos!", "html": "I'm going!<p>Looking forward to another Homebrew Website Club Baltimore, tomorrow!</p>\n<p>It\u2019s an <a href=\"https://indieweb.org/\">IndieWeb</a>! Come learn some ways to free your content and your social sharing from the social networking silos!</p>" }, "author": { "type": "card", "name": "Marty McGuire", "url": false, "photo": "https://aperture-proxy.p3k.io/8275f85e3a389bd0ae69f209683436fc53d8bad9/68747470733a2f2f6d617274796d636775692e72652f696d616765732f6c6f676f2e6a7067" }, "post-type": "rsvp", "refs": { "https://martymcgui.re/2018/09/20/124541/": { "type": "entry", "published": "2018-09-20T12:45:41-04:00", "summary": "Please note: We are meeting on Wednesday this week at 7:30pm. Be sure to double-check your calendars! Join us for an evening of quiet writing, IndieWeb demos, and discussions! Create or update your personal web site! Finish that blog post you\u2019ve been writing, edit the wiki! Demos of recent IndieWeb breakthroughs, share what you\u2019ve gotten working! Join a community with...", "url": "https://martymcgui.re/2018/09/20/124541/", "name": "Homebrew Website Club Baltimore", "author": { "type": "card", "name": "martymcgui.re", "url": "http://martymcgui.re", "photo": null }, "post-type": "article" } }, "_id": "1121266", "_source": "175", "_is_read": true }
{ "type": "entry", "published": "2018-10-02T15:51:50-04:00", "url": "https://martymcgui.re/2018/10/02/155150/", "category": [ "IndieWeb", "IWC", "IWCNYC", "site-update", "projects" ], "syndication": [ "https://twitter.com/schmarty/status/1047214824470528000", "https://www.facebook.com/marty.mcguire.54/posts/10212976373146015" ], "name": "Quick thoughts on project ideas from IndieWebCamp NYC 2018", "content": { "text": "I attended IndieWebCamp NYC 2018 and it was a blast! Check the schedule for links to notes and videos from the awesome keynotes, discussion sessions, and build-day demos. I am so grateful to all the other organizers, to all the new and familiar faces that came out, to those that joined us remotely, to Pace University's Seidenberg School for hosting us, and of course to the sponsors that made it all possible.\n \n\nI have a lot of thoughts about all the discussions and projects that were talked about, I'm sure. But for now, I'd like to capture some of the TODOs and project ideas that I came away with after the event, and the post-event discussions over food and drink.\n\n A Micropub Media Endpoint built on Neocities for storage and Glitch for handling uploads and metadata. It would allow folks to store 1GB of media files like photos, audio, and video for their websites, for free. It would be usable with all kinds of posting tools, no matter what backend you use for your site.\n (Hilarious?) bonus: that content would be available peer-to-peer over IPFS.\n \n Improve the IndieWeb Web Ring (\ud83d\udd78\ud83d\udc8d.ws) to automatically check whether members' sites link back using Webmention. (I managed to make a small but often-asked-for update to the site during IWC)\n \n Improve how my website handles all these check-in posts which are made when someone else checks me in on Swarm. 
I would like to show who checked me in, at least, if not some of their photos, or maybe even an embedded version of the post from their site.\n \n\n Keep doing the This Week in the IndieWeb podcast! I had been feeling some burnout about this and falling behind. It was so great to talk with folks who listen to it and rely on it to keep up to date with the goings-on in the community!\n Offer a hand with aaronpk's new social monster catching game, built on IndieWeb building blocks.\n Offer a hand with jgmac1106's idea to issue educational course achievements (badges) via IndieWeb building blocks.\n Work on closing down Camura, a photo-sharing social network I helped build during the awkward age after the first \"camera phones\" and before Facebook introduced \"Mobile Uploads\". It has over 100k photos and 50k comments from around 400 folks. I'd like to let it down gently, make sure people have access to those photos, and maybe even preserve some of the best moments of human connection in a public place.\n\n More generally: I think there's a really cool future where IndieWeb building blocks are available on free services like Glitch and Neocities. New folks should be able to register a domain and plug them together in an afternoon, with no coding, and get a website that supports posting all kinds of content and social interactions. All for the cost of a domain! And all with the ability to download their content and take it with them if these services change or they outgrow them. I already built some of this as a goof. The big challenges are simplifying the UX and documenting all of the steps to show folks what they will get and how to get it.\n \n\nOther fun / ridiculous ideas discussed over the weekend:\n\n Support Facebook-style colored-background posts like aaronpk did at IWC. I love the simplicity of adding an RGB color as a hashtag.\n \n\n \n \"This American Bachelor\" (working title only) - a dating site as a podcast. Each episode (or season??) 
is an NPR-style deep dive into the life and longings of a single person looking for love. Alternate title: \"Single\". The cocktail-driven discussion that produced this idea was a joy.\n \n\n\n I am sure there are fun ideas that were discussed that I am leaving out. If you can think of any, let me know!", "html": "<p>\n I attended <a href=\"https://indieweb.org/2018/NYC\">IndieWebCamp NYC 2018</a> and it was a blast! Check the <a href=\"https://indieweb.org/2018/NYC#Schedule\">schedule</a> for links to notes and videos from the awesome keynotes, discussion sessions, and build-day demos. I am so grateful to all the other <a href=\"https://indieweb.org/2018/NYC#Organizers\">organizers</a>, to all the new and familiar faces that came out, to those that joined us remotely, to <a href=\"https://www.pace.edu/seidenberg/\">Pace University's Seidenberg School</a> for hosting us, and of course to the <a href=\"https://indieweb.org/2018/NYC#Sponsors\">sponsors</a> that made it all possible.\n <br /></p>\n<p>I have a lot of thoughts about all the discussions and projects that were talked about, I'm sure. But for now, I'd like to capture some of the TODOs and project ideas that I came away with after the event, and the post-event discussions over food and drink.</p>\n<ul><li>\n A <a href=\"https://indieweb.org/media_endpoint\">Micropub Media Endpoint</a> built on <a href=\"https://neocities.org/\">Neocities</a> for storage and <a href=\"https://glitch.com/\">Glitch</a> for handling uploads and metadata. It would allow folks to store 1GB of media files like photos, audio, and video for their websites, for free. It would be usable with all kinds of <a href=\"https://indieweb.org/Micropub/Clients\">posting tools</a>, no matter what <a href=\"https://indieweb.org/Micropub/Servers\">backend</a> you use for your site.\n <ul><li>(Hilarious?) 
bonus: that content would be available <a href=\"https://blog.neocities.org/blog/2015/09/08/its-time-for-the-distributed-web.html\">peer-to-peer over IPFS</a>.</li>\n </ul></li>\n <li>Improve the <a href=\"https://indieweb.org/indiewebring\">IndieWeb Web Ring</a> (<a href=\"https://xn--sr8hvo.ws/\">\ud83d\udd78\ud83d\udc8d.ws</a>) to automatically check whether members' sites link back using <a href=\"https://indieweb.org/Webmention\">Webmention</a>. (I managed to make a <a href=\"https://martymcgui.re/2018/09/29/114553/\">small but often-asked-for update</a> to the site during IWC)</li>\n <li>\n Improve how my website handles all <a href=\"https://martymcgui.re/2018/09/29/160439/\">these</a> <a href=\"https://martymcgui.re/2018/09/29/161113/\">check-in</a> <a href=\"https://martymcgui.re/2018/09/29/182249/\">posts</a> which are made when someone else checks me in on <a href=\"https://www.swarmapp.com/\">Swarm</a>. I would like to show who checked me in, at least, if not some of their photos, or maybe even an embedded version of the post from their site.\n <br /></li>\n <li>Keep doing the <a href=\"https://martymcgui.re/podcasts/indieweb/\">This Week in the IndieWeb podcast</a>! I had been feeling some burnout about this and falling behind. 
It was so great to talk with folks who listen to it and rely on it to keep up to date with the goings-on in the community!</li>\n <li>Offer a hand with <a href=\"https://aaronparecki.com/\">aaronpk's</a> new <a href=\"https://monstr.space/\">social monster catching game</a>, built on IndieWeb building blocks.</li>\n <li>Offer a hand with <a href=\"http://jgregorymcverry.com/\">jgmac1106's</a> <a href=\"http://jgregorymcverry.com/my-goals-for-indiewebcamp-nyc-openbadges-endorsement-at-the-dns-level/\">idea</a> to issue educational course achievements (<a href=\"http://jgregorymcverry.com/webmention-badges-discussion-across-networks-after-indiewebcamp-nyc-session/\">badges</a>) via IndieWeb building blocks.</li>\n <li>Work on closing down <a href=\"https://camura.com/\">Camura</a>, a photo-sharing social network I helped build during the awkward age after the first \"camera phones\" and before Facebook introduced \"Mobile Uploads\". It has over 100k photos and 50k comments from around 400 folks. I'd like to let it down gently, make sure people have access to those photos, and maybe even preserve some of the best moments of human connection in a public place.</li>\n</ul><p>\n More generally: I think there's a <i>really cool</i> future where <a href=\"https://indieweb.org/Category:building-blocks\">IndieWeb building blocks</a> are available on free services like Glitch and Neocities. New folks should be able to register a domain and plug them together in an afternoon, with no coding, and get a website that supports posting all kinds of content and social interactions. All for the cost of a domain! And all with the ability to download their content and take it with them if these services change or they outgrow them. I <a href=\"https://martymcgui.re/2018/03/12/130455/\">already built some of this</a> as a goof. 
The big challenges are simplifying the UX and documenting all of the steps to show folks what they will get and how to get it.\n <br /></p>\n<p>Other fun / ridiculous ideas discussed over the weekend:</p>\n<ul><li>\n Support Facebook-style colored-background posts <a href=\"https://martymcgui.re/2018/09/30/113226/\">like aaronpk did at IWC</a>. I love the simplicity of adding an RGB color as a hashtag.\n <br /></li>\n <li>\n \"This American Bachelor\" (working title only) - a dating site as a podcast. Each episode (or season??) is an NPR-style deep dive into the life and longings of a single person looking for love. Alternate title: \"Single\". The cocktail-driven discussion that produced this idea was a joy.\n <br /></li>\n</ul><p>\n I am sure there are fun ideas that were discussed that I am leaving out. If you can think of any, let me know!\n <br /></p>" }, "author": { "type": "card", "name": "Marty McGuire", "url": false, "photo": "https://aperture-proxy.p3k.io/8275f85e3a389bd0ae69f209683436fc53d8bad9/68747470733a2f2f6d617274796d636775692e72652f696d616765732f6c6f676f2e6a7067" }, "post-type": "article", "_id": "1121001", "_source": "175", "_is_read": true }
Definitely! It’s a great idea. In fact, a couple of us in the IndieWeb chat have actually done some brainstorming and two people have worked on some code for that stuff.
{ "type": "entry", "published": "2018-10-01T22:57:55-04:00", "summary": "Definitely! It\u2019s a great idea. In fact, a couple of us in the IndieWeb chat have actually done some brainstorming and two people have worked on some code for that stuff.", "url": "https://eddiehinkle.com/2018/10/01/26/reply/", "in-reply-to": [ "https://jj.isgeek.net/2018/10/02-123939-am/" ], "content": { "text": "Definitely! It\u2019s a great idea. In fact, a couple of us in the IndieWeb chat have actually done some brainstorming and two people have worked on some code for that stuff.", "html": "<p>Definitely! It\u2019s a great idea. In fact, a couple of us in the IndieWeb chat have actually done some brainstorming and two people have worked on some code for that stuff.</p>" }, "author": { "type": "card", "name": "Eddie Hinkle", "url": "https://eddiehinkle.com/", "photo": "https://aperture-proxy.p3k.io/cc9591b69c2c835fa2c6e23745b224db4b4b431f/68747470733a2f2f656464696568696e6b6c652e636f6d2f696d616765732f70726f66696c652e6a7067" }, "post-type": "reply", "refs": { "https://jj.isgeek.net/2018/10/02-123939-am/": { "type": "entry", "url": "https://jj.isgeek.net/2018/10/02-123939-am/", "name": "https://jj.isgeek.net/2018/10/02-123939-am/", "post-type": "article" } }, "_id": "1118300", "_source": "226", "_is_read": true }
{ "type": "entry", "published": "2018-10-01 18:12-0700", "url": "http://tantek.com/2018/274/t3/undo-indiewebcamp-open-design", "category": [ "Undo" ], "content": { "text": "This past Friday I led a session on #Undo @IndieWebCamp NYC.\n\nI\u2019ve wanted Undo in my posting UI (like Gmail undo send) since I started @Falcon in 2009. Decided it\u2019s time to open up all my design thinking.\nSession: https://indieweb.org/2018/NYC/undo\nDesign: https://indieweb.org/undo\n\nSketches and more to follow. Open sourcing my undo design work because I want to help enable it everywhere. I have a theory that \"Undo\" in posting UIs may help improve online conversation dynamics.", "html": "This past Friday I led a session on #<span class=\"p-category\">Undo</span> <a class=\"h-cassis-username\" href=\"https://twitter.com/IndieWebCamp\">@IndieWebCamp</a> NYC.<br /><br />I\u2019ve wanted Undo in my posting UI (like Gmail undo send) since I started <a class=\"h-cassis-username\" href=\"https://twitter.com/Falcon\">@Falcon</a> in 2009. Decided it\u2019s time to open up all my design thinking.<br />Session: <a href=\"https://indieweb.org/2018/NYC/undo\">https://indieweb.org/2018/NYC/undo</a><br />Design: <a href=\"https://indieweb.org/undo\">https://indieweb.org/undo</a><br /><br />Sketches and more to follow. Open sourcing my undo design work because I want to help enable it everywhere. I have a theory that \"Undo\" in posting UIs may help improve online conversation dynamics." }, "author": { "type": "card", "name": "Tantek \u00c7elik", "url": "http://tantek.com/", "photo": "https://aperture-media.p3k.io/tantek.com/acfddd7d8b2c8cf8aa163651432cc1ec7eb8ec2f881942dca963d305eeaaa6b8.jpg" }, "post-type": "note", "_id": "1116282", "_source": "1", "_is_read": true }
{ "type": "entry", "published": "2018-10-01 17:37-0700", "url": "http://tantek.com/2018/274/t2/vcard4-hcard-most-interop", "category": [ "PortableContacts", "vCard4", "hcard" ], "in-reply-to": [ "https://twitter.com/chrismessina/status/1046569740892688384" ], "content": { "text": "@chrismessina FOAF was unnecessary reinvention of vCard, still is.\n#PortableContacts bad news, now zombie site https://indieweb.org/Portable_Contacts\nXFN still here, mostly rel=me; Mastodon added support.\n#vCard4 #hcard have most interop across devices apps sites: http://microformats.org/wiki/h-card", "html": "<a class=\"h-cassis-username\" href=\"https://twitter.com/chrismessina\">@chrismessina</a> FOAF was unnecessary reinvention of vCard, still is.<br />#<span class=\"p-category\">PortableContacts</span> bad news, now zombie site <a href=\"https://indieweb.org/Portable_Contacts\">https://indieweb.org/Portable_Contacts</a><br />XFN still here, mostly rel=me; Mastodon added support.<br />#<span class=\"p-category\">vCard4</span> #<span class=\"p-category\">hcard</span> have most interop across devices apps sites: <a href=\"http://microformats.org/wiki/h-card\">http://microformats.org/wiki/h-card</a>" }, "author": { "type": "card", "name": "Tantek \u00c7elik", "url": "http://tantek.com/", "photo": "https://aperture-media.p3k.io/tantek.com/acfddd7d8b2c8cf8aa163651432cc1ec7eb8ec2f881942dca963d305eeaaa6b8.jpg" }, "post-type": "reply", "refs": { "https://twitter.com/chrismessina/status/1046569740892688384": { "type": "entry", "url": "https://twitter.com/chrismessina/status/1046569740892688384", "name": "@chrismessina\u2019s tweet", "post-type": "article" } }, "_id": "1116283", "_source": "1", "_is_read": true }
{ "type": "entry", "published": "2018-10-01 15:47-0700", "url": "http://tantek.com/2018/274/t1/indiewebcamp-nyc-photos-notes-posted", "category": [ "undo", "readers", "notifications", "learntobuild", "dataportability", "buildingblocks", "badges", "activitypub:" ], "content": { "text": "Good times @IndieWebCamp NYC! Huge thanks to host @PaceUniversity & organizers @jgmac1106 @schmarty @dshanske!\nPhotos etc: https://indieweb.org/2018/NYC\nSession notes posted: #undo #readers #notifications #learntobuild #dataportability #buildingblocks #badges #activitypub: https://indieweb.org/2018/NYC/Sessions", "html": "Good times <a class=\"h-cassis-username\" href=\"https://twitter.com/IndieWebCamp\">@IndieWebCamp</a> NYC! Huge thanks to host <a class=\"h-cassis-username\" href=\"https://twitter.com/PaceUniversity\">@PaceUniversity</a> & organizers <a class=\"h-cassis-username\" href=\"https://twitter.com/jgmac1106\">@jgmac1106</a> <a class=\"h-cassis-username\" href=\"https://twitter.com/schmarty\">@schmarty</a> <a class=\"h-cassis-username\" href=\"https://twitter.com/dshanske\">@dshanske</a>!<br />Photos etc: <a href=\"https://indieweb.org/2018/NYC\">https://indieweb.org/2018/NYC</a><br />Session notes posted: #<span class=\"p-category\">undo</span> #<span class=\"p-category\">readers</span> #<span class=\"p-category\">notifications</span> #<span class=\"p-category\">learntobuild</span> #<span class=\"p-category\">dataportability</span> #<span class=\"p-category\">buildingblocks</span> #<span class=\"p-category\">badges</span> #<span class=\"p-category\">activitypub:</span> <a href=\"https://indieweb.org/2018/NYC/Sessions\">https://indieweb.org/2018/NYC/Sessions</a>" }, "author": { "type": "card", "name": "Tantek \u00c7elik", "url": "http://tantek.com/", "photo": "https://aperture-media.p3k.io/tantek.com/acfddd7d8b2c8cf8aa163651432cc1ec7eb8ec2f881942dca963d305eeaaa6b8.jpg" }, "post-type": "note", "_id": "1116284", "_source": "1", "_is_read": true }
Homebrew Website Club is this Wednesday, 6:30pm at Mozart’s Coffee. If the weather’s nice we’ll meet outside. I’m catching up on videos from IndieWebCamp NYC so I can summarize that event for the Austin group.
{ "type": "entry", "author": { "name": null, "url": "https://www.manton.org/", "photo": null }, "url": "https://www.manton.org/2018/10/01/185056.html", "content": { "html": "<p>Homebrew Website Club is this Wednesday, 6:30pm at Mozart\u2019s Coffee. If the weather\u2019s nice we\u2019ll meet outside. I\u2019m catching up on videos from IndieWebCamp NYC so I can summarize that event for the Austin group.</p>", "text": "Homebrew Website Club is this Wednesday, 6:30pm at Mozart\u2019s Coffee. If the weather\u2019s nice we\u2019ll meet outside. I\u2019m catching up on videos from IndieWebCamp NYC so I can summarize that event for the Austin group." }, "published": "2018-10-01T13:50:56-05:00", "post-type": "note", "_id": "1115796", "_source": "12", "_is_read": true }
{ "type": "entry", "published": "2018-10-01T19:44:52-04:00", "url": "https://martymcgui.re/2018/10/01/194452/", "category": [ "podcast", "IndieWeb" ], "audio": [ "https://aperture-proxy.p3k.io/0bf739ddb7a5eb104a3facffd82adac0a1b9ce5c/68747470733a2f2f6d656469612e6d617274796d636775692e72652f66342f61362f66332f36352f38383637653637386537313566343736306433323734323239303336303630393333373664636533303461333963303834633730363664612e6d7033" ], "syndication": [ "https://huffduffer.com/schmarty/504740", "https://twitter.com/schmarty/status/1046909525926842368", "https://www.facebook.com/marty.mcguire.54/posts/10212971376261096" ], "name": "This Week in the IndieWeb Audio Edition \u2022 September 15th - 21st, 2018", "content": { "text": "Show/Hide Transcript \n \n Another late one but a great one. Mastodon adds rel-me, geocaching with WordPress, and Path ends their incredible journey. It\u2019s the audio edition for This Week in the IndieWeb for September 15th - 21st, 2018.\n\nYou can find all of my audio editions and subscribe with your favorite podcast app here: martymcgui.re/podcasts/indieweb/.\n\nMusic from Aaron Parecki\u2019s 100DaysOfMusic project: Day 85 - Suit, Day 48 - Glitch, Day 49 - Floating, Day 9, and Day 11\n\nThanks to everyone in the IndieWeb chat for their feedback and suggestions. Please drop me a note if there are any changes you\u2019d like to see for this audio edition!", "html": "Show/Hide Transcript \n \n <p>Another late one but a great one. Mastodon adds rel-me, geocaching with WordPress, and Path ends their incredible journey. 
It\u2019s the audio edition for <a href=\"https://indieweb.org/this-week/2018-09-21.html\">This Week in the IndieWeb for September 15th - 21st, 2018</a>.</p>\n\n<p>You can find all of my audio editions and subscribe with your favorite podcast app here: <a href=\"https://martymcgui.re/podcasts/indieweb/\">martymcgui.re/podcasts/indieweb/</a>.</p>\n\n<p>Music from <a href=\"https://aaronparecki.com/\">Aaron Parecki</a>\u2019s <a href=\"https://100.aaronparecki.com/\">100DaysOfMusic project</a>: <a href=\"https://aaronparecki.com/2017/03/15/14/day85\">Day 85 - Suit</a>, <a href=\"https://aaronparecki.com/2017/02/06/7/day48\">Day 48 - Glitch</a>, <a href=\"https://aaronparecki.com/2017/02/07/4/day49\">Day 49 - Floating</a>, <a href=\"https://aaronparecki.com/2016/12/29/21/day-9\">Day 9</a>, and <a href=\"https://aaronparecki.com/2016/12/31/15/\">Day 11</a></p>\n\n<p>Thanks to everyone in the <a href=\"https://chat.indieweb.org/\">IndieWeb chat</a> for their feedback and suggestions. Please drop me a note if there are any changes you\u2019d like to see for this audio edition!</p>" }, "author": { "type": "card", "name": "Marty McGuire", "url": false, "photo": "https://aperture-proxy.p3k.io/8275f85e3a389bd0ae69f209683436fc53d8bad9/68747470733a2f2f6d617274796d636775692e72652f696d616765732f6c6f676f2e6a7067" }, "post-type": "audio", "_id": "1115634", "_source": "175", "_is_read": true }
{ "type": "entry", "published": "2018-10-01T17:28:24-04:00", "url": "https://martymcgui.re/2018/10/01/172824/", "category": [ "podcast", "IndieWeb", "this-week-indieweb-podcast" ], "audio": [ "https://aperture-proxy.p3k.io/ce01ca4ce7c36e202a19e36cb6198df661841769/68747470733a2f2f6d656469612e6d617274796d636775692e72652f39642f33362f33622f36382f65393332356361623430383137333737663438663163353932393733353565636435643732616661613133393431333038393233386166342e6d7033" ], "syndication": [ "https://huffduffer.com/schmarty/504728", "https://twitter.com/schmarty/status/1046875366869147648", "https://www.facebook.com/marty.mcguire.54/posts/10212970840327698" ], "name": "This Week in the IndieWeb Audio Edition \u2022 September 8th - 14th, 2018", "content": { "text": "Show/Hide Transcript \n \n Two weeks late but better than never! Pronoun buttons, a class on IndieWeb, and a Google takeover of the web. It\u2019s the audio edition for This Week in the IndieWeb for September 8th - 14th, 2018.\n\nYou can find all of my audio editions and subscribe with your favorite podcast app here: martymcgui.re/podcasts/indieweb/.\n\nMusic from Aaron Parecki\u2019s 100DaysOfMusic project: Day 85 - Suit, Day 48 - Glitch, Day 49 - Floating, Day 9, and Day 11\n\nThanks to everyone in the IndieWeb chat for their feedback and suggestions. Please drop me a note if there are any changes you\u2019d like to see for this audio edition!", "html": "Show/Hide Transcript \n \n <p>Two weeks late but better than never! Pronoun buttons, a class on IndieWeb, and a Google takeover of the web. 
It\u2019s the audio edition for <a href=\"https://indieweb.org/this-week/2018-09-14.html\">This Week in the IndieWeb for September 8th - 14th, 2018</a>.</p>\n\n<p>You can find all of my audio editions and subscribe with your favorite podcast app here: <a href=\"https://martymcgui.re/podcasts/indieweb/\">martymcgui.re/podcasts/indieweb/</a>.</p>\n\n<p>Music from <a href=\"https://aaronparecki.com/\">Aaron Parecki</a>\u2019s <a href=\"https://100.aaronparecki.com/\">100DaysOfMusic project</a>: <a href=\"https://aaronparecki.com/2017/03/15/14/day85\">Day 85 - Suit</a>, <a href=\"https://aaronparecki.com/2017/02/06/7/day48\">Day 48 - Glitch</a>, <a href=\"https://aaronparecki.com/2017/02/07/4/day49\">Day 49 - Floating</a>, <a href=\"https://aaronparecki.com/2016/12/29/21/day-9\">Day 9</a>, and <a href=\"https://aaronparecki.com/2016/12/31/15/\">Day 11</a></p>\n\n<p>Thanks to everyone in the <a href=\"https://chat.indieweb.org/\">IndieWeb chat</a> for their feedback and suggestions. Please drop me a note if there are any changes you\u2019d like to see for this audio edition!</p>" }, "author": { "type": "card", "name": "Marty McGuire", "url": false, "photo": "https://aperture-proxy.p3k.io/8275f85e3a389bd0ae69f209683436fc53d8bad9/68747470733a2f2f6d617274796d636775692e72652f696d616765732f6c6f676f2e6a7067" }, "post-type": "audio", "_id": "1114950", "_source": "175", "_is_read": true }