This week on Fern River Club, two provocative visitors arrive in Fern River, arousing a flurry of curious speculation.

Lovers of the sensual aura of celebrity, sexy performance art, and private moments experienced in public spaces will find their appetites whetted.

https://fernriver.club/scenes/visitors/

#FernRiverClub #erotica #EroticStory #EroticFiction #SpeculativeFiction #DigitalGarden #indieweb #PerformanceArt #SocialNudity #EroticArt #celebrity #Amsterdam #Venice #SkinnyDip

With Google Zero coming, ActivityPub must become the new web, not just another social medium. Otherwise we will lose everything we have tried in order to restore our freedom.

#google #ai #gemini #activitypub #fediverse #web #socialmedia #freedom #indieweb #scrapers

@annika Been thinking aloud in #IndieWeb circles about how to handle my umpteen links from my Delicious and Pinboard and Twitter days. 👀

Yesterday I proposed the idea of a “minimum interesting service worker” that could provide a link (or links) to archives or mirrors when your site is unavailable. It’s one possible answer to the desire to make personal #indieweb sites more reliable, by giving readers at least a path to “soft repair” links to your site that may otherwise seem broken.

Minimum because it only requires two files and one line of script in your site footer template, and interesting because it provides both a novel user benefit and personal site publisher benefits.

The idea occurred to me during an informal coffee chat over Zoom with a couple of other IndieWeb community folks yesterday, and afterwards I braindumped a bit into the IndieWeb Developers Chat channel¹. Figured it was worth writing up rather than waiting to implement it.

Basic idea:

You have a service worker (and an “offline” HTML page) on your personal site, installed from any page on your site. All it does is cache the offline page; on future requests to your site it checks whether the requested page is available and, if so, serves it. Otherwise it displays your offline page, with the “site appears to be unreachable” message that a lot of service workers provide, AND an algorithmically constructed link to the page on an archive (e.g. the Internet Archive) or a static mirror of your site (typically at another domain).

This is minimal because it requires only two files: your service worker (a JS file) and your offline page (a minimal self-contained static HTML file with inline CSS). Doable in <1k bytes of code, with no additional local caching or storage requirements, thus a negligible impact on site visitors (likely less than the cookies that major sites store).
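
A minimal sketch of what such a worker could look like (the cache name and the /offline.html path here are assumptions for illustration, not part of any published implementation):

    // minimum interesting service worker sketch: cache the offline page
    // at install time, pass page navigations through to the network, and
    // fall back to the cached offline page when the site is unreachable.
    const CACHE = 'misv-v1';
    const OFFLINE_PAGE = '/offline.html';

    self.addEventListener('install', (event) => {
      event.waitUntil(
        caches.open(CACHE).then((cache) => cache.add(OFFLINE_PAGE))
      );
    });

    self.addEventListener('fetch', (event) => {
      // only handle page navigations; let all other requests pass through untouched
      if (event.request.mode !== 'navigate') return;
      event.respondWith(
        fetch(event.request).catch(() => caches.match(OFFLINE_PAGE))
      );
    });

Because the browser serves the cached offline page in place of whatever URL the reader requested, inline script in that page can still read the requested URL and build the archive/mirror link (see the link-prefixing example further down).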

User benefit:

If someone has ever visited your personal site, then whenever they later click a link to your pages or posts while your site/domain is unavailable for any reason, they will see a notice (from your offline page) and a link to view an archive/mirror copy instead, giving them a one-click way to “soft-repair” any otherwise apparently broken links to your site.

Personal site publisher benefits:

Having such a service worker that automatically provides your readers links to where they can view your content on an archive or mirror means you can go on vacation or otherwise step away from your personal site, knowing that if it does go down, (at least prior) site visitors will still have a way to click through and view your published content.

Additional enhancements:

Ideally any archive or mirror copies would use rel=canonical to link back to the page on your domain, so any crawlers or search engines could automatically prefer your original page, or browsers could offer the user a choice to “View original”. You can do that by including a rel=canonical link in all your original pages, so when they are archived or mirrored, those copies automatically include a rel=canonical link back to your original page or post.

The simplest implementation would be to ping the Internet Archive to save² your page or post upon publishing it. You could also add code to your site to explicitly generate a static mirror of your pages, perhaps with an SSG or crawler like Spiderpig, to a GitHub repo, which is then auto-served as GitHub static pages, perhaps on its own domain yet at the same paths as your original pages (to make it trivial to generate such mirror links automatically).
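
As a sketch, that publish-time ping could be as small as the following (assuming a server-side script with fetch available; the permalink is a placeholder, and the URL shown is the Wayback Machine’s public “Save Page Now” endpoint):

    // at publish time, ask the Wayback Machine to snapshot the new permalink
    const permalink = 'https://example.com/2024/151/t1/my-new-post'; // placeholder
    fetch('https://web.archive.org/save/' + permalink)
      .then((res) => console.log('archive requested, status', res.status));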

If you’re using links to the Internet Archive, you can generate them automatically by prefixing your page URL with https://web.archive.org/web/*/, e.g. this post:

https://web.archive.org/web/*/https://tantek.com/2024/151/t1/minimum-interesting-service-worker
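
In the offline page itself, that prefixing can be done with a line of inline script, since the page is served in place of the URL the reader originally requested (a sketch; the archive-link element id is made up for illustration):

    // inside offline.html: point a link at the Internet Archive copy of
    // the page the reader was trying to reach
    document.getElementById('archive-link').href =
      'https://web.archive.org/web/*/' + location.href;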

Possible generic library:

It may be possible to write this minimum interesting service worker (e.g. misv.js) as a generic (rather than site-specific) service worker that literally anyone with a personal site could “install” as is (a JS file, an HTML file, and a one-line script tag in their site-wide footer) and it would figure everything out from the context it is running in, unchanged (zero configuration necessary).
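
Assuming such a generic worker were published at /misv.js, the one-line footer script could simply be (a sketch):

    // the one line in the site-wide footer, inside a script tag
    if ('serviceWorker' in navigator) navigator.serviceWorker.register('/misv.js');

Registering with a root-relative path like /misv.js is what would let the same file work unchanged on any domain: the worker’s scope, and the archive-link prefix built in the offline page, can both be derived from the location it finds itself running at.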


This is post 14 of #100PostsOfIndieWeb. #100Posts

← https://tantek.com/2024/072/t1/created-at-indiewebcamp-brighton
→ 🔮


Post glossary:

GitHub static pages
  https://indieweb.org/GitHub_Pages
HTML
  https://indieweb.org/HTML
JS
  https://indieweb.org/js
rel-canonical
  https://indieweb.org/rel-canonical
service worker
  https://indieweb.org/service_worker
Spiderpig
  https://indieweb.org/Spiderpig
SSG
  https://indieweb.org/SSG

 
References:

¹ https://chat.indieweb.org/dev/2024-05-29#t1717006352142600
² https://indieweb.org/Internet_Archive#Trigger_an_Archive
#indieweb #100PostsOfIndieWeb #100Posts

I will be at @xoxo this year! In 2019 one great thing I got more into was the IndieWeb. There are nascent plans for an IWC (camp) in the days prior. I'm not 100% sure about that, but I do plan to be part of an #XOXOFest #IndieWeb coffee/breakfast/assembly. And you bet I'll be hitting Billy Galaxy for some 🤖 shopping.

Today marks 5 years since I published the first post on shellsharks.com. To mark the occasion, I wrote a little note on my site about the blogging/site-having journey thus far.

https://shellsharks.com/notes/2024/05/30/5-years

Thanks to everyone who has bothered to read anything I've written and to those who have reached out to me over the years to give feedback or tell me they liked something I've put out there. 🧡

#blogging #indieweb #weblogpomo2024

I wrote about the most important possession I own on the web: my permadomain.

https://rscottjones.com/my-permadomain/

You should have one, too.

[26/31] for #WeblogPoMo2024
[28/100] for #100DaystoOffload
#indieweb #personalweb #blogging

Very much enjoying reading through the thoughts of Manu at https://manuelmoreale.com recently. Highly recommend looking through his posts archive if you find yourself in search of reading material.

#smallweb #indieweb #blogroll

I got a basic #rss feed wired up for my site. It's just titles and links at the moment. Full content is the next step.

#IndieWeb

https://www.alanwsmith.com/feeds/full.xml

@elecharny
Take a look at https://smolweb.org/ and you'll see that it's still possible ;-)

A few hashtags to follow as well:

#indieweb #smallweb #smolweb #smolnet

As #SlashPages are now the hot thing in the #IndieWeb, I'd like to propose a /why page.

It could explain the site author's reasons for doing what they do, their values and philosophies. Mine is currently a redirect to a blog post but I've been meaning to rewrite it into a proper /why page in the near future.

http://hamatti.org/why

Unbelievable, but I've officially started to rebuild my personal website from scratch.

With @astro

#indieWeb #webDev #blog

now integrating @mirlo releases into items in the [[Discography]] and adding new top-level links for various recently-added platforms (hi faircamp 👋 )

#tiddlywiki #digitalgarden #indieweb #gavloud #composer #platform #mirlo #faircamp

I'm excited to announce nanosearch, a Python library that lets you create a search engine in a few lines of code.

I designed this for use in creating tiny search engines as references in my technical writing.

Learn how to use the tool on my blog:

https://jamesg.blog/2024/05/29/nanosearch/

#search #programming #websearch #making #indieweb

Helpful post from The Fediverse Report about Farcaster and their wild $1 billion valuation. They're as valuable as Instagram was when it was bought by Facebook, really? Some neat ideas buried in there, like the frame mini apps, but I'm having trouble seeing where this goes.