Archiving Stuff for Offline Use


Despite the ‘everything on the internet is permanent’ discourse that has been drilled into our heads since childhood, the opposite is true: nothing is permanent on the internet.

Every now & then, I come across dead links on the web, in my bookmarks, in my wiki & so on.

Sometimes I revisit a saved link, say a certain blog post, & it comes as no surprise that the page is down, whether because the domain has expired or the content has been removed or censored. Unfortunately, there’s often nothing we can do about it.

Everything we see on the web lives on a server. Even if the server is “virtual”, it exists on a real hard drive somewhere, which means anything can take it down, from a natural disaster to a genuine mistake.

The content might be locked behind a paywall, the owner might pass away, or governments & copyright holders might take issue with the site; there are any number of reasons why a site won’t last on the internet.

Kiwix.org

Kiwix.org is a non-profit organization that provides free and open-source software for offline access to web content. Kiwix is an offline reader for content such as Wikipedia, Project Gutenberg, TED Talks, Crash Course, Wiktionary & so on.

It is available as an application for all mainstream operating systems. The content Kiwix reads is packaged in the highly compressed .zim format.

Importing zim archives into your Kiwix client is easy: launch the Kiwix app, click on Open Files, select your zim archive & you are done. Here are some useful zim archives:

library.kiwix.org - the official Kiwix library

Kiwix lets you browse all these collections offline, without any network connectivity. You can copy the zims onto a portable SSD or a thumb drive & access an offline copy of the useful web on the go, anytime.
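
If you prefer the command line, the kiwix-tools package ships kiwix-serve, which serves a zim archive over a local HTTP port so you can read it in any browser. A minimal sketch, assuming kiwix-tools is installed; the zim filename below is illustrative (exact filenames carry date stamps & change over time, so browse library.kiwix.org for current builds):

# grab a zim archive from the official download server (filename is an example)
wget https://download.kiwix.org/zim/wikipedia/wikipedia_en_all_mini.zim

# serve it on localhost:8080 & read it in any browser
kiwix-serve --port=8080 wikipedia_en_all_mini.zim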

HTTrack

HTTrack is a free and open-source web crawler and offline browser. It lets you download a website from the World Wide Web into a local directory, recursively rebuilding its directory structure & fetching the HTML, images, and other files from the server onto your local computer.

The special thing that makes HTTrack stand out is its ability to preserve the original site’s relative link structure. You can browse a site dumped with HTTrack link by link, as if you were viewing it online.

It’s extremely easy to save an offline version of a website using HTTrack, especially on Unix-based OSes.

On Ubuntu, run the commands below to install HTTrack & archive the given website for offline access. Preferably create a separate directory to hold the dump of your desired website.

sudo apt-get install httrack 

httrack https://wiredtoolkit.netlify.app

On macOS, run

brew install httrack

httrack https://wiredtoolkit.netlify.app
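
HTTrack also accepts an output directory & a depth limit when you don’t want the whole site. A hedged sketch using its documented -O, -r & --update options (the URL, directory name & depth here are just examples):

# mirror only three link-levels deep, into a named project directory
httrack https://wiredtoolkit.netlify.app -O ./wiredtoolkit-mirror -r3

# later, refresh the same mirror in place from inside the project directory
cd ./wiredtoolkit-mirror && httrack --update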

Online Archiving

I don’t see a reason not to index tools that archive websites on the internet itself. So for online archiving, we have services like the Internet Archive’s Wayback Machine (web.archive.org) & archive.today.
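
The Wayback Machine even exposes a save endpoint & an availability API, so you can trigger & check snapshots from a script. A minimal sketch (the target URL is just an example):

# ask the Wayback Machine to take a fresh snapshot of a page
curl -s "https://web.archive.org/save/https://wiredtoolkit.netlify.app"

# check whether a snapshot already exists (returns JSON)
curl -s "https://archive.org/wayback/available?url=wiredtoolkit.netlify.app"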

That’s all for this post. If you are familiar with better archiving tools or methods, please let me know.

You can reply via mail
