
Bookmark Archiver

▶️ Quickstart | Details | Configuration | Manual Setup | Troubleshooting | Changelog

Save an archived copy of all websites you bookmark (the actual content of the sites, not just the list of bookmarks).

Outputs browsable static html archives of each site, a PDF, a screenshot, and a link to a copy on archive.org, all indexed in a nice html file.
(Your own personal Way-Back Machine.)

DEMO:

Supports: Browser Bookmarks (Chrome, Firefox, Safari, IE, Opera), Pocket, Pinboard, Shaarli, Delicious, Instapaper, and more!


1. Get your bookmarks:

Follow the links here to find instructions for exporting bookmarks from each service.

(If any of these links are broken, please submit an issue and I'll fix it)

2. Create your archive:

git clone
cd bookmark-archiver/
./   # install ALL dependencies
./ ~/Downloads/bookmark_export.html   # replace with the path to your export file from step 1

You can open service/index.html to view your archive. (favicons will appear next to each title once it has finished downloading)

If you have any trouble, see the Troubleshooting section at the bottom.
If you'd like to customize options, see the Configuration section.

If you want something easier than running programs on the command line, take a look at Pocket Premium (yay Mozilla!) and Pinboard Pro, which both offer easy-to-use bookmark archiving with full-text search.

Details

The archiver is a script that takes a Pocket-format, Pinboard-format, or Netscape-format bookmark export file, and downloads a clone of each linked website to turn into a browsable archive that you can store locally or host online.
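
For illustration, parsing a Netscape-format export (the `<DT><A HREF=...>` lines browsers produce) can be sketched in a few lines of Python. This is a hypothetical sketch, not the project's actual parser:

```python
import re

def parse_netscape_export(html):
    """Extract url/title/timestamp dicts from a Netscape-format
    bookmark export (the <DT><A HREF=...> lines browsers produce)."""
    pattern = re.compile(
        r'<a\s+href="(?P<url>[^"]+)"[^>]*'
        r'add_date="(?P<ts>\d+)"[^>]*>(?P<title>[^<]*)</a>',
        re.IGNORECASE,
    )
    for match in pattern.finditer(html):
        yield {
            'url': match.group('url'),
            'title': match.group('title'),
            'timestamp': match.group('ts'),
        }

sample = '<DT><A HREF="https://example.com" ADD_DATE="1498800000">Example</A>'
links = list(parse_netscape_export(sample))
```

A real parser has to tolerate more attribute orderings and encodings than this regex does, which is why broken export files are worth reporting as issues.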

The archiver produces a folder like pocket/ containing an index.html, and archived copies of all the sites, organized by starred timestamp. It's powered by headless Chromium and good ol' wget.

For each site it saves:

  • a wget clone of the site, with .html appended if not present
  • screenshot.png 1440x900 screenshot of site using headless chrome
  • output.pdf Printed PDF of site using headless chrome
  • A link to the saved site on archive.org
  • link.json A json file containing link info and archive status
  • audio/ and video/ for sites like youtube, soundcloud, etc. (using youtube-dl) (WIP)

Wget and Chrome don't work on sites you need to be logged into (yet). chrome --headless essentially runs in an incognito mode session, until they add support for --user-data-dir=.

Large Exports & Estimated Runtime:

I've found it takes about an hour to download 1000 articles, and they'll take up roughly 1GB.
Those numbers are from running it single-threaded on my i5 machine with 50mbps down. YMMV.
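
Those rates can be turned into a quick back-of-envelope estimate; e.g. this small Python helper (the default rates are just the rough numbers above, your mileage will vary):

```python
def estimate(num_links, links_per_hour=1000, gb_per_1000=1.0):
    """Rough runtime and disk estimate based on ~1000 articles/hour
    and ~1GB per 1000 articles (single-threaded; numbers will vary)."""
    hours = num_links / links_per_hour
    gigabytes = num_links * gb_per_1000 / 1000
    return hours, gigabytes

# e.g. a large 50k-bookmark export:
hours, gb = estimate(50000)
```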

You can run it in parallel by using the resume feature, or by manually splitting export.html into multiple files:

./ export.html bookmarks 1498800000 &  # third argument is timestamp to resume downloading from
./ export.html bookmarks 1498810000 &
./ export.html bookmarks 1498820000 &
./ export.html bookmarks 1498830000 &

Users have reported running it with 50k+ bookmarks with success (though it will take more RAM while running).


You can tweak parameters via environment variables, or by editing the script directly:

env CHROME_BINARY=google-chrome-stable RESOLUTION=1440,900 FETCH_PDF=False ./ ~/Downloads/bookmarks_export.html
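
Under the hood, this kind of override is the usual getenv-with-default idiom; a minimal Python sketch (the env helper here is hypothetical, though the variable names match the options documented below):

```python
import os

def env(key, default):
    """Read a config value from the environment, falling back to a
    default, and coerce 'True'/'False' strings to booleans."""
    value = os.getenv(key, default)
    if value in ('True', 'False'):
        return value == 'True'
    return value

FETCH_PDF = env('FETCH_PDF', 'True')
RESOLUTION = env('RESOLUTION', '1440,900')
CHROME_BINARY = env('CHROME_BINARY', 'chromium-browser')
```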

Shell Options:

  • colorize console output: USE_COLOR value: [True]/False
  • show progress bar: SHOW_PROGRESS value: [True]/False
  • archive permissions: ARCHIVE_PERMISSIONS values: [755]/644/...

Dependency Options:

  • path to Chrome: CHROME_BINARY values: [chromium-browser], /usr/local/bin/google-chrome, ...
  • path to wget: WGET_BINARY values: [wget], /usr/local/bin/wget, ...

Archive Options:

  • maximum allowed download time per link: TIMEOUT values: [60]/30/...
  • archive methods (values: [True]/False):
    • fetch page with wget: FETCH_WGET
    • fetch images/css/js with wget: FETCH_WGET_REQUISITES (True is highly recommended)
    • print page as PDF: FETCH_PDF
    • fetch a screenshot of the page: FETCH_SCREENSHOT
    • fetch a favicon for the page: FETCH_FAVICON
    • submit the page to archive.org: SUBMIT_ARCHIVE_DOT_ORG
  • screenshot resolution: RESOLUTION values: [1440,900]/1024,768/...
  • user agent: WGET_USER_AGENT values: [Wget/1.19.1]/"Mozilla/5.0 ..."/...

Index Options:

  • html index template: INDEX_TEMPLATE value: templates/index.html/...
  • html index row template: INDEX_ROW_TEMPLATE value: templates/index_row.html/...

(See defaults & more at the top of the script.)

To tweak the outputted html index file's look and feel, just copy the files in templates/ somewhere else and edit away. Use the two index config variables above to point the script at your new custom template files.
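
As an illustration of how such templates get filled in, here's a minimal sketch using Python's string.Template (the placeholder names are assumptions for illustration; check the real files in templates/ for the actual variables):

```python
from string import Template

# Hypothetical row and index templates; the real templates/index.html
# and templates/index_row.html may use different placeholder names.
row_template = Template(
    '<tr><td>$timestamp</td><td><a href="$url">$title</a></td></tr>')
index_template = Template(
    '<html><body><table>$rows</table></body></html>')

links = [
    {'timestamp': '1498800000', 'url': 'https://example.com',
     'title': 'Example'},
]

# Render one row per link, then drop the rows into the index shell.
rows = '\n'.join(row_template.substitute(**link) for link in links)
index_html = index_template.substitute(rows=rows)
```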

The chrome/chromium dependency is optional, only required for screenshots and PDF output, and can be safely ignored if both of those are disabled.

Publishing Your Archive

The archive produced by ./ is suitable for serving on any provider that can host static html (e.g. GitHub Pages!).

You can also serve it from a home server or VPS by uploading the archive folder to your web directory, e.g. /var/www/pocket and configuring your webserver.

Here's a sample nginx configuration that works to serve archive folders:

location /pocket/ {
    alias       /var/www/pocket/;
    index       index.html;
    autoindex   on;                         # see directory listing upon clicking "The Files" links
    try_files   $uri $uri/ =404;
}

Make sure you're not running any content as CGI or PHP; you only want to serve static files!

URLs look like:

Security WARNING & Content Disclaimer

Hosting other people's site content has security implications for any other sites on the same domain, so make sure you understand the dangers of hosting other people's CSS & JS files on a shared domain. It's best to put the archive on a domain or subdomain of its own to slightly mitigate CSRF attacks.
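
One option for partially containing archived pages, extending the nginx example above with a sandboxed Content-Security-Policy header. This is a sketch, not a vetted security configuration; test it against your own archive before relying on it:

```nginx
location /pocket/ {
    alias       /var/www/pocket/;
    index       index.html;
    # A CSP sandbox restricts what archived pages can do when served
    # from your domain; tighten or loosen the allow-* flags to taste.
    add_header  Content-Security-Policy "sandbox allow-scripts allow-forms" always;
}
```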

You may also want to blacklist your archive in /robots.txt if you don't want to be publicly associated with all the links you archive via search engine results.
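
For example, a robots.txt at your web root (assuming the archive is served under /pocket/):

```
User-agent: *
Disallow: /pocket/
```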

Be aware that some sites you archive may not allow you to rehost their content publicly for copyright reasons; it's up to you to host responsibly and respond to takedown requests appropriately.

Info & Motivation

This is basically an open-source version of Pocket Premium (which you should consider paying for!). I got tired of sites I saved going offline or changing their URLs, so I started archiving a copy of them locally, similar to the Way-Back Machine provided by archive.org. Self-hosting your own archive allows you to save PDFs & screenshots of dynamic sites in addition to static html, something archive.org doesn't do.

Now I can rest soundly knowing important articles and resources I like won't disappear off the internet.

My published archive as an example:

Manual Setup

If you don't like running random setup scripts off the internet (:+1:), you can follow these manual setup instructions.

1. Install dependencies: chromium >= 59, wget >= 1.16, python3 >= 3.5 (google-chrome >= v59 also works well)

If you already have Google Chrome installed, or wish to use that instead of Chromium, follow the Google Chrome Instructions.

# On Mac:
brew cask install chromium  # If you already have Google Chrome/Chromium in /Applications/, skip this command
brew install wget python3

echo -e '#!/bin/bash\n/Applications/ "$@"' > /usr/local/bin/chromium-browser  # see instructions for google-chrome below
chmod +x /usr/local/bin/chromium-browser
# On Ubuntu/Debian:
apt install chromium-browser python3 wget
# Check that everything worked:
chromium-browser --version && which wget && which python3 && which curl && echo "[√] All dependencies installed."

2. Get your bookmark export file:

Follow the instruction links above in the "Quickstart" section to download your bookmarks export file.

3. Run the archive script:

  1. Clone this repo git clone
  2. cd bookmark-archiver/
  3. ./ ~/Downloads/bookmarks_export.html

You may optionally specify a third argument [pocket|pinboard|bookmarks] after the export filename to enforce the use of a specific link parser.

If you have any trouble, see the Troubleshooting section at the bottom.

Google Chrome Instructions:

I recommend Chromium instead of Google Chrome, since it's open source and doesn't send your data to Google. Chromium may have some issues rendering some sites though, so you're welcome to try Google Chrome instead. It's also easier to use Google Chrome if you already have it installed, rather than downloading Chromium all over again.

  1. Install & link google-chrome:

# On Mac:
# If you already have Google Chrome in /Applications/, skip this brew command
brew cask install google-chrome
brew install wget python3

echo -e '#!/bin/bash\n/Applications/Google\\ Chrome "$@"' > /usr/local/bin/google-chrome
chmod +x /usr/local/bin/google-chrome

# On Linux:
wget -q -O - | sudo apt-key add -
sudo sh -c 'echo "deb [arch=amd64] stable main" >> /etc/apt/sources.list.d/google-chrome.list'
apt update; apt install google-chrome-beta python3 wget
  2. Set the environment variable CHROME_BINARY to google-chrome before running:
env CHROME_BINARY=google-chrome ./ ~/Downloads/bookmarks_export.html

If you're having any trouble trying to set up Google Chrome or Chromium, see the Troubleshooting section below.


Troubleshooting

Python:

On some Linux distributions the python3 package might not be recent enough. If this is the case for you, resort to installing a recent enough version manually.

add-apt-repository ppa:fkrull/deadsnakes && apt update && apt install python3.6

If you still need help, the official Python docs are a good place to start.

Chromium/Google Chrome: the script depends on being able to access a chromium-browser/google-chrome executable. The executable used defaults to chromium-browser but can be manually specified with the environment variable CHROME_BINARY:

env CHROME_BINARY=/usr/local/bin/chromium-browser ./ ~/Downloads/bookmarks_export.html
  1. Test to make sure you have Chrome on your $PATH with:
which chromium-browser || which google-chrome

If no executable is displayed, follow the setup instructions to install and link one of them.

  2. If a path is displayed, the next step is to check that it's runnable:
chromium-browser --version || google-chrome --version

If no version is displayed, try the setup instructions again, or confirm that you have permission to access chrome.

  3. If a version is displayed and it's <59, upgrade it:
apt upgrade chromium-browser -y
# OR
brew cask upgrade chromium-browser
  4. If a version is displayed and it's >=59, make sure the script is running the right one:
env CHROME_BINARY=/path/from/step/1/chromium-browser ./ bookmarks_export.html   # replace the path with the one you got from step 1

Wget & Curl:

If you're missing wget or curl, simply install them using apt or your package manager of choice. See the "Manual Setup" instructions for more details.

If wget times out or randomly fails to download some sites that you have confirmed are online, upgrade wget to the most recent version with brew upgrade wget or apt upgrade wget. There is a bug in versions <=1.19.1_1 that caused wget to fail for perfectly valid sites.


No links parsed from export file:

Please open an issue with a description of where you got the export, and preferably your export file attached (you can redact the links). We'll fix the parser to support your format.

Lots of skipped sites:

If you've already run the archiver once, it won't re-download sites on subsequent runs; it only downloads new links. If you haven't already run it, make sure you have a working internet connection and that the parsed URLs look correct. You can check the output or index.html to see what links it's downloading.

If you're still having issues, try deleting or moving the service/archive folder and running again.
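
The skip behavior boils down to checking whether a link's timestamped output folder already exists before fetching it; a hypothetical sketch of that logic (the folder layout shown is an assumption):

```python
import os

def needs_download(link, archive_root='service/archive'):
    """A link is only (re)fetched if its timestamped output folder
    doesn't exist yet; deleting the folder forces a re-download."""
    out_dir = os.path.join(archive_root, link['timestamp'])
    return not os.path.exists(out_dir)

link = {'timestamp': '1498800000', 'url': 'https://example.com'}
```

This is also why deleting or moving the archive folder causes everything to be fetched again from scratch.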

Lots of errors:

Make sure you have all the dependencies installed and that you're able to visit the links from your browser normally. Open an issue with a description of the errors if you're still having problems.

Lots of broken links from the index:

Not all sites can be effectively archived with each method; that's why it's best to use a combination of wget, PDFs, and screenshots. If it seems like more than 10-20% of the sites in the archive are broken, open an issue with some of the URLs that failed to be archived and I'll investigate.

Hosting the Archive

If you're having issues trying to host the archive via nginx, make sure you already have nginx running with SSL. If you don't, google around; there are plenty of tutorials to help get that set up. Open an issue if you have problems with a particular nginx config.


If you feel like contributing a PR, some of these tasks are pretty easy. Feel free to open an issue if you need help getting started in any way!

  • download closed-captions text from youtube videos
  • body text extraction using fathom
  • auto-tagging based on important extracted words
  • audio & video archiving with youtube-dl
  • full-text indexing with elasticsearch/elasticlunr/ag
  • video closed-caption downloading for full-text indexing video content
  • automatic text summaries of article with summarization library
  • feature image extraction
  • http support (from my https-only domain)
  • try wgetting dead sites from archive.org
  • live updating from pocket/pinboard

It's possible to pull links via the Pocket API or public Pocket RSS feeds instead of downloading an html export. Once I write a script to do that, we can stick this in cron and have it auto-update on its own.
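
Once the links arrive as a feed, extracting them is straightforward; a sketch that parses RSS with the standard library (the feed layout here is an assumption, exercised against an inline sample rather than a live Pocket feed):

```python
import xml.etree.ElementTree as ET

SAMPLE_FEED = """<?xml version="1.0"?>
<rss version="2.0"><channel>
  <item><title>Example</title><link>https://example.com</link></item>
</channel></rss>"""

def links_from_rss(feed_xml):
    """Pull (title, url) pairs out of an RSS feed string, e.g. one
    fetched with urllib from a public bookmark feed."""
    root = ET.fromstring(feed_xml)
    return [(item.findtext('title'), item.findtext('link'))
            for item in root.iter('item')]

entries = links_from_rss(SAMPLE_FEED)
```

A cron-driven version would fetch the feed, run the extracted links through the archiver, and rely on the skip-already-archived behavior to stay fast.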

For now you just have to download ril_export.html and run the script each time it updates. It runs fast on subsequent passes because it only downloads new links that haven't been archived already.


Changelog

  • proper HTML templating instead of format strings (thanks to!)
  • refactored into separate files, wip audio & video archiving
  • v0.0.1 released
  • Index links now work without nginx url rewrites, archive can now be hosted on github pages
  • added script & docstrings & help commands
  • made Chromium the default instead of Google Chrome (yay free software)
  • added env-variable configuration (thanks to!)
  • renamed from Pocket Archive Stream -> Bookmark Archiver
  • added Netscape-format export support (thanks to!)
  • added Pinboard-format export support (thanks to!)
  • front-page of HN, oops! apparently I have users to support now :grin:?
  • added Pocket-format export support
  • v0.0.0 released: created Pocket Archive Stream 2017/05/05