Awesome Web Archiving 
Web archiving is the process of collecting portions of the World Wide
Web to ensure the information is preserved in an archive for future
researchers, historians, and the public. Web archivists typically employ
Web crawlers for automated capture due to the massive scale of the Web.
Because Web standards evolve constantly, archiving tools must evolve in step with Web technologies to ensure reliable and meaningful capture and replay of archived web pages.
Contents
Training/Documentation
- Introductions to web archiving concepts:
- Training materials:
- The WARC Standard:
- For researchers using web archives:
Resources for Web Publishers
These resources can help when working with individuals or
organisations who publish on the web, and who want to make sure their
site can be archived.
Tools & Software
This list of tools and software is intended to briefly describe some
of the most important and widely used tools related to web archiving.
For more details, we recommend you refer to (and contribute to!) these
excellent resources from other groups:
Acquisition
- ArchiveBox - A
tool which maintains an additive archive from RSS feeds, bookmarks, and
links using wget, Chrome headless, and other methods (formerly
Bookmark Archiver). (In Development)
- archivenow - A
Python
library to push web resources into on-demand web archives.
(Stable)
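  As a minimal sketch (following the project's README; `pip install archivenow` assumed), pushing a page to the Internet Archive looks like:

  ```python
  # Push a URL to the Internet Archive ("ia" handler) via archivenow.
  from archivenow import archivenow

  # push() returns a list of resulting archive URIs (or error strings).
  print(archivenow.push("https://example.com", "ia"))
  ```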
- ArchiveWeb.Page - A
plugin for Chrome and other Chromium-based browsers that lets you
interactively archive web pages, replay them, and export them as WARC
& WACZ files. Also available as an Electron-based desktop
application.
- Auto
Archiver - Python script to automatically archive social media
posts, videos, and images from a Google Sheets document. Read the article
about Auto Archiver on bellingcat.com.
- Browsertrix
Crawler - A Chromium-based, high-fidelity crawling system designed
to run a complex, customizable browser-based crawl in a single Docker
container. (Stable)
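  A one-off crawl, per the project's Docker instructions (image and flag names from its README; output paths can vary by version):

  ```sh
  # Crawl a single site and package the result as a WACZ under ./crawls/.
  docker run -v $PWD/crawls:/crawls/ -it webrecorder/browsertrix-crawler \
    crawl --url https://example.com/ --generateWACZ --collection example
  ```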
- Brozzler -
A distributed web crawler that uses a real browser (Chrome or
Chromium) to fetch pages and embedded URLs and to extract links.
(Stable)
- Cairn - An npm package
and CLI tool for saving webpages. (Stable)
- Chronicler -
Web browser with record and replay functionality. (In
Development)
- Community Archive -
Open Twitter Database and API with tools and resources for building on
archived Twitter data.
- crau - crau is how
(most) Brazilians pronounce "crawl". It's a simple command-line tool
for archiving the Web and replaying archives: you just need a list of
URLs. (Stable)
- Crawl - A simple
web crawler in Golang. (Stable)
- crocoite - Crawl
websites using headless Google Chrome/Chromium and save resources,
static DOM snapshot and page screenshots to WARC files. (In
Development)
- DiskerNet - A
non-WARC-based tool which hooks into the Chrome browser and archives
everything you browse, making it available for offline replay. (In
Development)
- F(b)arc - A
command-line tool and Python library for archiving data from Facebook using the Graph API.
(Stable)
- freeze-dry -
JavaScript library to turn a page into a static, self-contained HTML
document; useful for browser extensions. (In Development)
- grab-site -
The archivist’s web crawler: WARC output, dashboard for all crawls,
dynamic ignore patterns. (Stable)
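  A minimal session might look like the following (the dashboard address is the project's documented default; treat it as an assumption):

  ```sh
  gs-server &                     # start the crawl dashboard (default http://127.0.0.1:29000/)
  grab-site https://example.com/  # crawl; WARCs land in a per-crawl directory
  ```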
- Heritrix -
An open source, extensible, web-scale, archival quality web crawler.
(Stable)
- html2warc -
A simple script to convert offline data into a single WARC file.
(Stable)
- HTTrack - An open source
website copying utility. (Stable)
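  A typical invocation, mirroring one site into a local directory (filter pattern syntax from the HTTrack manual):

  ```sh
  # Mirror example.com into ./example-mirror, staying on that host.
  httrack "https://example.com/" -O ./example-mirror "+example.com/*"
  ```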
- monolith - CLI tool to
save a web page as a single HTML file. (Stable)
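  For example:

  ```sh
  # Save a page with its assets embedded into one self-contained HTML file.
  monolith https://example.com/ -o example.html
  ```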
- Obelisk - Go
package and CLI tool for saving a web page as a single HTML file.
(Stable)
- Scoop -
High-fidelity, browser-based, single-page web archiving library and CLI
for witnessing the web. (Stable)
- SingleFile -
Browser extension for Firefox/Chrome and CLI tool to save a faithful
copy of a complete page as a single HTML file. (Stable)
- SiteStory - A
transactional archive that selectively captures and stores transactions
that take place between a web client (browser) and a web server.
(Stable)
- Social Feed
Manager - Open source software that enables users to create social
media collections from Twitter, Tumblr, Flickr, and Sina Weibo public
APIs. (Stable)
- Squidwarc - An
open
source, high-fidelity, page-interacting archival crawler that uses
Chrome or Chrome Headless directly. (In Development)
- StormCrawler - A collection
of resources for building low-latency, scalable web crawlers on Apache
Storm. (Stable)
- twarc - A command line
tool and Python library for archiving Twitter JSON data.
(Stable)
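  A sketch of the v2 command-line workflow (requires Twitter/X API credentials; command names from the twarc2 documentation):

  ```sh
  twarc2 configure                            # store your bearer token
  twarc2 search "#webarchiving" tweets.jsonl  # recent search -> line-oriented JSON
  ```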
- WAIL - A graphical
user interface (GUI) atop multiple web archiving tools intended to be
used as an easy way for anyone to preserve and replay web pages; Python, Electron.
(Stable)
- Warcprox -
WARC-writing MITM HTTP/S proxy. (Stable)
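  A rough sketch of running it locally (the flag names here are assumptions; check `warcprox --help`):

  ```sh
  warcprox -p 8000 -d ./warcs  # assumed: -p listen port, -d WARC output directory
  # Then point a browser at localhost:8000 as its HTTP/S proxy;
  # proxied traffic is written to WARC files under ./warcs.
  ```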
- WARCreate - A Google Chrome
extension for archiving an individual webpage or website to a WARC file.
(Stable)
- Warcworker - An
open source, dockerized, queued, high fidelity web archiver based on
Squidwarc with a simple web GUI. (Stable)
- Wayback - A toolkit
for snapshotting webpages to the Internet Archive, archive.today, IPFS, and
beyond. (Stable)
- Waybackpy -
Wayback Machine Save, CDX, and availability API interface in Python, and a
command-line tool. (Stable)
- Web2Warc - An
easy-to-use and highly customizable crawler that enables anyone to
create their own little Web archives (WARC/CDX). (Stable)
- Web Curator Tool -
Open-source workflow management for selective web archiving.
(Stable)
- WebMemex - Browser
extension for Firefox and Chrome which lets you archive web pages you
visit. (In Development)
- Wget - An open
source file retrieval utility that, as of version
1.14, supports writing WARCs. (Stable)
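  For example, a shallow recursive fetch that also records a WARC:

  ```sh
  # Writes example.warc.gz alongside the normal mirror output.
  wget --warc-file=example --recursive --level=1 https://example.com/
  ```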
- Wget-lua - Wget with
Lua extension. (Stable)
- Wpull - A
Wget-compatible (or remake/clone/replacement/alternative) web downloader
and crawler. (Stable)
Replay
- InterPlanetary Wayback
(ipwb) - Web Archive (WARC) indexing and replay using IPFS.
- OpenWayback - The
open source project aimed at developing the Wayback Machine, the key software
used by web archives worldwide to play back archived websites in the
user’s browser. (Stable)
- PYWB - A Python 3
implementation of web archival replay tools, sometimes also known as
‘Wayback Machine’. (Stable)
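  A minimal local replay setup, following the pywb documentation:

  ```sh
  pip install pywb
  wb-manager init my-collection            # create a collection
  wb-manager add my-collection my.warc.gz  # index a WARC into it
  wayback                                  # browse at http://localhost:8080/
  ```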
- Reconstructive -
A ServiceWorker module for client-side reconstruction
of composite mementos by rerouting resource requests to corresponding
archived copies (JavaScript).
- ReplayWeb.page
- A browser-based, fully client-side replay engine for both local and
remote WARC & WACZ files. Also available as an Electron-based
desktop application. (Stable)
- warc2html - Converts
WARC files to static HTML suitable for browsing offline or
rehosting.
Search & Discovery
- hyphe - A web crawler
built for research use, with a graphical user interface, for building
web corpora made of lists of web actors and maps of the links between
them. (Stable)
- Mink - A Google Chrome
extension for querying Memento aggregators while browsing and
integrating live-archived web navigation. (Stable)
- PANDORÆ
- Desktop research software that plugs into a Solr endpoint to query,
retrieve, normalize, and visually explore web archives.
(Stable)
- playback - A
toolkit for searching archived webpages from
Internet
Archive, archive.today, Memento and beyond. (In
Development)
- SecurityTrails - Web-based
archive for WHOIS and DNS records. REST API available free of
charge.
- Tempas v1 - Temporal web
archive search based on Delicious
tags. (Stable)
- Tempas v2 - Temporal web
archive search based on links and anchor texts extracted from the German
web from 1996 to 2013 (results are not limited to German pages, e.g., Obama@2005-2009
in Tempas). (Stable)
- webarchive-discovery
- WARC and ARC full-text indexing and discovery tools, with a number of
associated tools, shown below, that can use the index.
(Stable)
- Shine - A prototype web
archives exploration UI, developed with researchers as part of the Big UK Domain Data for the
Arts and Humanities project. (Stable)
- SolrWayback -
A Java backend and Vue.js frontend project with free-text search and a
built-in playback engine. Requires WARC files to have been indexed with the
Warc-Indexer. The web application also has a wide range of data
visualization and data export tools that can be used on the whole
web archive. The SolrWayback
4 Bundle release contains all the software and dependencies in an
out-of-the-box solution that is easy to install.
- Warclight - A
Project Blacklight-based Rails engine that supports the discovery of web
archives held in the WARC and ARC formats. (In
Development)
- Wasp - A fully
functional prototype of a personal web archive and search
system. (In Development)
- Other possible options for building a front-end are listed in the
webarchive-discovery wiki.
Utilities
- ArchiveTools -
Collection of tools to extract and interact with WARC files
(Python).
- cdx-toolkit -
Library and CLI to consult cdx indexes and create WARC extractions of
subsets. Abstracts away Common Crawl’s unusual crawl structure.
(Stable)
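  A sketch of iterating Common Crawl captures for a URL pattern (class and field names follow the project's README):

  ```python
  import cdx_toolkit

  cdx = cdx_toolkit.CDXFetcher(source='cc')  # 'cc' = Common Crawl, 'ia' = Internet Archive
  for capture in cdx.iter('commoncrawl.org/*', limit=5):
      print(capture['timestamp'], capture['status'], capture['url'])
  ```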
- Go Get Crawl -
Extract web archive data using Wayback Machine and Common Crawl. (Stable)
- gowarcserver -
BadgerDB-based capture
index (CDX) and WARC record server, used to index and serve WARC files
(Go).
- har2warc -
Convert HTTP Archive (HAR) -> Web Archive (WARC) format (Python).
- httpreserve.info - Service to
return the status of a web page or save it to the Internet Archive.
HTTPreserve includes disambiguation of well-known short-link services.
It returns JSON in the browser, or on the command line via curl using GET.
It describes websites using the earliest and latest dates in the Internet
Archive and demonstrates the construction of Robust Links in its output
using that range. (Golang). (Stable)
- HTTPreserve
linkstat - Command line implementation of
httpreserve.info to describe the
status of a web page. Can be easily scripted and provides JSON output to
enable querying through tools like JQ. HTTPreserve Linkstat describes
current status, and earliest and latest links on
archive.org. (Golang).
(Stable)
- Internet Archive
Library - A command line tool and Python library for interacting
directly with archive.org. (Python).
(Stable)
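  For example, downloading only the WARC files of an item (the identifier is a placeholder):

  ```python
  from internetarchive import get_item

  item = get_item('some-item-identifier')
  item.download(glob_pattern='*.warc.gz')  # fetch matching files only
  ```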
- httrack2warc -
Convert HTTrack archives to WARC format (Java).
- MementoMap - A
tool to summarize web archive holdings (Python). (In
Development)
- MemGator - A
Memento Aggregator CLI and Server (Golang). (Stable)
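  For example, fetching a TimeMap of known mementos for a page (the --format flag is taken from MemGator's CLI help; verify locally):

  ```sh
  memgator --format=json https://example.com/
  ```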
- node-cdxj - CDXJ file parser
(Node.js). (Stable)
- OutbackCDX -
RocksDB-based capture index (CDX) server supporting incremental updates
and compression. Can be used as a backend for OpenWayback, pywb, and Heritrix.
(Stable)
- py-wasapi-client
- Command line application to download crawls from WASAPI (Python).
(Stable)
- The Unarchiver - Program to
extract the contents of many archive formats, including WARC, to a
file system. Free variant of The Archive Browser (macOS only,
proprietary app).
- tikalinkextract
- Extract hyperlinks as a seed for web archiving from folders of
document types that can be parsed by Apache Tika (Golang, Apache Tika
Server). (In Development)
- wasapi-downloader
- Java command line application to download crawls from WASAPI.
(Stable)
- Warchaeology -
A collection of tools for inspecting, manipulating,
deduplicating, and validating WARC files. (Stable)
- warcdb - A
command line utility (Python) for importing WARC files into a SQLite
database. (Stable)
- warcdedupe -
WARC deduplication tool (and WARC library) written in Rust. (In
Development)
- warc-safe -
Automatic detection of viruses and NSFW content in WARC files.
- WarcPartitioner -
Partition (W)ARC Files by MIME Type and Year. (Stable)
- warcrefs - Web
archive deduplication tools. (Stable)
- webarchive-indexing
- Tools for bulk indexing of WARC/ARC files on Hadoop, EMR or local file
system.
- wikiteam - Tools
for downloading and preserving wikis. (Stable)
WARC I/O Libraries
- FastWARC
- A high-performance WARC parsing library (Python).
- HadoopConcatGz - A
splittable Hadoop InputFormat for concatenated GZIP files (and
*.warc.gz). (Stable)
- jwarc - Read and write
WARC files with a type safe API (Java).
- Jwat -
Libraries for reading/writing/validating WARC/ARC/GZIP files (Java).
(Stable)
- Jwat-Tools -
Tools for reading/writing/validating WARC/ARC/GZIP files (Java).
(Stable)
- node-warc -
Parse WARC files or create WARC files using either Electron or chrome-remote-interface
(Node.js). (Stable)
- Sparkling
- Internet Archive’s Sparkling Data Processing Library.
(Stable)
- Unwarcit -
Command line interface to unzip WARC and WACZ files (Python).
- Warcat - Tool and
library for handling Web ARChive (WARC) files (Python).
(Stable)
- Warcat-rs -
Command-line tool and Rust library for handling Web ARChive (WARC)
files. (In Development)
- warcio -
Streaming WARC/ARC library for fast web archive IO (Python).
(Stable)
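  Streaming over records, as in the warcio README:

  ```python
  from warcio.archiveiterator import ArchiveIterator

  with open('example.warc.gz', 'rb') as stream:
      for record in ArchiveIterator(stream):
          if record.rec_type == 'response':
              print(record.rec_headers.get_header('WARC-Target-URI'))
  ```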
- warctools
- Library to work with ARC and WARC files (Python).
- webarchive
- Golang readers for ARC and WARC webarchive formats (Golang).
Analysis
- Archives Research
Compute Hub - Web application for distributed compute analysis of
Archive-It web archive collections. (Stable)
- ArchiveSpark -
An Apache Spark framework (not only) for Web Archives that enables easy
data processing, extraction, and derivation.
(Stable)
- Archives
Unleashed Notebooks - Notebooks for working with web archives with
the Archives Unleashed Toolkit and the derivatives it generates.
(Stable)
- Archives
Unleashed Toolkit - Archives Unleashed Toolkit (AUT) is an
open-source platform for analyzing web archives with Apache Spark.
(Stable)
- Common Crawl
Columnar Index - SQL-queryable index, with CDX info plus language
classification. (Stable)
- Common Crawl
Web Graph - A host or domain-level graph of the web, with ranking
information. (Stable)
- Common Crawl
Jupyter notebooks - A collection of notebooks using Common Crawl’s
various datasets. (Stable)
- Tweet Archives
Unleashed Toolkit - An open-source toolkit for analyzing
line-oriented JSON Twitter archives with Apache Spark. (In
Development)
- Web Data Commons -
Structured data extracted from Common Crawl. (Stable)
Quality Assurance
Curation
Other Awesome Lists
Blogs and Scholarship
Mailing Lists
Slack
Web Archiving Service Providers
The intention is that we only list services that allow web archives
to be exported in standard formats (WARC or WACZ). But this is not an
endorsement of these services, and readers should check and evaluate
these options based on their needs.
Self-hostable, Open Source
Hosted, Closed Source