
Archiving the Web – The Bibliothèque nationale de France’s « L’archivage du Web »


Presentation Transcript


  1. Archiving the Web – The Bibliothèque nationale de France’s « L’archivage du Web » Bert Wendland Bibliothèque nationale de France

  2. Who I am / who we are
  • Bert Wendland: crawl engineer in the IT department of BnF
  • Semi-joint working group:
    • Legal Deposit department: 1 head of group, 4 librarians
    • IT department: 1 project coordinator, 1 developer, 2 crawl engineers
  • A network of 80 digital curators

  3.–6. [Image slides: Session 4 – Web archiving for decision-makers, 27th November 2012]

  7. Agenda
  • Context: I will present the BnF and web archiving as part of its legal mission.
  • Concepts: I will describe how we operationalise the task of collecting and preserving the French web in terms of data, and how this relates to the general web archive at www.archive.org.
  • Infrastructure: I will give an overview of the infrastructure that supports this task.
  • Data acquisition: I will describe our mixed model of web harvesting, which combines broad crawls and selective crawls to achieve a good trade-off between breadth and depth of coverage and temporal granularity.
  • Data storage and access: I will describe the indexing structures that allow users to query this web archive.

  8. Context: The BnF and web archiving as part of its legal mission

  9. The BnF
  • Bibliothèque nationale de France
  • About 30 million books, periodicals and other documents; 10 million of them at the new site
  • 60,000 new books every year
  • 400 TB of data in the web archive; 100 TB of new data every year
  • Two sites: the old site « Richelieu » in the centre of Paris and the new site « François-Mitterrand », opened in 1996
  • Two levels at the new site:
    • Study library (« Haut-de-jardin »): open stacks
    • Research library (« Rez-de-jardin »): access to all collections, including the web archives

  10. The legal deposit
  • 1368: royal manuscripts of king Charles V in the Louvre
  • 1537: legal deposit established by king Francis I; all editors must send copies of their productions to the royal library
  • 1648: extended to maps and plans
  • 1793: musical scores
  • 1925: photographs and gramophone records
  • 1975: video recordings
  • 1992: CD-ROMs and electronic documents
  • 2002: websites (experimentally)
  • 2006: websites (in production)

  11. Extension of the Legal Deposit Act in 2006
  • Coverage (article 39): « Signs, signals, writings, images, sounds or messages of any kind that are communicated to the public by electronic means are also subject to legal deposit. »
  • Conditions (article 41 II): « The depository institutions collect the signs, signals, writings, images, sounds or messages of any kind made available to the public or to categories of the public, … They may carry out this collection themselves using automatic procedures, or define its terms in agreement with these persons. »
  • Responsibilities (article 50): INA (Institut national de l'audiovisuel) for radio and TV websites, BnF for everything else
  • No permission is required to collect, but access to the archive is restricted to on-site use
  • The goal is not to gather everything or the “best of the Web”, but to preserve a representative collection of the Web at a certain date

  12. Concepts: How we collect and preserve the French web

  13. [Image slide]

  14. The Internet Archive
  • Non-profit organisation, founded in 1996 by Brewster Kahle in San Francisco
  • Stated mission of “universal access to all knowledge”
  • Websites, but also other media like scanned books, movies, audio collections, …
  • Web archiving from the beginning, only 4 years after the start of the WWW
  • Main technologies for web archiving:
    • Heritrix: the crawler
    • Wayback Machine: access to the archive

  15. Partnership BnF – IA
  • A five-year partnership between 2004 and 2008
  • Data: 2 focused crawls and 5 broad crawls on behalf of BnF; extraction of historical Alexa data concerning .fr back to 1996
  • Technology: Heritrix, the Wayback Machine, 5 Petaboxes
  • Know-how: installation of the Petaboxes by engineers of IA; presence of an IA crawl engineer one day a week for 6 months

  16. How search engines work
  Archiving the Web means archiving the files, the links and some metadata. Source: www.brightplanet.com

  17. How the web crawler works
  The web crawler (“Heritrix”) starts from a queue of seed URLs (e.g. http://www.site-untel.fr, http://www.monblog.fr, …). For each URL in the queue it connects to the page, stores the data and extracts the links. Every discovered URL (e.g. http://www.unautre-site.fr, http://www.autre-blog.fr, …) is verified against the crawl parameters: if it passes, it joins the queue and goes through the same cycle; if not, the URL is rejected.
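
  To make the cycle concrete, here is a minimal sketch of such a crawl loop in Python. It is illustrative only: the scope rule (keep only .fr hosts) and the helper names are assumptions for the example, and a real crawler like Heritrix adds politeness delays, robots.txt handling and WARC storage on top of this.

      from collections import deque
      from urllib.parse import urljoin, urlparse
      import re
      import urllib.request

      SEEDS = ["http://www.site-untel.fr", "http://www.monblog.fr"]  # seeds from the slide
      LINK_RE = re.compile(r'href="([^"]+)"')  # naive link extractor for HTML

      def in_scope(url):
          """Stand-in for the verification of parameters: keep only .fr hosts."""
          host = urlparse(url).hostname
          return host is not None and host.endswith(".fr")

      frontier = deque(SEEDS)  # the queue of URLs
      seen = set(SEEDS)
      archive = {}             # stand-in for real WARC storage

      while frontier:
          url = frontier.popleft()
          try:
              with urllib.request.urlopen(url, timeout=10) as resp:  # connection to the page
                  body = resp.read()
          except OSError:
              continue
          archive[url] = body  # storing the data
          for link in LINK_RE.findall(body.decode("utf-8", "replace")):  # extraction of links
              absolute = urljoin(url, link)
              if in_scope(absolute) and absolute not in seen:
                  seen.add(absolute)
                  frontier.append(absolute)
              # otherwise the URL is rejected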

  18. Current production workflow
  Selection (BCWeb) → Validation → Planning (NAS_preload, NetarchiveSuite) → Crawling (Heritrix, on VMware) → Monitoring (NetarchiveSuite) → Quality Assurance (NAS_qual) → Experience → Indexing (Indexing Process) → Preservation (SPAR) → Access (Wayback Machine)

  19. [Image slide]

  20. Applications
  • BCWeb (“BnF Collecte du Web”)
    • BnF in-house development
    • Selection tool for librarians: proposal of URLs to collect in selective crawls
    • Technical validation of the URLs by digital curators
    • Definition of collection packages
    • Transfer to NetarchiveSuite
  • NAS_preload (“NetarchiveSuite Pre-Load”)
    • BnF in-house development
    • Preparation of broad crawls, based on the list of officially registered domains provided by AFNIC

  21. Applications
  • NetarchiveSuite
    • Open source application
    • Collaborative work of BnF, the two national deposit libraries in Denmark (the Royal Library in Copenhagen and the State and University Library in Aarhus) and the Austrian National Library (ÖNB)
    • Central and main application of the archiving process: planning the crawls, creating and launching jobs, monitoring, quality assurance, experience evaluation

  22. Applications
  • Heritrix
    • Open source application by Internet Archive
    • Its name is an archaic English word for heiress (a woman who inherits)
    • A crawl is configured as a job in Heritrix, which consists mainly of:
      • a list of URLs to start from (the seeds)
      • a scope (collect all URLs in the domain of a seed, stay on the same host, only a particular web page, etc.)
      • a set of filters to exclude unwanted URLs from the crawl
      • a list of extractors (to extract URLs from HTML, CSS, JavaScript)
      • many other technical parameters, for instance to define the “politeness” of a crawl or whether or not to obey a website’s robots.txt file
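
  To illustrate, the ingredients of such a job can be modelled as a small data structure. This is a hypothetical Python sketch, not Heritrix’s actual configuration format (real jobs are defined in XML configuration files); all field names are invented for clarity.

      from dataclasses import dataclass, field

      @dataclass
      class CrawlJob:
          # Hypothetical model of a Heritrix job; field names are illustrative.
          seeds: list                  # URLs to start from
          scope: str = "domain"        # "domain", "host" or "page"
          filters: list = field(default_factory=list)        # patterns of URLs to exclude
          extractors: tuple = ("html", "css", "javascript")  # where to look for new URLs
          delay_seconds: float = 5.0                         # "politeness" between requests to a host
          obey_robots_txt: bool = True

      job = CrawlJob(
          seeds=["http://www.example.fr/"],
          filters=[r"\.exe$", r"/calendar/"],  # e.g. skip binaries and crawler traps
      )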

  23. Applications
  • The Wayback Machine
    • Open source application by Internet Archive
    • Gives access to the archived data
  • SPAR (“Système de Préservation et d’Archivage Réparti”)
    • Not really an application, but the BnF’s digital repository
    • Long-term preservation system for digital objects, compliant with the OAIS (Open Archival Information System) standard, ISO 14721

  24. Applications
  • NAS_qual (“NetarchiveSuite Quality Assurance”)
    • BnF in-house development
    • Indicators and statistics about the crawls
  • The Indexing Process
    • A chain of shell scripts, developed in-house by BnF, that builds the indexes giving access to the archive
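
  The real chain is shell scripts, but its core idea (turning crawled WARC files into index lines that the Wayback Machine can look up) can be sketched in Python with the warcio library. The file name and the simplified output format are assumptions for the example, not BnF’s actual scripts.

      from warcio.archiveiterator import ArchiveIterator

      def cdx_lines(warc_path):
          """Yield one simplified CDX-style index line per archived HTTP response."""
          with open(warc_path, "rb") as stream:
              for record in ArchiveIterator(stream):
                  if record.rec_type != "response":
                      continue
                  url = record.rec_headers.get_header("WARC-Target-URI")
                  timestamp = record.rec_headers.get_header("WARC-Date")
                  status = record.http_headers.get_statuscode()
                  mime = (record.http_headers.get_header("Content-Type") or "-").split(";")[0]
                  digest = record.rec_headers.get_header("WARC-Payload-Digest") or "-"
                  yield f"{url} {timestamp} {status} {mime} {digest}"

      for line in cdx_lines("crawl-00001.warc.gz"):  # hypothetical file name
          print(line)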

  25. Data and process model [diagram]

  26. Daily operations: same steps, different actions
  • Curators
    • Monitoring: dashboard in NetarchiveSuite, filters in Heritrix, answers to webmasters’ requests
    • Quality assurance: analysis of indicators, visual control in the Wayback Machine
    • Experience: reports on harvests concerning contents and website descriptions
  • Engineers
    • Monitoring: dashboard in Nagios, operation of virtual machines, information to give to webmasters
    • Quality assurance: production of indicators
    • Experience: reports on harvests concerning IT operations

  27. Challenges
  • What is the French web? Not only .fr, but also .com or .org
  • Some data remain difficult to harvest: streaming, databases, videos, JavaScript, dynamic web pages, contents protected by passwords, complex instructions for Dailymotion, paid contents for newspapers

  28. Infrastructure: The machines that support the task

  29. Platforms [diagram]
  A platform consists of Linux machines running the applications: a pilot (running NAS), an indexer master, several indexers and a PostgreSQL database.

  30. Platforms
  • Operational Platform (PFO): 1 pilot, 1 indexer master, 2 to 10 indexers, 20 to 70 crawlers; a variable and scalable number of computers
  • Trial Run Platform (MAB, « Marche À Blanc »): identical in setup to the PFO, it aims to simulate and test harvests in real conditions for our curator team; its size is also variable and subject to change
  • Pre-production Platform (PFP): a technical test platform for the use of our engineering team

  31. Platforms: the hypervisor
  Our needs:
  • flexibility regarding the number of crawlers allocated to a platform
  • sharing and optimisation of hardware resources
  • all classical needs of production environments, such as robustness and reliability
  The solution: virtualisation!
  • virtual computers
  • configuration « templates »
  • grouping of the computers in a resource pool
  • automatic management of all shared resources

  32. The DL-WEB cluster [diagram: nine physical servers sharing their resources in the DL-WEB cluster]

  33. Dive into the hardware
  Each machine holds 2 × 9 RAM modules of 4 GB = 72 GB of RAM and has 2 sockets with 1 CPU each; every CPU has 2 cores with 4 threads, for a total of 16 logical CPUs per machine.

  34. Physical machines
  Per machine: 2 × 9 × 4 GB = 72 GB of RAM and 2 × 2 × 4 = 16 logical CPUs. Across the 9 machines: 9 × 72 = 648 GB of RAM and 9 × 16 = 144 logical CPUs.

  35. Park of virtual machines

  36. Distributed Resource Scheduler (DRS) and vMotion
  • A virtual machine is hosted on a single physical server at a given time.
  • If the load of the VMs hosted on one of the servers becomes too heavy, some of the VMs are moved onto another host, dynamically and without interruption.
  • If one of the hosts fails, all the VMs hosted on this server are moved to other hosts and rebooted.
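
  A toy sketch of the rebalancing idea (not VMware’s actual algorithm): when a host’s total load exceeds a threshold, its lightest VM is migrated to the least-loaded host. All host names, VM names and numbers below are hypothetical.

      THRESHOLD = 0.8  # maximum fraction of a host's capacity we tolerate

      hosts = {  # host -> {vm_name: load as a fraction of host capacity}
          "esx1": {"pilot": 0.30, "crawler-01": 0.35, "crawler-02": 0.25},
          "esx2": {"indexer-01": 0.20},
          "esx3": {"crawler-03": 0.40},
      }

      def rebalance(hosts):
          for host, vms in hosts.items():
              while sum(vms.values()) > THRESHOLD and len(vms) > 1:
                  vm = min(vms, key=vms.get)  # lightest VM on the overloaded host
                  target = min(hosts, key=lambda h: sum(hosts[h].values()))  # most spare capacity
                  if target == host:
                      break  # nowhere better to move it
                  hosts[target][vm] = vms.pop(vm)  # the "live migration"

      rebalance(hosts)
      print(hosts)  # crawler-02 has moved from esx1 to esx2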

  37. Fault Tolerance (FT)
  • An active copy of the FT VM runs on another server
  • If the server hosting the master VM fails, the ghost VM instantly takes over without interruption
  • A copy is then created on a third server
  • The other VMs are moved and restarted
  Fault Tolerance can be quite greedy with resources, especially in terms of network consumption. That is why we have activated this functionality only for the pilot machine.

  38. Data acquisition: Our mixed model of web harvesting

  39. BnF “mixed model” of harvesting [chart: number of websites over the calendar year]
  • Broad crawls: once a year, .fr domains and beyond
  • Project crawls: one-shots, related to an event or a theme
  • Ongoing crawls: running throughout the year, news or reference websites

  40. Aggregation of a large number of sources
  In 2012:
  • 2.4 million domains in .fr and .re, provided by AFNIC (Association française pour le nommage Internet en coopération – the French domain name allocation authority)
  • 3,000 domains in .nc, provided by OPT-NC (Office des postes et télécommunications de Nouvelle-Calédonie – the posts and telecommunications office of New Caledonia)
  • 2.6 million domains already present in the NetarchiveSuite database
  • 13,000 domains from the selection of URLs by BnF librarians (in BCWeb)
  • 6,000 domains from other workflows of the Library that contain URLs as part of the metadata: publishers’ declarations for books and periodicals, the BnF catalogue, identification of new periodicals by librarians, print periodicals that move to online publishing, and others
  After de-duplication, this generated a list of 3.3 million unique domains.
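
  The aggregation itself is straightforward set arithmetic. A minimal sketch, assuming one plain-text file of domain names per source (all file names are hypothetical):

      def load(path):
          """Read one domain name per line, normalised to lower case."""
          with open(path, encoding="utf-8") as f:
              return {line.strip().lower() for line in f if line.strip()}

      sources = [
          "afnic_fr_re.txt",         # 2.4 M domains in .fr and .re from AFNIC
          "optnc_nc.txt",            # 3,000 domains in .nc from OPT-NC
          "netarchivesuite_db.txt",  # 2.6 M domains already in the database
          "bcweb_selections.txt",    # 13,000 domains selected by librarians
          "other_workflows.txt",     # 6,000 domains from catalogue metadata etc.
      ]

      domains = set()
      for path in sources:
          domains |= load(path)  # set union removes duplicates across sources

      print(f"{len(domains)} unique domains")  # 3.3 million in 2012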

  41. Volume of collections
  • Seven broad crawls since 2004
  • 1996–2005 collections thanks to the Internet Archive
  • Tens of thousands of focus-crawled websites since 2002
  • Total size: 20 billion URLs, 400 Terabytes
