Web archiving

Web archiving is the process of collecting the World Wide Web, or particular portions of it, and ensuring the collection is preserved in an archive, such as an archive site, for future researchers, historians, and the public. Because of the massive size of the Web, web archivists typically employ web crawlers for automated collection. The largest web archiving organization is the Internet Archive, which strives to maintain an archive of the entire Web. National libraries, national archives and various consortia of organizations are also involved in archiving culturally important Web content.

Collecting the Web

Web archivists generally archive all types of web content, including HTML web pages, style sheets, JavaScript, images, and video. They also archive metadata about the collected resources, such as access time, MIME type, and content length. This metadata is useful in establishing the authenticity and provenance of the archived collection.
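
The metadata capture described above can be illustrated with a short sketch. The following Python fragment is illustrative only; the URL, file names and field choices are assumptions made for the example, not part of any particular archiving tool. It fetches one resource and records its access time, MIME type and content length alongside the payload:

    import json
    import urllib.request
    from datetime import datetime, timezone

    def archive_resource(url, body_path, meta_path):
        # Fetch the resource and note when it was accessed.
        with urllib.request.urlopen(url) as response:
            body = response.read()
            metadata = {
                "url": url,
                "access_time": datetime.now(timezone.utc).isoformat(),
                "mime_type": response.headers.get("Content-Type"),
                "content_length": len(body),
            }
        # Store the payload and its provenance metadata side by side.
        with open(body_path, "wb") as f:
            f.write(body)
        with open(meta_path, "w") as f:
            json.dump(metadata, f, indent=2)

    archive_resource("http://example.org/", "page.html", "page.meta.json")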


Methods of collection

Remote harvesting

The most common web archiving technique uses web crawlers to automate the process of collecting web pages. Web crawlers typically view web pages in the same manner that users with a browser see the Web, and therefore provide a comparatively simple method of remotely harvesting web content. Examples of web crawlers frequently used for web archiving include:

  • Heritrix, the Internet Archive's web crawler, which was specially designed for web archiving.
  • HTTrack, a website copier licensed under the GNU General Public License.
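
The crawlers above are production tools; the following minimal Python sketch only illustrates the remote-harvesting idea. The seed URL, page limit and same-host rule are assumptions made for the example, and the fragment keeps pages in memory rather than writing a real archive:

    import urllib.request
    from html.parser import HTMLParser
    from urllib.parse import urljoin, urlparse

    class LinkExtractor(HTMLParser):
        """Collect the href targets of anchor tags on a page."""
        def __init__(self):
            super().__init__()
            self.links = []
        def handle_starttag(self, tag, attrs):
            if tag == "a":
                for name, value in attrs:
                    if name == "href" and value:
                        self.links.append(value)

    def crawl(seed, max_pages=50):
        host = urlparse(seed).netloc
        seen, queue, pages = set(), [seed], {}
        while queue and len(pages) < max_pages:
            url = queue.pop(0)
            if url in seen:
                continue
            seen.add(url)
            try:
                with urllib.request.urlopen(url) as resp:
                    html = resp.read().decode("utf-8", errors="replace")
            except OSError:
                continue                      # skip unreachable resources
            pages[url] = html                 # the harvested page
            extractor = LinkExtractor()
            extractor.feed(html)
            for link in extractor.links:
                absolute = urljoin(url, link)
                if urlparse(absolute).netloc == host:
                    queue.append(absolute)    # stay within the seed site
        return pages

    harvested = crawl("http://example.org/")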

On-demand

Numerous services may be used to archive web resources "on-demand", using web crawling techniques (a minimal submission sketch follows the list):

  • WebCite, a service specifically for scholarly authors, journal editors and publishers to permanently archive and retrieve cited Internet references (Eysenbach and Trudel, 2005).
  • Archive-It, a subscription service that allows institutions to build, manage and search their own web archives.
  • hanzo:web, a personal web archiving service created by Hanzo Archives that can archive a single web resource, a cluster of web resources, or an entire website, either as a one-off collection, a scheduled/repeated collection, an RSS/Atom feed collection, or on demand via Hanzo's open API.
  • Spurl.net, a free online bookmarking service and search engine that allows users to save important web resources.
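
Requesting an on-demand capture typically amounts to submitting a URL to the service over HTTP. The sketch below uses a purely hypothetical endpoint (archive.example.com is not a real service, and WebCite, Archive-It and hanzo:web each define their own submission interfaces); it only shows the shape of such a call:

    import urllib.parse
    import urllib.request

    # Hypothetical endpoint; real services define their own APIs.
    ARCHIVE_ENDPOINT = "https://archive.example.com/submit"

    def request_archive(target_url):
        query = urllib.parse.urlencode({"url": target_url})
        with urllib.request.urlopen(ARCHIVE_ENDPOINT + "?" + query) as resp:
            # Assume the service replies with the address of the new snapshot.
            return resp.read().decode("utf-8")

    snapshot_url = request_archive("http://example.org/article.html")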


Database archiving

Database archiving refers to methods for archiving the underlying content of database-driven websites. It typically requires the extraction of the database content into a standard schema, often using XML. Once stored in that standard format, the archived content of multiple databases can then be made available using a single access system. This approach is exemplified by the DeepArc and Xinq tools developed by the Bibliothèque nationale de France and the National Library of Australia respectively. DeepArc enables the structure of a relational database to be mapped to an XML schema, and the content exported into an XML document. Xinq then allows that content to be delivered online. Although the original layout and behavior of the website cannot be preserved exactly, Xinq does allow the basic querying and retrieval functionality to be replicated.
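
As a rough illustration of this workflow (not the actual DeepArc or Xinq implementations), the sketch below exports one relational table to an XML document; the database file, table name and element names are assumptions made for the example:

    import sqlite3
    import xml.etree.ElementTree as ET

    def export_table_to_xml(db_path, table, out_path):
        conn = sqlite3.connect(db_path)
        conn.row_factory = sqlite3.Row
        root = ET.Element("table", name=table)
        # Each row becomes a <record>, each column a named <field>.
        for row in conn.execute("SELECT * FROM " + table):
            record = ET.SubElement(root, "record")
            for column in row.keys():
                field = ET.SubElement(record, "field", name=column)
                field.text = "" if row[column] is None else str(row[column])
        conn.close()
        ET.ElementTree(root).write(out_path, encoding="utf-8", xml_declaration=True)

    export_table_to_xml("site.db", "articles", "articles.xml")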


Transactional archiving

Transactional archiving is an event-driven approach that collects the actual transactions which take place between a web server and a web browser. It is primarily used as a means of preserving evidence of the content which was actually viewed on a particular website on a given date. This may be particularly important for organizations which need to comply with legal or regulatory requirements for disclosing and retaining information.


A transactional archiving system typically operates by intercepting every HTTP request to, and response from, the web server, filtering each response to eliminate duplicate content, and permanently storing the responses as bitstreams. A transactional archiving system requires the installation of software on the web server, and cannot therefore be used to collect content from a remote website.
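
A minimal sketch of the interception idea, written as a Python WSGI middleware, is shown below. It only demonstrates the principle (capture each response, drop duplicates by content hash, store the rest as raw bytes); real transactional archiving products also record the request, headers and timing, and the directory and naming scheme here are assumptions made for the example:

    import hashlib
    import os
    import time

    class TransactionalArchiver:
        """Wrap a WSGI application and store each distinct response body."""
        def __init__(self, app, store_dir="archive"):
            self.app = app
            self.store_dir = store_dir
            self.seen = set()                 # digests of bodies already stored
            os.makedirs(store_dir, exist_ok=True)

        def __call__(self, environ, start_response):
            # Materialise the response so it can be hashed and stored.
            body = b"".join(self.app(environ, start_response))
            digest = hashlib.sha256(body).hexdigest()
            if digest not in self.seen:       # filter duplicate content
                self.seen.add(digest)
                name = "%d-%s.bin" % (int(time.time()), digest[:12])
                with open(os.path.join(self.store_dir, name), "wb") as f:
                    f.write(body)             # keep the response as a bitstream
            return [body]

    # Usage: wrap an existing WSGI application on the server itself, e.g.
    # application = TransactionalArchiver(application)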


A number of commercial transactional archiving software products exist.

Difficulties and limitations

Crawlers

Web archives which rely on web crawling as their primary means of collecting the Web are influenced by the difficulties of web crawling:

  • The robots exclusion protocol may request that crawlers not access portions of a website. Some web archivists may ignore the request and crawl those portions anyway.
  • Large portions of a web site may be hidden in the deep web. For example, the results page behind a web form lies in the deep web because a crawler cannot follow a link to the results page.
  • Some web servers may return a different page to a web crawler than they would for a regular browser request. This is typically done to fool search engines into sending more traffic to a website.
  • Crawler traps (e.g., calendars) may cause a crawler to download an infinite number of pages, so crawlers are usually configured to limit the number of dynamic pages they crawl.

The Web is so large that crawling a significant portion of it requires a large amount of technical resources. The Web also changes so quickly that portions of a website may change before a crawler has even finished crawling it.
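
Two of these safeguards can be sketched directly: honouring the robots exclusion protocol and capping the number of pages fetched per site so a crawler trap cannot run forever. The user agent string, URLs and limit below are assumptions made for the example:

    import urllib.robotparser
    from urllib.parse import urlparse

    MAX_PAGES_PER_SITE = 1000                 # guard against calendar-style traps

    def allowed_to_fetch(url, user_agent="example-archiver"):
        parts = urlparse(url)
        robots_url = "%s://%s/robots.txt" % (parts.scheme, parts.netloc)
        parser = urllib.robotparser.RobotFileParser()
        parser.set_url(robots_url)
        parser.read()                         # fetch and parse robots.txt
        return parser.can_fetch(user_agent, url)

    fetched = 0
    for url in ["http://example.org/", "http://example.org/private/report.html"]:
        if fetched >= MAX_PAGES_PER_SITE:
            break                             # stop before a trap exhausts resources
        if allowed_to_fetch(url):
            fetched += 1                      # fetch and archive the page here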


General limitations

Not only must web archivists deal with the technical challenges of web archiving, they must also contend with intellectual property laws. Peter Lyman (2002) states that "although the Web is popularly regarded as a public domain resource, it is copyrighted; thus, archivists have no legal right to copy the Web." Some web archives that are made publicly accessible, such as WebCite or the Internet Archive, allow content owners to hide or remove archived content that they do not want the public to have access to. Other web archives are only accessible from certain locations or have regulated usage. WebCite also cites on its FAQ a recent lawsuit against the caching mechanism, which Google won.


References

  • Brown, A. (2006). Archiving Websites: a practical guide for information management professionals. Facet Publishing. ISBN 1856045536.
  • Eysenbach, G. and Trudel, M. (2005). "Going, going, still there: using the WebCite service to permanently archive cited web pages". Journal of Medical Internet Research 7 (5).
  • Masanès, J. (ed.) (2006). Web Archiving. Springer-Verlag. ISBN 3540233385.


See also

  • Archive
  • Archive site
  • Digital obsolescence
  • Heritrix
  • Internet Archive
  • UK Web Archiving Consortium
  • Web crawler
  • WebCite
