Why should we preserve our web content, and how can it be done? Here you can read more about this exciting field!
Archiving an organisation's web pages means storing earlier versions of the website so that it is possible to view, search, display and prove what has previously been published.
On 28 June 2012, the Swedish guidance for web development became official.
A "web crawler", "web spider", "web robot" or "web scutter" is a piece of software that automatically collects content from websites by following the links between pages. Web crawlers are used mainly by search engines such as Google and Yahoo, and for web archiving.
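The core idea described above, fetch a page, store it, follow its links, repeat, can be sketched in a few lines. This is a minimal illustration, not how Heritrix or the other tools below are implemented; the `fetch` callable and the in-memory "site" used in the usage example are assumptions made so the sketch is self-contained.

```python
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin


class LinkParser(HTMLParser):
    """Collects the href targets of all <a> tags on a page."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)


def crawl(start_url, fetch, max_pages=100):
    """Breadth-first crawl: fetch a page, archive it, queue its links.

    `fetch` is any callable that returns the HTML for a URL (for a live
    crawl it could wrap urllib.request.urlopen; here it is a test stub).
    """
    seen = {start_url}
    queue = deque([start_url])
    archive = {}  # url -> stored snapshot of the page
    while queue and len(archive) < max_pages:
        url = queue.popleft()
        html = fetch(url)
        if html is None:
            continue
        archive[url] = html
        parser = LinkParser()
        parser.feed(html)
        for link in parser.links:
            absolute = urljoin(url, link)  # resolve relative links
            if absolute not in seen:
                seen.add(absolute)
                queue.append(absolute)
    return archive


# Usage with a hypothetical in-memory "site" instead of live HTTP:
site = {
    "http://example.org/": '<a href="/a">A</a><a href="/b">B</a>',
    "http://example.org/a": '<a href="/">home</a>',
    "http://example.org/b": "no links here",
}
pages = crawl("http://example.org/", site.get)
print(len(pages))  # all three pages were visited and stored
```

A real archiving crawler adds politeness delays, robots.txt handling and deduplication on top of this loop, which is much of what distinguishes the tools compared in the CODA-WEBB study.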
In spring 2008, the LDB Centre and Luleå University of Technology collaborated on a project about the preservation of websites. This part is taken from the project's final report and describes three different tools for collecting websites: Heritrix, HTTrack Website Copier and PageNest. The entire study (written mainly in Swedish) can be found in the report CODA-WEBB (Projects/CODA/CODA-WEBB).
Information about the WARC file format for web preservation, and a link to open-source tools.
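WARC (standardized as ISO 28500) stores each capture as a record: a block of named header fields, a blank line, and the payload. A minimal sketch of that layout for a single "response" record might look as follows; real archiving tools write WARC files through tested libraries, and this fragment only illustrates the record structure.

```python
import uuid
from datetime import datetime, timezone


def warc_record(url: str, payload: bytes) -> bytes:
    """Build one minimal WARC 'response' record: header lines separated
    by CRLF, a blank line, the payload, then a blank line terminator."""
    headers = [
        "WARC/1.0",
        "WARC-Type: response",
        f"WARC-Target-URI: {url}",
        f"WARC-Date: {datetime.now(timezone.utc).strftime('%Y-%m-%dT%H:%M:%SZ')}",
        f"WARC-Record-ID: <urn:uuid:{uuid.uuid4()}>",
        f"Content-Length: {len(payload)}",
    ]
    head = "\r\n".join(headers).encode("ascii")
    return head + b"\r\n\r\n" + payload + b"\r\n\r\n"


record = warc_record("http://example.org/", b"<html>hello</html>")
print(record.decode("ascii").splitlines()[0])  # WARC/1.0
```

Because many such records are simply concatenated (and usually gzip-compressed per record), one WARC file can hold an entire crawl while each capture stays independently addressable.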
About the Internet Archive and the Wayback Machine for viewing collected web pages.
IIPC, the International Internet Preservation Consortium, has been an important organization in the field of web preservation since its founding in 2003.
Videos about digital preservation from the Library of Congress in the US.
In November 2009, KB (the National Library of Sweden) held an international workshop on web archiving: "Archiving the Web - New Perspectives". The recording can be found via the link below, or via a link on the page Library/Nordic countries/Sweden/National Library of Sweden.
Only in Swedish
Luleå University of Technology • 971 87 Luleå • Phone: +46 (0)920 49 10 00 • Fax: +46 (0)920 49 13 99
E-mail: General, Studies • Orgnr: 202100-2841 © LTU 2013