Mediawiki XML page importing


This page is about importing the text of wiki pages using an XML format. For images or other files see: Batch importing files into MediaWiki.

Creating an export to be reimported later

Use the command line (shell access to the server required!):

 cd /var/www/testwiki; php ./maintenance/dumpBackup.php --full --conf ./LocalSettings.php > ./testwikiexport.xml

(See also DumpBackup.php)
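
If only the latest revision of each page is needed, dumpBackup.php can be called with --current instead of --full (a sketch, using the same example wiki path as above):

 cd /var/www/testwiki; php ./maintenance/dumpBackup.php --current --conf ./LocalSettings.php > ./testwikiexport-current.xml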

Or use the special page Special:Export. Of particular interest: Parameters_to_Special:Export
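
For example, a URL-based export with parameters might look like this (a sketch; the wiki address and page titles are placeholders, and page titles are separated by a line break, i.e. %0A in the URL):

 https://example.org/wiki/Special:Export?pages=Page_one%0APage_two&templates=1&curonly=1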

Importing Data from the Command Line Interface

This is the preferred method, as it does not create additional versions (compare next section).

Import works directly with 7z (or zip, bzip2) compressed XML files! HOWEVER, PRESENTLY .7z DOES NOT WORK. Transfer the XML file to the server and execute (example):

 cd /var/www/v-xxx/w; sudo php ./maintenance/importDump.php /var/www/v-xxx/w/import.xml --conf ./LocalSettings.php
 cd /var/www/v-xxx/w; sudo php ./maintenance/rebuildall.php --conf ./LocalSettings.php
 cd /var/www/v-xxx/w; sudo php ./maintenance/runJobs.php    --conf ./LocalSettings.php --procs=3
 cd /var/www/v-species/o; sudo php ./maintenance/importDump.php ./atmp/import.xml --conf ./LocalSettings.php
 cd /var/www/v-species/o; sudo php ./maintenance/rebuildall.php --conf ./LocalSettings.php
 cd /var/www/v-species/o; sudo php ./maintenance/runJobs.php    --conf ./LocalSettings.php --procs=3
 # e.g. FOR Naturführer:
 cd /var/www/v-on/w; sudo php ./maintenance/importDump.php ./atmp/import.xml --conf ./LocalSettings.php
 cd /var/www/v-on/w; sudo php ./maintenance/rebuildall.php --conf ./LocalSettings.php
 cd /var/www/v-on/w; sudo php ./maintenance/runJobs.php    --conf ./LocalSettings.php --procs=3

(Rebuilding internal indices is necessary after import; rebuildall may be slow and can be replaced with

 cd /var/www/v-on/w; sudo php ./maintenance/rebuildrecentchanges.php    --conf ./LocalSettings.php

if necessary. RunJobs: if the import contains complex template relations, or when template relations or data entries in templates are updated, manually emptying the job queue may be necessary; check Special:Statistics in the wiki. Note: "--procs=3" will run three jobs in parallel, provided the server has the necessary number of processor cores.)
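
The size of the job queue can also be checked from the command line with the standard maintenance script showJobs.php (a sketch, using one of the example wiki paths from above):

 cd /var/www/v-on/w; sudo php ./maintenance/showJobs.php --conf ./LocalSettings.php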

Important: for all batch importing, the revision timestamp must be set to something newer than all old revisions; otherwise MediaWiki will sort the imported revision behind the existing revisions. The id in the imported XML is not necessary, however.
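
For example, a revision in the import XML might look like this (a sketch: the timestamp is a placeholder and must be newer than the existing revisions; "Import-User" with ID 4 are the example values used in the web-interface section below):

 <revision>
   <timestamp>2024-01-31T12:00:00Z</timestamp>
   <contributor><username>Import-User</username><id>4</id></contributor>
   <text xml:space="preserve">New page text</text>
 </revision>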

Batch deleting pages


 sudo php ./maintenance/deleteBatch.php \
   --conf ./LocalSettings.php \
   -r "remove wrong resolution" ./maintenance/deleteBatch.txt

deleteBatch.txt contains only the page titles (e.g. the file names), one per line.
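
A minimal example of the contents of deleteBatch.txt (hypothetical page titles; for file pages the File: prefix is part of the title):

 File:Example_photo_small.jpg
 File:Another_photo_small.jpg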

Note that for File:x.jpg pages, this will delete the file itself AND the description page, but the file will still seem to exist when called. Only manually clicking "delete all" in the file history will finish this. This is probably a bug, since the PHP code does attempt to handle file deletions.

Importing Data through the Special Pages Web Interface

The web interface under Special:Import will create extra revisions (in addition to those imported) designating the importing user. If you do not want to document who did a transfer, it may therefore be preferable to use the command-line version (see above). For the web import it may be desirable to create a special "Import-User", so that this name documents authorship better than a normal username would during the upload of the XML file. Importing creates two revisions for each page: revision 1 is the imported revision, revision 2 is the revision documenting the import process. If the imported data alone document this (e.g. when they already use Import-User and an appropriate comment), it is possible to delete the second revisions in the database (assuming Import-User has ID=4):

 DELETE FROM PREFIX_revision 
   WHERE PREFIX_revision.rev_user=4 AND PREFIX_revision.rev_minor_edit=1;
 -- Then the latest revision stored in page needs to be fixed:
 UPDATE (PREFIX_page LEFT JOIN PREFIX_revision AS R1 ON PREFIX_page.page_latest=R1.rev_id)
   INNER JOIN PREFIX_revision AS R2 ON R2.rev_page=PREFIX_page.page_id
   SET page_latest=R2.rev_id WHERE R1.rev_id IS NULL;
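
Afterwards you can verify that no page still points to a deleted revision (a sketch using the same PREFIX_ placeholder; the count should be 0):

 SELECT COUNT(*) FROM PREFIX_page
   LEFT JOIN PREFIX_revision ON PREFIX_page.page_latest=PREFIX_revision.rev_id
   WHERE PREFIX_revision.rev_id IS NULL;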

Create a page (first time)

To do this, export a single page to get the XML header and footer you need as a template later on. For creating a page, the important elements are:

  1. <contributor>your user name</contributor> and your user ID <id>123</id> (see the Preferences tab "User profile": Username and user ID)
  2. <title>new title</title> and
  3. <text xml:space="preserve">Wiki text</text>

Then put it together with the exported XML document header:

<mediawiki xmlns="" 
  xsi:schemaLocation="" version="0.4">
  <!-- … the rest of the XML header from an arbitrary wiki page export, up to </siteinfo> -->
  <page>
    <title>GBIF:cultivar</title>
    <revision>
      <contributor><username>User name</username><id>123</id></contributor>
      <text xml:space="preserve">{{Term
| collection = GBIF
| short URI = cultivar
| full URI =
| label = cultivar
| code = cultivar
| see also =
}}</text>
    </revision>
  </page>
</mediawiki>

This will create the page “GBIF:cultivar”.

Note that it is:

    <text xml:space="preserve">{{Term

… and not, with a line break after <text xml:space="preserve">:

    <text xml:space="preserve">
    {{Term
Before you use the import page Special:Import, make sure you have valid XML. Use an XML checker on your computer first if possible. After a large import of many pages it is recommended to run the maintenance scripts from the command line to update category data or data for Semantic MediaWiki:

 cd /var/www/v-xxxwiki/w; sudo php ./maintenance/rebuildall.php --conf ./LocalSettings.php
 cd /var/www/v-xxxwiki/w; sudo php ./maintenance/runJobs.php    --conf ./LocalSettings.php --procs=3
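
To check the XML validity before the import, xmllint can be used, for example (a sketch, assuming the libxml2 command-line tools are installed and the file is named import.xml):

 xmllint --noout import.xml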

Creating exports from a database

It is possible to create MediaWiki XML from a database, e.g. from Microsoft Access tables and queries. However, when pasting this into a text editor, the following has to be observed:

  1. Putting everything into one field will often fail, because problems occur when MS-Access-calculated fields exceed a certain size.
  2. Exporting in multiple columns may work better. The following then needs to be fixed in the text afterwards (see the sed sketch after this list):
    1. remove the first line with the field names
    2. remove tabulator characters (typically replace them with a blank)
    3. fix the double-quote escaping (both in XML attributes such as preserve and inside the element content): replace "<text with <text and </page>" with </page> (normally not necessary: "<page> and </comment>"); replace "" with ".

  1. Normally, multiple revision elements are placed in a single page element. It is, however, also possible to import them in separate page elements (this greatly simplifies some imports!).
  2. When importing through the web interface, additional versions are created with the date of import. In this case the sequence of imports counts rather than the dates, because these additional versions get the date/time of the import! Avoid using the web interface when importing versions!
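
A minimal sketch of the post-processing replacements described above, using sed (hypothetical file names; GNU sed is assumed for the \t tab pattern):

 sed -e '1d' -e 's/\t/ /g' -e 's/"<text/<text/g' -e 's@</page>"@</page>@g' -e 's/""/"/g' access-export.txt > import.xml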

Tips: The wiki code inside <text xml:space="preserve">…</text> must be encoded properly; the importer will not complain if you have an unencoded <i>Text</i> in the XML text, and you will be left wondering what the reason is. Replace at least the following characters:

 normal text    encoded text
 >              &gt;
 <              &lt;
 &              &amp;

For instance: &nbsp; must be encoded as &amp;nbsp;!
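
For example, the escaping can be done with sed (a sketch, assuming the raw wiki text is in a separate file with a hypothetical name; the ampersand must be replaced first, otherwise the other replacements would be double-escaped):

 sed -e 's/&/\&amp;/g' -e 's/</\&lt;/g' -e 's/>/\&gt;/g' wikitext.txt > wikitext-escaped.txt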

Script Approach (Bash, Sed)

  1. export page(s) via Special:Export
  2. get the XML header of the import wiki and save it locally
  3. edit the settings section of the bash script below and run it
  4. check whether the result is valid XML
  5. import the XML either via Special:Import or via the command line (no blank after \ or any other character!)
    cd /var/www/v-awikipath/here && \
     sudo -u www-data php ./maintenance/importDump.php  --conf ./LocalSettings.php  /path/to/Page-XML-Import.xml && \
     sudo -u www-data php ./extensions/TitleKey/rebuildTitleKeys.php --conf ./LocalSettings.php && \
     sudo -u www-data php ./maintenance/rebuildrecentchanges.php --conf ./LocalSettings.php

Bash Script to Convert or Modify a Special:Export

Bash script: ./ You can make the script executable with:

# u g o means: the user who owns the file (u), other users in the file's group (g), and all other users (o)
chmod ug+x # add executable mode for the owning user and group


./ # just show what the output would be
./ > Reimport_wikiname_what-kind_` date '+%Y-%m-%d_%H-%M'`.xml # export to file with a timestamp

The script is considered somewhat fragile because it depends on space indentation and on the assumption that a comment (if present) follows almost directly after <contributor></contributor>. But it can be used to do all necessary replacements in one single step.

#!/bin/bash
# @description: Convert an XML wiki export to a new XML re-import file; add comment, user name and user id as specified
# Usage
#   ./ > Reimport_what_wikiname_` date '+%Y-%m-%d_%H-%M'`.xml
# @dependency binary sed
# @dependency a saved file of the XML header (up to the closing </siteinfo>) of the re-import wiki, i.e. the part before all <page> elements start

# Settings section
wiki_user_name="The Wiki User Name"
wiki_user_id="XXX" # the corresponding user id
reimport_comment="re import of ..."

# reimport_header_file_path: can be any arbitrary Special:Export from the re-import wiki; the script only extracts the part it needs
reimport_header_file_path="/path/to/reimport-header.xml" # placeholder: adjust
# export_file_path: the Special:Export file containing the pages to be re-imported
export_file_path="/path/to/export.xml" # placeholder: adjust

if [[ ! -e $reimport_header_file_path ]];then
  echo -e "Error in $0"
  echo -e "Header file $reimport_header_file_path \e[1mdoes not exist!!\e[0m (stop)"
  exit 1;
fi
if [[ ! -e $export_file_path ]];then
  echo -e "Error in $0"
  echo -e "Wiki export file $export_file_path \e[1mdoes not exist!!\e[0m (stop)"
  exit 1;
fi

sed --silent '/<mediawiki/,/<\/siteinfo>/{p;}' "${reimport_header_file_path}"
sed --silent '/<page>/,/<\/page>/{/<minor\/>/d;p}' "${export_file_path}" | sed "
# general replacements: set contributor and comment to the values from the settings section
  s@<username>[^<]*</username>@<username>${wiki_user_name}</username>@
  s@<id>[0-9]*</id>@<id>${wiki_user_id}</id>@
  /<\/contributor>/ {
    # append the following line to check whether a <comment> is already present
    N
    # a comment exists: replace its content with the re-import comment
    /<comment>/  s@<comment>[^<]*</comment>@<comment>${reimport_comment}</comment>@
    # no comment exists: insert the re-import comment directly after </contributor>
    /<comment>/! s@</contributor>@</contributor>\n      <comment>${reimport_comment}</comment>@
  }
# perhaps additional sed replacement commands here
" | sed "
# (d)elete lines with tags not wanted for re import
/^    <id>/{d};
/^      <id>/{d};
/^      <format>/{d};
/^      <model>/{d};
/^      <parentid>/{d};
/^      <sha1>/{d};
/^      <timestamp>/{d};
s@<text xml:space=\"preserve\" bytes=\"[0-9]\+\"@<text xml:space=\"preserve\"@g;
"
echo "</mediawiki>"

See also: Batch importing files into MediaWiki (that is: images, etc.)