Blank "Import-export"

Good health to all Habr dwellers!


MODx Revolution is convenient in many ways. Everything you can do in MODx Evolution you can also do in MODx Revolution, given some imagination and patience. However, after moving to Revolution many people face the question: how do you drag your content from one engine to the other? It is one thing if you have a dozen resources; copy-paste will serve you there. It is quite another thing with collections of content, catalogues and the like.


Background

I had two collections: a joke book and a story book. In the first I collected my favourite jokes, in the second stories from Yaplakal, IThappens and other interesting portals. All of it lived on Evolution 1.0.5. But one day I moved my entire multi-domain website to a single engine and a single database; in short, I switched to Revolution. Naturally, the question of content migration arose. The "about me" and music sections were simple: copy-paste. I did not bother with the forum, as it still runs on phpBB. But the joke book and the story book had to be put off indefinitely, because no amount of patience would have been enough to copy-paste everything accumulated there...

Export

On the old site there lived a tiny snippet that pulled a random joke from the joke book. The point is, the joke book could already export its data: at some point I made a special page that exported all of the site's content in JSON format, then forgot about it. When the question of transferring the data came up, I remembered it.

Why JSON? Probably simply because I am damn tired of all the XML parsers. Especially since PHP has the simple JSON functions json_encode and json_decode. This convenience makes the JSON option far preferable to all the others.
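
For reference, a minimal round trip with these two functions looks like this:

<?php
// Encode a nested array into a JSON string...
$data = array('items' => array(array('name' => 'Test', 'alias' => 'test')));
$json = json_encode($data);
// ...and decode it back; passing true as the second argument
// returns associative arrays instead of stdClass objects.
$decoded = json_decode($json, true);
echo $decoded['items'][0]['name']; // prints: Test
?>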

Exporting to JSON is simple. Here is the content of the export page (its template is (blank)):
{"items":[
[[Ditto? &startID=`162` &tpl=`cat` &tplLast=`catLast`]]
]}

The content of the cat chunk:
{
  "name":"[+pagetitle+]",
  "alias":"[+alias+]",
  "template":"[+template+]",
  "hidemenu":"[+hidemenu+]",
  "content":[
    [!Ditto? &startID=`[+id+]` &tpl=`item` &tplLast=`itemLast`!]
  ]
},

catLast is the same, only without the trailing comma. The content of the item chunk:
{
  "name":"[+pagetitle+]",
  "alias":"[+alias+]",
  "template":"[+template+]",
  "hidemenu":"[+hidemenu+]",
  "content":"[+content:strip:noquotes+]"
},

itemLast is the same, only without the trailing comma.
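
Put together, the export page yields JSON of roughly this shape (the values below are made-up placeholders):

{"items":[
  {
    "name":"Jokes",
    "alias":"jokes",
    "template":"3",
    "hidemenu":"0",
    "content":[
      {
        "name":"First joke",
        "alias":"first-joke",
        "template":"4",
        "hidemenu":"1",
        "content":"Text of the first joke..."
      }
    ]
  }
]}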

The phx:noquotes snippet:
<?php
// Replace double quotes with their HTML entity
return str_replace('"','& q u o t ;',$output); // Remove the spaces in the replacement string! *
?>

* The spaces are there only because of how the HTML entity would otherwise be interpreted on this page; remove them in the actual snippet.
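
As an alternative (a sketch I have not tested in Evolution's PHx context), the quotes could be escaped the JSON way by letting json_encode do the work:

<?php
// json_encode() wraps a bare string in double quotes and escapes
// all inner quotes, backslashes and control characters;
// substr() strips exactly the outer pair of quotes.
return substr(json_encode($output), 1, -1);
?>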

The resulting file was impressively large. Oh, and do not forget to set the content type on the export page; I used text/javascript, though application/json would be the more accurate MIME type. Supposedly Ditto can export data to JSON directly, but there was no time to look into that.

Import

So the file is ready. What next? Then I came across an article about building a social network on MODx, where I saw that new documents can be created programmatically in MODx Revolution. The idea was born, and this snippet followed:

<?php
// Import from a JSON file
// Adds a single resource
function addItem($ctx,$pagetitle,$template,$isfolder,$hidemenu,$parent,$alias,$content,$td){
    global $modx;

    $newResource = $modx->newObject('modResource');

    $newResource->fromArray(array(
        'pagetitle' => $pagetitle,
        'longtitle' => $pagetitle,
        'content' => $content,
        'template' => $template,
        'isfolder' => $isfolder,
        'hidemenu' => $hidemenu,
        'parent' => $parent,
        'published' => '1',
        'alias' => $alias,
        'context_key' => $ctx
    ));

    if ($newResource->save()) {
        $id = $newResource->get('id');
        $modx->cacheManager->refresh();
        $modx->reloadConfig();

        // Attach the mapped TV values to the new resource
        if (is_array($td)) {
            foreach($td as $key => $val) {
                $newTV = $modx->newObject('modTemplateVarResource');
                $newTV->set('contentid',$id);
                $newTV->set('tmplvarid',$key);
                $newTV->set('value',$val);
                $newTV->save();
            }
        }

        return $id;
    } else { return false; }
}
// Recursively processes one element of the data array
function handleItem($ctx,$item,$parent,$tpls,$tvs,$handleChildren=false){
    $hidm = isset($item['hidemenu']) ? $item['hidemenu'] : '0';
    $isf = is_array($item['content']) ? '1' : '0';
    $content = is_array($item['content']) ? '' : $item['content'];
    // Map the old template id onto the new one; fall back to 0 (blank)
    $tpl = isset($tpls['tpl'.$item['template']]) ? $tpls['tpl'.$item['template']] : '0';

    $td = array();
    foreach($tvs as $tvn => $tvv) if (array_key_exists($tvn,$item)) $td[$tvv] = $item[$tvn];
    $ret = '';
    if ($id = addItem($ctx,$item['name'],$tpl,$isf,$hidm,$parent,$item['alias'],$content,$td)) {
        $ret = 'Resource "<b>'.$item['name'].'</b>" imported successfully! '
             . 'New ID: <b>'.$id.'</b><br />';
        if (is_array($item['content']) && $handleChildren)
            foreach ($item['content'] as $i) $ret .= handleItem($ctx,$i,$id,$tpls,$tvs,$handleChildren);
        return $ret;
    } else { return 'Resource "<b>'.$item['name'].'</b>" not imported!<br />'; }
}
// Log header
$cons = '<h1>Item import log</h1>';
// Number of items imported per pass (for low-powered systems)
$item_count = isset($itemCount) ? $itemCount : 4;
// Context to import into
if (!isset($curContext)) $curContext = 'web';
// Index of the next batch of items to import (for low-powered systems)
$next_items = isset($_GET['jsonimportnext']) ? intval($_GET['jsonimportnext']) : 0;
// Template mapping
$tpls = array();
if (isset($templates)) {
    $tmp = explode(',',$templates);
    foreach($tmp as $val) {
        $tpls_d = explode('=>',$val);
        $tpls['tpl'.$tpls_d[0]] = $tpls_d[1];
    }
}
// TV parameter mapping
$tvs = array();
if (isset($tvParams)) {
    $tmp = explode(',',$tvParams);
    foreach($tmp as $val) {
        $tvs_d = explode('=>',$val);
        $tvs[$tvs_d[0]] = $tvs_d[1];
    }
}
// The process itself
if (isset($source) && isset($rootID)) {
    if ($import_content = @file_get_contents($source)) {
        $import_data = json_decode($import_content,true);
        $import_count = count($import_data['items']);
        if ($item_count != 0) {
            for($c = 0; $c < $item_count; $c++) {
                $n = $item_count*$next_items+$c;
                if (isset($import_data['items'][$n]))
                    $cons .= handleItem($curContext,$import_data['items'][$n],$rootID,$tpls,$tvs);
            }
            $this_res = $modx->resource->get('alias');
            $this_res .= '.html';

            if (($item_count*$next_items+$item_count-1) < $import_count) {
                $cons .= '<br /><a href="'.$this_res.'?jsonimportnext='
                       . ($next_items+1).'">'
                       . 'Import next items</a><br />';
            } else { $cons .= '<br /><a href="'.$this_res.'">Start</a>'; }
        } else {
            foreach ($import_data['items'] as $item)
                $cons .= handleItem($curContext,$item,$rootID,$tpls,$tvs,true);
        }
    } else { $cons .= 'Cannot get source!<br />'; }
} else { $cons .= 'Invalid execution parameters!<br />'; }

return $cons;


I must say right away: this does not claim to be a universal solution, and the code is barely commented, alas; I was in a hurry to share it with those who need it. If the solution seems interesting, I will keep working on it and may turn it into a full-fledged MODx add-on.

The snippet accepts the following parameters:

    source (required) — the URL of the source JSON file.

    itemCount — the number of items imported per pass (for low-powered systems). The default is 4. If set to 0, everything is processed in one go, recursively.

    templates — the template mapping. A comma-separated list of mappings in the format old_id=>new_id, where old_id is a template id on the old site and new_id is the corresponding template id on the new site. If the parser finds no match, template 0 (blank) is assigned.

    tvParams — the TV parameter mapping. A comma-separated list of mappings in the format old_name=>new_id, where old_name is a variable name on the old site and new_id is the id of the variable on the new site. If the parser finds no match, the variable is ignored.

    curContext — the context to import into. If not set, it defaults to "web".

    rootID (required) — the id of the resource under which the documents will be imported.
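
For example, a minimal call of the snippet (with hypothetical ids and URL) could look like this:

[[!importJSON? &source=`http://old-site.example/export.html` &itemCount=`4` &templates=`3=>5,4=>7` &tvParams=`author=>12` &rootID=`15`]]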



Why all this talk about performance? Because when I ran the first version of the snippet, which processed everything recursively in one go, the server answered with a 502 error. Simply put, the hosting choked on the load; no wonder, given how many documents there were. With batching, itemCount=`4` and, say, 100 top-level items mean 25 passes.

How to use

To start, create a simple template:
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="EN"><head>
<meta http-equiv="content-type" content="text/html; charset=utf-8" />
<base href="/" />
<title>[[*pagetitle]]</title>
<style type="text/css">
body { font: 12px monospace; }
</style>
</head><body><div align="center"><div style="text-align: left; width: 800px;">
[[!importJSON? &source=`[[*sourceURL]]` &itemCount=`6` &templates=`[[*templatesReplace]]` &tvParams=`[[*tvsReplace]]` &curContext=`[[*currentContext]]` &rootID=`[[*importDestination]]`]]
</div></div></body></html>

Then create the TVs sourceURL, templatesReplace, tvsReplace, currentContext and importDestination and bind them to the template. Do not scold me about currentContext and lecture me about context_key: in theory you can create a single page and import data into different contexts with it. That is basically all.
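
For instance, with made-up ids and URL, the mapping TVs could hold values like these:

sourceURL: http://old-site.example/export.html
templatesReplace: 3=>5,4=>7
tvsReplace: author=>12
currentContext: web
importDestination: 15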
I will add how I used this thing myself. Note that in the export template I exported one category at a time, without the category containers, changing startID every time, because of the load limitations. My sequence of actions:

  1. On the old site, open the export file for editing. Leave it open for further actions.
  2. On the new site, open for editing the file into which the content is transferred (the import file). Change its template to the JSON import template. Save.
  3. In the import file's parameters, set the current context, the URL of the export file, and the template and TV mappings. Save.
  4. In the export file, change startID to the id of the parent resource whose content is being exported. Save.
  5. In the import file, set the id of the resource into which the documents will be imported. Save.
  6. Open the import file for viewing, then repeat until a link labelled "Start" appears:
      1. Wait until the page finishes loading.
      2. Click the "Import next items" link.
  7. Once all the needed resources are imported, return to step 4 if there is anything else to import.

Yes, I know: for better performance one could do all of this with direct database queries. Only, first, it is not a fact that this would fix the 502 error. Second, there was no time to study what happens in the database besides site_content when a resource is created. Third, had I written such a solution, I would have been pelted with comments along the lines of "wha-at about xPDO?".

Once again: this is only a rough sketch of a solution. Thank you all for your attention to my latest reinvented wheel!

Article based on information from habrahabr.ru
