How to export and import field collection data in Drupal

Hello!

There are many great tools out there that make importing and exporting Drupal node content very easy. We deal with custom Drupal content often and have extensively used many of the (amazing) tools available, such as Node Export, Views Data Export, and Feeds, specifically with the Field Collection Feeds add-on.

With all of these solutions, we had limited to no success exporting our custom content plus field collection data. Our field collections contained fields configured to store multiple values as an array, which may be why. We didn't retest the failing tools against every variation of field options in our field collections; they simply didn't work for us.

So we looked into how hard it would be to write our own drush command to easily integrate into our code push system (the system through which code and data is propagated from our staging to production environments). All we needed was a simple way to export Drupal content to a CSV and then import from the CSV back into Drupal whenever needed.

We thought we might save other people from (most of) the trouble we went through by sharing our process and the code we used to create the drush command. Ultimately, we could release this as a Drupal module for a wider audience if there's enough demand for it.

Export data to a CSV

The easier part of this process involves pulling all the node data and exporting it as a CSV for later use. Putting a CSV together in PHP is relatively straightforward; I'll paste the export function below as a reference and explain further down:
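Since the original snippet isn't reproduced here, the following is a minimal sketch of what such an export callback looks like. The module name, output path, and error code are illustrative; `nodes_generator` is the helper discussed further down:

```php
<?php
/**
 * Drush callback for the "Export" command.
 * Sketch only: the module name and output path are illustrative.
 */
function drush_mymodule_export() {
  // Field collection machine name, passed as --c="field_collection_name".
  $collection = drush_get_option('c');
  if (empty($collection)) {
    return drush_set_error('EXPORT_NO_COLLECTION', dt('Missing the --c option.'));
  }

  $handle = fopen('export.csv', 'w');
  // nodes_generator() builds one CSV row (an array of values) per node.
  foreach (nodes_generator($collection) as $row) {
    fputcsv($handle, $row);
  }
  fclose($handle);
  drush_log(dt('Export complete.'), 'ok');
}
```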

This is the function triggered by an “Export” command within drush. You can see we’re looking for a passed option called “c”; the syntax would be --c="field_collection_name". Because this module was written for our own use, we didn’t put much emphasis on implementing a dynamic command that can simply read all the field collections for a content type and then parse all their fields, though if someone wanted to fork our code on GitHub (link at the bottom) and contribute that kind of dynamic functionality, that would be most helpful.
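For reference, a drush command and its options are declared via hook_drush_command(). A sketch of how the “c” option could be wired up, with an illustrative command name:

```php
<?php
/**
 * Implements hook_drush_command().
 * Sketch only: the command name "fc-export" is illustrative.
 */
function mymodule_drush_command() {
  $items['fc-export'] = array(
    'description' => 'Export node and field collection data to a CSV file.',
    'options' => array(
      'c' => 'Machine name of the field collection field to export.',
    ),
    'examples' => array(
      'drush fc-export --c="field_grp_assigned"' => 'Export the field_grp_assigned collection.',
    ),
  );
  return $items;
}
```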

You can also see that we have a simple format for our CSV:
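The original sample isn't reproduced here, but based on the fields described below, the header row would look something like this (column names are illustrative):

```
title,field_collection_name,option 1,option 2,assigned_nodes
```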

Everything’s pretty self-explanatory, right? Well, you’ll notice in the code snippet that I’m calling a function named nodes_generator. This is where most of the magic happens when exporting the data.
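A sketch of what such a nodes_generator function can look like; the content type and field names (field_option_1, field_option_2, field_assigned_nodes) are placeholders for your own configuration:

```php
<?php
/**
 * Build one CSV row per field collection item on each matching node.
 * Sketch only: bundle and field names are placeholders.
 */
function nodes_generator($collection_name) {
  $rows = array();

  $query = new EntityFieldQuery();
  $result = $query->entityCondition('entity_type', 'node')
    ->entityCondition('bundle', 'my_content_type')
    ->propertyCondition('status', 1)
    ->execute();
  if (empty($result['node'])) {
    return $rows;
  }

  foreach (node_load_multiple(array_keys($result['node'])) as $node) {
    $wrapper = entity_metadata_wrapper('node', $node);
    // The field collection field holds a list of collection items.
    foreach ($wrapper->{$collection_name} as $item) {
      // Single-value fields in the collection come out as plain scalars.
      $option1 = $item->field_option_1->value();
      $option2 = $item->field_option_2->value();
      // The multi-value entity reference: raw() gives the target_id values,
      // which we flatten with the pipe character for later parsing.
      $assigned = implode('|', (array) $item->field_assigned_nodes->raw());
      $rows[] = array($node->title, $collection_name, $option1, $option2, $assigned);
    }
  }
  return $rows;
}
```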

The bulk of the work done in the above nodes_generator function is pulling the fields from the provided field collection (passed from the export function that triggers the nodes_generator function) and dealing with the data by way of a few nested for-loops.

Again, this function is not very dynamic and would either have to be customized to accommodate your custom content, or perhaps extended so that additional argument and option fields in the drush command could customize the output. When run, it generates a CSV file with output similar to the following:
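The original sample output isn't reproduced here; with invented values, a couple of rows in that format would look something like:

```
My first node,field_grp_assigned,value A,value B,101|102|103
My second node,field_grp_assigned,value C,value D,104|105
```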

The “assigned_nodes” field is the field collection field data I’m pulling and exporting. The “field_collection_name” is self-explanatory. The fields “option 1” and “option 2” are fields in the collection group that contain only a single value (unlike the last field, which is multi-value).

The last field in the CSV is assigned NIDs from an Entity reference views widget that allows the user to search for nodes and “Assign” them to the field. All I want to export for this is the node IDs that serve as the “target_id” reference in the backend. You can see each field in the CSV is separated by a comma. In the last field, multi fields are further separated by the pipe character for parsing during import.

Import data from the CSV

The export process, again, is much simpler than the import process. For the import, we need to specify a source file and then loop through all of the CSV rows, preparing the data, creating a new node, and saving it. All of the data creation and manipulation is done using Drupal’s entity metadata wrappers.
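A sketch of that import callback, assuming the CSV column order from the export; the content type, field names, and default file path are illustrative:

```php
<?php
/**
 * Drush callback for the "Import" command.
 * Sketch only: bundle, field names, and file path are placeholders.
 */
function drush_mymodule_import($file = 'export.csv') {
  $handle = fopen($file, 'r');
  while (($row = fgetcsv($handle)) !== FALSE) {
    list($title, $collection_name, $option1, $option2, $assigned) = $row;

    // Create and save the host node first.
    $node = new stdClass();
    $node->type = 'my_content_type';
    node_object_prepare($node);
    $node->title = $title;
    $node->language = LANGUAGE_NONE;
    node_save($node);

    // Attach a new field collection item to the node.
    $item = entity_create('field_collection_item', array('field_name' => $collection_name));
    $item->setHostEntity('node', $node);

    $wrapper = entity_metadata_wrapper('field_collection_item', $item);
    $wrapper->field_option_1->set($option1);
    $wrapper->field_option_2->set($option2);
    // Pipe-separated NIDs become an array for the multi-value reference field.
    $wrapper->field_assigned_nodes->set(array_map('intval', explode('|', $assigned)));
    $wrapper->save();
  }
  fclose($handle);
  drush_log(dt('Import complete.'), 'ok');
}
```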

There’s quite a lot going on in the above function! Sometimes nested for-loops can make one’s head spin, but it’s the only way to deal with the multidimensional arrays (and the sheer number of them). There could be many more sanity checks and whatnot, but again, remember this is just a proof of concept.

The main thing worth discussing in the above function is the field collection portion. What took a bit of troubleshooting was figuring out how to take the data saved in the CSV, create a field collection reference for the newly created node, and save all the field collection data, including the multi-valued field_grp_assigned collection field.

Remember that CSV field separated by the pipe character? We use PHP’s explode() function to convert it into a proper array and then use the entity wrapper’s set() function to assign the array to the field. The other field sets are either static (single-value) values in the collection or regular node fields themselves.
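Concretely, with an illustrative field name and an illustrative pipe-separated value:

```php
<?php
// "101|102|103" is the pipe-separated value read from the CSV column.
$nids = array_map('intval', explode('|', '101|102|103'));
// $nids is now array(101, 102, 103).

// Hand the whole array to the multi-value field via the entity wrapper.
$wrapper->field_assigned_nodes->set($nids);
```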

This took some time to put together, so I hope it’s of use to someone out there! Feel free to check out the GitHub repo for the above code. Contribute, or fork it into a functional + dynamic module! 🙂

GitHub Repository