Mongoexport get property of nested objects in CSV output

finoutlook · Jan 31, 2014 · Viewed 8.4k times

I'm trying to get a CSV export from MongoDB using mongoexport.

My data is in this format:

{
    "_id": "99",
    "page_id": NumberLong(1122334455),
    "data": {
        "0": {
            "item_id": NumberLong(123456789),
            "item_name": "item1"
        },
        "1": {
            "item_id": NumberLong(987654321),
            "item_name": "item2"
        }
    },
    "last_updated_utc": ISODate("2013-12-19T13:17:43.994Z")
}

To do this I'm using the following command:

mongoexport -f _id,page_id,last_updated_utc --query '{ page_id: 1122334455 }' -d mydatabase -c mycollection --csv

This gives the output:

"99",1122334455,2013-12-19T13:17:43.994Z
exported 1 record

The problem is that I also need the item_name from each of the data elements in the output. data is a dynamic set of sub-documents which could contain no items or many.

If I add data to the fields (-f) parameter, it just writes the whole sub-document as a JSON string into the CSV for each object, which doesn't help when consuming the data later.

What I'm aiming to get is something like:

"99",1122334455,2013-12-19T13:17:43.994Z,item1
"99",1122334455,2013-12-19T13:17:43.994Z,item2

Almost denormalised, like an outer join in SQL, so that each row carries one of the data items.

Is this possible? How can I get the item_name values into the CSV output?

Answer

Neil Lunn · Feb 1, 2014

mongoexport is a utility for exporting your data as JSON (the default) or optionally as CSV. As you have noticed, any sub-document is written out as a JSON string, because that is the only valid representation of data that is not a top-level field. It is designed for the basic use cases.

For anything else you will need to program your own solution, reading the data and transforming it to CSV output yourself.
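
As a starting point only, here is a minimal mongo shell sketch (not a mongoexport feature) that walks the keyed sub-documents under data and prints one CSV row per item; the script name and output handling are just examples based on your sample document:

// flatten.js -- emit one CSV row per entry under "data"
db.mycollection.find({ page_id: 1122334455 }).forEach(function (doc) {
    for (var key in doc.data) {                     // keys "0", "1", ...
        var item = doc.data[key];
        print([
            '"' + doc._id + '"',
            doc.page_id.toNumber(),                 // NumberLong -> plain number
            doc.last_updated_utc.toISOString(),
            item.item_name
        ].join(","));
    }
});

Run it with mongo --quiet mydatabase flatten.js > items.csv so the shell banner does not end up in the file.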

If at all possible, rethink the structure of the data in the first place. The structure under data, with keyed sub-documents, doesn't make much sense. If it were an array instead, you could at least get half the job done using the Aggregation Framework:

{
    "_id": "99",
    "page_id": NumberLong(1122334455),
    "data": [
        {
            "item_id": NumberLong(123456789),
            "item_name": "item1"
        },
        {
            "item_id": NumberLong(987654321),
            "item_name": "item2"
        }
    ],
    "last_updated_utc": ISODate("2013-12-19T13:17:43.994Z")
}

This can be transformed with aggregation as:

db.sample.aggregate([
    // Produce one output document per element of the "data" array
    { $unwind: "$data" },
    // Keep the top-level fields and pull item_name up out of the sub-document
    { $project: {
        page_id: 1,
        item_name: "$data.item_name",
        last_updated_utc: 1
    }}
])

which yields

[
    {
        "_id" : "99",
        "page_id" : NumberLong(1122334455),
        "last_updated_utc" : ISODate("2013-12-19T13:17:43.994Z"),
        "item_name" : "item1"
    },
    {
        "_id" : "99",
        "page_id" : NumberLong(1122334455),
        "last_updated_utc" : ISODate("2013-12-19T13:17:43.994Z"),
        "item_name" : "item2"
    }
]

That is very much the denormalized form you are after, and gives us much more hope of converting to CSV.
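
From MongoDB 2.6 onwards you could, as a rough sketch, finish the job by adding an $out stage to write the flattened documents into a scratch collection (sample_flat and the database name here are just example names) and then point mongoexport at flat, top-level fields:

db.sample.aggregate([
    { $unwind: "$data" },
    { $project: {
        page_id: 1,
        item_name: "$data.item_name",
        last_updated_utc: 1
    }},
    { $out: "sample_flat" }    // materialise the result in a scratch collection
])

mongoexport -d mydatabase -c sample_flat -f _id,page_id,last_updated_utc,item_name --csv > items.csv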

The problem with your current structure is that each sub-document in data is keyed and data itself is not an array, so you are forced to traverse each element programmatically. It also limits the queries you can run, as each sub-document has to be addressed by its explicit key.

So there is no built-in tool for this, and your current structure is not making things easier. Change it if you can.