EPM GROOVY – Custom Data Export, Transformation, and Import in EPBCS without Data Management

Thanks to the recent addition of export and import data functions in the Groovy EPM package, we can now export, transform, and load data within any EPBCS application. Like Data Maps and Data Management, this is a privileged function that only administrators can use. There are a variety of ways to move data across cubes; however, the Groovy-based export and import methods are unique in that they allow us to fully transform each row at runtime. For example, even if the target is an ASO cube with completely different dimensionality, we can still transform the data and load it into that cube.

In this post, I move data from a BSO cube to an ASO cube that has an extra dimension, without using Data Management, Data Maps, or Data Grids.

DATA EXPORT – SET ROW AND COLUMN FILTER

The Data Export method includes row and column filters that limit the exported rows and columns based on member names. If the row and column filters aren't specified, the full data set is exported. Once the filters are set up, we can export the data. To make the filters dynamic, I often use runtime prompts (RTPs).

First, define the RTPs and cube details:
/* RTPS: {RTPScenario} {RTPVersion} {RTPYear} */

class Properties {
    static String SOURCE_CUBE = 'BSOPln1'
    static String TARGET_CUBE = 'ASORpt1'
}

// Cube handles
Cube srcCube = operation.application.getCube(Properties.SOURCE_CUBE)
Cube repCube = operation.application.getCube(Properties.TARGET_CUBE)

// Fetch user-entered RTP values
String scenario = stripQuotes(rtps.RTPScenario.enteredValue)
String version = stripQuotes(rtps.RTPVersion.enteredValue)
String year = stripQuotes(rtps.RTPYear.enteredValue)
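
Note that stripQuotes isn't an out-of-the-box function; it's a small helper (not shown in the snippet above) that removes the double quotes EPBCS wraps around RTP member values. A minimal sketch:

// Hypothetical helper: strips leading/trailing double quotes from an RTP value,
// e.g. "\"OEP_Plan\"" -> "OEP_Plan"
String stripQuotes(String mbrName) {
    return mbrName?.replaceAll(/^"|"$/, '')
}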

Define Row and Column Filters

I've written a reusable function, getMembers, that uses relationship functions to fetch members; once defined, it can be used again and again (a sketch of it follows the filter definitions below).

// Define row and column filters
List<String> columnExportFilter = getMembers(Properties.SOURCE_CUBE, 'Days', 'Month', 'lvl0descendants')
List<String> products = getMembers(Properties.SOURCE_CUBE, 'Products', 'All Products', 'lvl0descendants')
List<String> rowExportFilter = [scenario, version, year] + products
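
The getMembers helper itself isn't shown above. Here is a minimal sketch of one possible implementation, assuming the relationship argument (e.g. lvl0descendants) is combined with the anchor member into a member-list expression that Dimension.getEvaluatedMembers can evaluate:

// Hypothetical helper: evaluates a relationship function against an anchor member
// and returns the matching member names for use in the export filters.
List<String> getMembers(String cubeName, String dimName, String mbrName, String relationship) {
    Cube cube = operation.application.getCube(cubeName)
    Dimension dim = operation.application.getDimension(dimName, cube)
    // e.g. relationship 'lvl0descendants' + member 'Month' -> "lvl0descendants(Month)"
    return dim.getEvaluatedMembers("${relationship}(${mbrName})".toString(), cube)*.name
}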

EXPORT DATA

With these definitions in place, we are ready to export the data. The data exporter returns an iterator, which lets us process each row, transform it, and then import it. As seen below, several functions give us access to the row members, column members, and data values:

srcCube.createDataExporter()
    .setColumnMemberNames(columnExportFilter)
    .setRowFilterCriteria(rowExportFilter as String[])
    .exportData().withCloseable { exportIterator ->
        exportIterator.each { expItr ->
            List<String> rowMembers = expItr.rowMembers.collect { it.name }
            List<String> columnMembers = expItr.columnMembers.collect { it.name }
            Map<String, String> valuesMap = expItr.valuesMap
            println("rowMembers : $rowMembers")
            println("columnMembers : $columnMembers")
            println("valuesMap : $valuesMap")
        }
    }

Output from Job Console:
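
Since every row passes through the script at runtime, rows can also be filtered during iteration. For example, a sketch that skips rows containing no real data, assuming missing cells come through as '#Missing' in valuesMap:

// Sketch: skip rows whose cells are all missing (assumes missing cells
// are returned as '#Missing' in valuesMap)
srcCube.createDataExporter()
    .setColumnMemberNames(columnExportFilter)
    .setRowFilterCriteria(rowExportFilter as String[])
    .exportData().withCloseable { exportIterator ->
        exportIterator.each { expItr ->
            Map<String, String> valuesMap = expItr.valuesMap
            if (valuesMap.values().every { it == '#Missing' }) {
                return // nothing to process for this row
            }
            println("row with data: ${expItr.rowMembers*.name} -> $valuesMap")
        }
    }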

TRANSFORM AND IMPORT DATA

We now generate import rows by reading the row, column, and data objects from the data export operation; each row created during the operation is loaded into the target cube. As previously stated, I am adding an extra dimension to the row component: my ASO cube has an extra dimension, Channel, which I map to 'No Channel' for all rows.

// Create mapping for the extra dimension in ASO
Dimension channelDim = operation.application.getDimension("Channel", repCube)
Member channelMbr = channelDim.getMember("No Channel", repCube)
Map<Dimension, Member> unMappedDimMapping = [(channelDim): channelMbr]

srcCube.createDataExporter()
    .setColumnMemberNames(columnExportFilter)
    .setRowFilterCriteria(rowExportFilter as String[])
    .exportData().withCloseable { exportIterator ->
        repCube.createDataImportRequest().importData(columnExportFilter).withCloseable { dataImporter ->
            exportIterator.each { expItr ->
                // Transform the data row -> insert the Channel dimension
                Tuple curRowTuple = expItr.getRowTuple().plus(unMappedDimMapping)
                // Build the import row from the transformed tuple, column members, and values
                RowData updatedRow = createRowData(curRowTuple, expItr.columnMembers, expItr.valuesMap)
                dataImporter.addRow(updatedRow)
            }
        }
    }

Output from Job Console:

CONCLUSION

Performance is definitely a concern: it takes about 15 seconds to load 200 rows of data, roughly 6,000 cells, whereas the grid builder loads the same data set in under 3 seconds. However, the documentation offers little information on how to optimize this. Also, even if the import job fails, the parent job shows as successful, which makes it difficult to determine the real status of the import. I hope these issues will be addressed in upcoming patches.
