Thursday, February 05, 2009

PerformancePoint Dashboard Performance

I’m currently working on a project that is mixing PPS, Reporting Services 2008, ProClarity Analytics Server and Analysis Services 2008 with a lot of proactive caching….. Nice…!

The development environment we’re using does the job but I had been experiencing some really wild performance issues. One dashboard deployment would be fine and then the next it would run like a dog. This then drove me down a path of tuning every aspect of the solution so I thought I would share a few of my findings up here for posterity.

The cube has a number of measure groups, each with a number of partitions. We're partitioning by month due to the data volumes, with a current-day HOLAP partition on each measure group. I'm using SQL Server notifications (AS sets a trace against the target table) so everything is pretty much automatic.

Dashboards in the solution consist of a mix of Reporting Services, PAS and PPS native reports, all joined together using PPS filters. First off, the filter queries themselves: they weren't dynamic, so we went down the route of defining some server-side named sets and simply calling them from Dashboard Designer (a sketch of what that looks like is below). Nice and simple so far, but this is where the performance problems started to manifest.
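For reference, a server-side named set of this kind lives in the cube's MDX script and is then picked up by name in Dashboard Designer. This is only a minimal sketch; the dimension and hierarchy names are illustrative, not the ones from this project.

    // Minimal sketch of a server-side named set in the cube's MDX script.
    // [Product].[Product Categories] is an illustrative hierarchy name.
    CREATE SET CURRENTCUBE.[Dashboard Filter Products] AS
        [Product].[Product Categories].[Category].MEMBERS;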

With a simple dashboard up and a small number of members in the filter named set, the dashboard was taking far too long to return. There were four filters on this particular page. Looking at Profiler, I found that refreshing the page after a database update (remember, proactive caching is involved here) caused a query to be fired against Analysis Services, and every partition in a single measure group would be read, four times.

The measure group in question happens to be the one that contains the cube's default measure. This I considered to be a bad thing, as the production environment would have a lot more data and a lot more partitions to be read, four times, whenever a dashboard page was opened with filters.

To confirm the behaviour I created a dummy partition with a dummy measure in it and made that the cube's default measure. Running the same test showed four queries being fired, but this time the partition being read was tiny and the filter response in the dashboard was immediate. Un-processing the partition still left everything behaving correctly, with obviously no dummy measure data being read.
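The reason the default measure matters is that the filter queries never name a measure, so Analysis Services falls back to the cube's default measure to resolve them, and that drags its whole measure group into play. The default measure can be set in the cube properties, or equivalently in the MDX script; a minimal sketch, assuming a dummy measure named [Measures].[Dummy Measure] (an illustrative name, not from this project):

    // Point the cube's default measure at a cheap dummy measure so filter
    // queries that never reference a measure don't scan a large measure group.
    // [Measures].[Dummy Measure] is an illustrative name.
    ALTER CUBE CURRENTCUBE
        UPDATE DIMENSION Measures,
        DEFAULT_MEMBER = [Measures].[Dummy Measure];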

So point 1 is: be careful where you put your default measure. Next up are the Reporting Services reports. Creating parameters will create hidden data sets in the report that are there to support the available values list in the report parameters. Whenever the report is executed it will fetch the data for those data sets to populate the lists and multi-selects. If, as is the case on this project, the reports are designed to be accessed purely through dashboards using PPS filters, there is little point in having these data sets, so we removed them.

First, for each parameter set the available values to None, then remove the related data sets (you will most likely have to show the hidden data sets first) and any redundant data sources. Although these data sets are small, the Reporting Services reports on the dashboard were observably more responsive afterwards.
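For context, the hidden data sets the MDX query designer generates for a parameter's available values typically look something like the query below, and it runs on every report execution until the available values and the data set are removed. The cube and hierarchy names here are illustrative.

    // Typical auto-generated parameter data set query that gets removed.
    // [Sales] and [Product].[Product Categories] are illustrative names.
    WITH
      MEMBER [Measures].[ParameterCaption] AS
        [Product].[Product Categories].CURRENTMEMBER.MEMBER_CAPTION
      MEMBER [Measures].[ParameterValue] AS
        [Product].[Product Categories].CURRENTMEMBER.UNIQUENAME
      MEMBER [Measures].[ParameterLevel] AS
        [Product].[Product Categories].CURRENTMEMBER.LEVEL.ORDINAL
    SELECT
      { [Measures].[ParameterCaption],
        [Measures].[ParameterValue],
        [Measures].[ParameterLevel] } ON COLUMNS,
      [Product].[Product Categories].ALLMEMBERS ON ROWS
    FROM [Sales];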

The final overall improvement is to reduce the number of members in the filters and make sure you have selected the right type of filter for the job. On this project we've avoided multi-selects and opted for single-select lists where possible, as they seem to take less time to render on the client. Improving the performance of the filters is ultimately going to come down to the number of members you can select from. With SP1 of PPS you can now (I think) have cascading filters, but in an OLAP environment where you want to select a member at any level within a hierarchy that's not much use.

Try using clever MDX or security to reduce the member count; it really makes a lot of difference (a sketch of one approach is below).
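One way to do that is to restrict the filter's named set to members that actually have data, rather than exposing the whole hierarchy. A minimal sketch, with illustrative names:

    // Only expose products that actually have sales, instead of every member
    // of the hierarchy. Names are illustrative.
    CREATE SET CURRENTCUBE.[Active Filter Products] AS
        NONEMPTY(
            [Product].[Product Categories].[Product].MEMBERS,
            [Measures].[Sales Amount]
        );

In AS 2008 you could also declare this as a DYNAMIC SET so it is re-evaluated per query and respects the user's security context, which is another way of letting security trim the member count.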
