Monday, November 02, 2009

Azure – It’s Not Big and it’s Not Clever

I’ve played with Azure and yes, it does what it’s supposed to, but coming from a business intelligence background I’m struggling to see what kind of advantage my area of focus will gain. I would love to see huge databases accessible in the cloud, attached to huge cubes that use huge ROLAP partitions to run huge queries, but until that time comes along I’m going to sit on a huge fence, watch and wait.

One project I’m working on has dimensions with over 50 million members in them. Due to the way the business works, a Process Update has to be performed every night. Lots of attribute hierarchies and lots of members means lots of time taken to complete. Throwing more hardware at it makes this faster, but what happens when the cost of the infrastructure exceeds the value the client receives from their analysis? This client will look at how they can reduce costs elsewhere, whether that be licensing (open source perhaps) or the platform (Amazon EC2 perhaps), neither of which is going to help Microsoft.

When I first saw that the Azure platform was changing from its early form into what I considered a natural progression of the SQL Server platform, from Express to Enterprise to Azure, I was pretty positive. This coupled with Analysis Services could mean smaller companies that produce a lot of data could get top class analytics without having to break the bank with massive hardware expense. But I haven’t seen anything that makes me feel confident this will happen. Perhaps the Azure OLAP platform will be a huge 128-bit version of Excel with PowerPivot (don’t get me started). Who knows? Someone should, at least.

Azure, PowerPivot, Reporting Services, PerformancePoint, ProClarity, MDM. In my opinion all of these things should be part of a single cohesive strategy that allows users to access data in a simple way whilst ensuring organisations can maintain a level of data governance, and I’m just not seeing it. Why do I still have to tell a client that they need to use this tool because it’s the one you can change the line colours in, but you’ll need to use the other tool for pie charts, and oh, you want it integrated into your portal? Then you’ll need this one instead. It’s scattergun, not particularly professional and very worrying.

I’m seeing this from a very focused view, but from my experience a real-world one, and I just can’t see Azure fitting into it at the moment. I would love to have my mind changed, so please do.


Thursday, July 30, 2009

The Grand Microsoft Crossover

The rumour on the grapevine is that Microsoft are preparing an all-out offensive on rivals Google, Apple and Sony by drawing upon the expertise of the various groups around the Redmond giant.
With the positive reception gained by Project Natal at E3 earlier this year, and the planned release of Windows 7 and Office 2010 over the next 18 months, Microsoft are embarking on a strategy that they hope will smite their competition.

Codenamed "Project Flap", Microsoft will make the Natal hardware the de facto method for interfacing with all of their other offerings. Users will be able to use Natal to retrieve lost Word documents by either jumping up and down furiously in front of their screen or by using specially adapted four letter keywords. "This will revolutionise how people will work in the next decade", a Microsoft source close to Project Flap is quoted as saying. But its benefits don't stop at improved interaction with the next generation of software: "our research has demonstrated that looking like a complete tool in the workplace has an impact on how people are perceived". When pressed on whether this impact would be negative, the source responded, "at this time we can't comment on the details of the research".


Before everyone pins their hopes on every office looking like a rehearsal room for Britain's Got Talent, Microsoft's rivals are sure to respond. Google is rumoured to be looking at a free camera add-on for Google Docs that will work in a similar way to Natal, the major difference being that users will have to stand facing away from the screen as adverts are projected onto their backs.
Sony also has plans in this space. They will introduce a new PS3 / Vaio office application that will provide users with a cut-down feature set that will only work with Sony's proprietary SAD (Sony Augmented Document) format. Whatever Apple will do will of course be White and expensive.

So as Microsoft's rivals prepare to return fire and stick two fingers up at Natal, they had better take care; flicking the "V" is the new command in Windows 7 for formatting your C drive..!

Tuesday, July 28, 2009

Over The Last Several Weeks I Have Been Mostly Listening To…


Hands – Little Boots


La Roux – La Roux


Lungs – Florence + the Machine

Spotify Playlist

A Little Something About PPS Filters (part 3)

It’s been a while since I started on this topic so hopefully I’ll remember what I was attempting to say.

For a particular client we have had issues with a multi level dimension hierarchy that contains several thousand members. As I’ve said before, and numerous people have experienced, PPS filter performance is not great (though word on the grapevine is that this is no longer an issue in v2). For various reasons, creating multiple dashboards to satisfy different users and restricting the users using OLAP security weren’t options, so we came up with a “third way”.

The third way was to provide the users with the ability to change their preferences and alter what they could see in the report hierarchy. First we needed to provide the users with a screen where they could make their selections. This was as simple as providing a PPS dashboard page that contained a filter and a report that used the filter.

The filter is based off of tabular data, as explained in the last post. The data itself is a parent child data set generated from the dimension that I want to restrict, turning my source table for the dimension from something that looks like this;


To something that looks like this;


There are reasons I’ve done things like place a letter at the beginning of each of the IDs. First, it ensures that every member has a different ID, and secondly it identifies the level of the original hierarchy the member is from without having to write queries to work this out.
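The screenshots of the before and after tables haven’t survived, but as a sketch (the rows are purely illustrative; only the letter prefixes and the ‘ALL’ key come from the SQL in these posts), the transformation looks like this:

```
-- Source table: one row per town, one column per level
CountryID  RegionID  CountyID  Town
1          4         26        Reading

-- Parent/child version: one row per member, letter prefix = level
LocationID  ParentLocationID  LocationDesc
B1          ALL               United Kingdom    (country)
C4          B1                South East        (region)
D26         C4                Berkshire         (county)
```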

This has been placed into a view, which is just a series of select statements from each level of the hierarchy with unions between them. With that source view a tabular filter can be built and put into a dashboard page. At the top of the view I’ve put a statement in for the “All” member; remember, this isn’t Analysis Services, so there’s no dimension and no “All” member provided for free.
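As a sketch (the flat source table name, DimTown, and the description columns are assumptions; the level prefixes match the ones used in the SQL later in this post), the view is just the manufactured “All” row plus one unioned SELECT per level:

```sql
CREATE VIEW vwDimTown
AS
-- Manufactured "All" member; this is tabular data, so there is no
-- Analysis Services dimension to provide one for us.
SELECT 'ALL' AS LocationID
      ,NULL  AS ParentLocationID
      ,'All Locations' AS LocationDesc
UNION ALL
SELECT DISTINCT 'B' + CAST(CountryID AS varchar(5))
      ,'ALL'
      ,CountryDesc
FROM DimTown
UNION ALL
SELECT DISTINCT 'C' + CAST(RegionID AS varchar(5))
      ,'B' + CAST(CountryID AS varchar(5))
      ,RegionDesc
FROM DimTown
UNION ALL
SELECT DISTINCT 'D' + CAST(CountyID AS varchar(5))
      ,'C' + CAST(RegionID AS varchar(5))
      ,CountyDesc
FROM DimTown
```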


Again, referencing back to the previous post on this: for the filter value to be retained in the ParameterValues table you will need to associate it with a report. A simple Reporting Services report will do, and my preference is to provide the user with some feedback on the selections they’ve made. If they have selected the “All” member then they get a list of all of the locations, otherwise only the members that have been selected.

The Reporting Services input parameter from the filter is passed into some SQL that, in this case, looks like this;

   IF EXISTS (
    SELECT 1
    FROM vwDimTown
    WHERE 'ALL' IN (@Location))
   SELECT DISTINCT CountryDesc, RegionDesc
   FROM vwDimTown
   ELSE
   SELECT DISTINCT CountryDesc, RegionDesc
   FROM vwDimTown
   WHERE 'D' + CAST(CountyID AS varchar(5))
      IN (@Location)
   OR 'C' + CAST(RegionID AS varchar(5))
      IN (@Location)
   OR 'B' + CAST(CountryID AS varchar(5))
      IN (@Location)

With the Reporting Services report on the page we now have a mechanism for capturing a user’s preferences, like so;


Retrieving the preferences is the next part. For that we build on the SQL from the previous post, which shows how to extract rows from the ParameterValues table in a meaningful way; this time we will put the logic into a stored procedure.

The procedure will perform a number of tasks in addition to pulling the selected members out of the PPSMonitoring database. We also need the final output to be formatted as an MDX set in the format {member 1, member 2, member 3, … member n}. So let’s break down the procedure into its constituent parts. First, the declarations.

CREATE PROCEDURE uspGetFilterPreference 
     @Login varchar(255)
    ,@FilterID varchar(50) 
    ,@StrToSet varchar(max) OUTPUT 
AS

The first of the variables, Login, is the logon of the person whose preferences we are trying to find. FilterID is the GUID identifying the filter we are interested in. Finally StrToSet is the output variable that will contain my final MDX set string.

DECLARE @Keys table( 
MemberKey varchar(255)) 
DECLARE @OutputString varchar(max) 

INSERT @Keys(MemberKey) 
SELECT REPLACE( 
           REPLACE( 
               CAST( 
                   d.Members.query('LocationID') 
                   AS VARCHAR (255) 
               ), '<LocationID>', '' 
           ), '</LocationID>', '') 
    AS MemberKey 
FROM PPSMonitoring..ParameterValues 
CROSS APPLY SerializedXml.nodes('//MyDataTable') 
    AS d(Members) 
WHERE [Login] = @Login 
AND ParameterUniqueName = @FilterID

This section gets the values out of the ParameterValues table and loads them into a table variable called Keys. Along the way we strip out any XML syntax (element names, tags, etc.), leaving a table variable containing the key values from the tabular filter.

IF EXISTS ( 
    SELECT 1 
    FROM @Keys  
    WHERE MemberKey = 'ALL')
    SELECT @OutputString = 
        '[Town].[County].[County].MEMBERS'

Here we check to see if the “All” member has been selected. If it has, then we set the OutputString variable to be all members from the County attribute hierarchy of my Town dimension, NOT the user hierarchy. This is really important for later on.

If the “All” member has not been selected, then we work out which members have been selected and, using a set of union statements, get a list of true dimension keys;

  ELSE
  SELECT @OutputString = 
    SUBSTRING((
      SELECT ', ' 
            + '[Town].[County].&[' 
            + CAST(ID AS varchar(10)) 
            + ']' 
      FROM (
       SELECT ID 
       FROM DimTownTabular 
       WHERE ParentLocationID IN 
         (SELECT MemberKey FROM @Keys 
          WHERE LEFT(MemberKey, 1) = 'B') 
       UNION 
       SELECT ID FROM dbo.DimTownTabular 
       WHERE ParentLocationID IN 
         (SELECT MemberKey FROM @Keys 
          WHERE LEFT(MemberKey, 1) = 'C') 
       UNION 
       SELECT ID FROM dbo.DimTownTabular 
       WHERE LocationID IN 
         (SELECT MemberKey FROM @Keys 
          WHERE LEFT(MemberKey, 1) = 'D')
            ) a 
  FOR XML PATH( '' )), 3, 1000 ) 

This is why we have the ID column and the letter prefix; it makes this bit a lot easier to manage. The output of this, and the value that gets dropped into the OutputString variable, is almost exactly what we want. The only issue is that using the FOR XML PATH statement to produce a comma delimited string of the members replaces “&” symbols with “&amp;”. A quick …

SELECT @StrToSet = '{' 
                 + REPLACE(@OutputString,'.&amp;', '.&') 
                 + '}'

… fixes that and puts the curly brackets around to finish off our set syntax and the procedure as a whole.

Run the procedure against the username and filter you’re querying and, depending on how many members you selected in the tabular filter, you should get something like this;

{[Town].[County].&[26], [Town].[County].&[63]}

Now I would recommend that everyone goes out and gets this book:

MDX Solutions
(Amazon link)

And go to chapter 10, where there is a section called “Using Stored Procedures for Dynamic Security”. This explains how to use a simple Analysis Services stored procedure (a .NET DLL) to affect which members in a dimension users are able to see. All the code does is retrieve a number of rows from a SQL query and pass them into Analysis Services security as an MDX set object. In my stored procedure I made it simpler and just executed a stored procedure and output a string.

Unfortunately I haven’t got this all packaged up as a project, but my recommendation would be to create a C# project, build the DLL, and also build a command line application that references the DLL just to test everything is working OK.


Also, things like error handling and the connection string configuration pointing to the PPS database should all be handled in this code. My function is called “GetPreferredCounties” and has two input variables, the username and the filter ID, both as strings.
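I haven’t packaged the assembly up, but a minimal sketch of the function might look like this (the class name and connection string are assumptions; the stored procedure call matches uspGetFilterPreference above):

```csharp
using System.Data;
using System.Data.SqlClient;

public static class FilterPreferences
{
    // Calls uspGetFilterPreference and returns the MDX set string,
    // e.g. "{[Town].[County].&[26], [Town].[County].&[63]}".
    public static string GetPreferredCounties(string login, string filterId)
    {
        // Connection string is illustrative; in practice read it from config.
        using (var conn = new SqlConnection(
            "Data Source=MYSERVER;Initial Catalog=PPSMonitoring;Integrated Security=SSPI"))
        using (var cmd = new SqlCommand("uspGetFilterPreference", conn))
        {
            cmd.CommandType = CommandType.StoredProcedure;
            cmd.Parameters.AddWithValue("@Login", login);
            cmd.Parameters.AddWithValue("@FilterID", filterId);
            var output = cmd.Parameters.Add("@StrToSet", SqlDbType.VarChar, -1);
            output.Direction = ParameterDirection.Output;

            conn.Open();
            cmd.ExecuteNonQuery();
            return (string)output.Value;
        }
    }
}
```

A command line harness that simply calls GetPreferredCounties and writes the result to the console is the easiest way to test this before registering the DLL with Analysis Services.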

After registering the assembly with the Analysis Server we can now set up the final piece. In one of the roles in your cube (you have created one, right?), within the dimension data security for the County attribute of the Town dimension, I can now put in the syntax to make the stored procedure call and retrieve the string.


In my example it looks like this.


The USERNAME function will pass the current user’s logon credentials through to the function, and the STRTOSET function will take the output string from the function and convert it into something Analysis Services will understand as a set.
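The screenshot of my example is missing, but the allowed member set expression will be along these lines (the assembly name and the GUID are placeholders, not the real values):

```mdx
// Allowed member set for the County attribute of the Town dimension
StrToSet(
    MyAssembly.GetPreferredCounties(
        USERNAME
       ,"00000000-0000-0000-0000-000000000000" // filter GUID placeholder
    )
)
```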

If all has gone well, Analysis Services is now using the output from the stored procedure to provide a restricted list of dimension members to a user; a list that the users themselves can control, without any other system being put in the way or elevated privileges being granted.

Build another page on your dashboard and this time create and add a report that just displays some info about your filtered dimension. Attach this to a filter based on the Analysis Services dimension (I tend to use named sets) and now you will only see the members that match your preference selection.

Now to tidy up that hierarchy a bit: all those parent members are just getting in the way and making navigation more complex than it needs to be. Pay a visit to Boyan Penev’s blog and take a look at the post on Filtering Unneeded Dimension Members in PerformancePoint Filters. By basing the source query or named set for your filter on the logic in that post, you get just the top-most level necessary, which makes the filter a hell of a lot neater, with the added bonus of the top level member becoming the default when you first access it.


There’s a lot going on in this post and unfortunately I haven’t had the chance to package it up into something I can redistribute easily, but if there are any questions then ask away and I’ll help if I can.

Thursday, June 04, 2009

A Little Something About PPS Filters (part 2)

Here I want to go through what happens when you click Apply on a PerformancePoint filter. First, let’s look at the simple OLAP based filter that I put together for the last post.


On my blank dashboard page with only a single filter, I click Apply and my selection is then displayed, so it looks like the filter has registered the change.


The problem is that when I refresh the page, PPS doesn’t remember my last selection and returns the All member. This isn’t what PPS filters are supposed to do, but the reason is simple: what’s the point of having a filter if there’s no report connected to it?

After putting a simple Reporting Services report on the page and trying the filters again: no problems. I can close the browser, log out, reboot my machine; no matter what you do, the parameter selection is remembered. So let’s ask the question, how?

When a user accesses a report-linked parameter, their selections are stored in the PPS monitoring database in a table called ParameterValues. This table consists of a filter ID (a GUID as a string), the user’s login credentials, the last update time and finally a column called SerializedXml.

SerializedXml is an xml typed field that contains a user’s actual selections for that particular filter. Select * from the table in Management Studio, click on one of the values in the SerializedXml column and you can see the xml displayed. Ignore the schema tags and look lower down in the xml document to the elements called ‘MyDataTable’.

  <MyDataTable>
    <Key>{a GUID appears here}</Key>
    <DisplayValue />
  </MyDataTable>
  <MyDataTable>
    <Key>{a GUID appears here}</Key>
    <DisplayValue />
  </MyDataTable>

This is where one particular difference between OLAP and tabular filter appears. Look at a the SerializedXml value for an OLAP based filter and you’ll see a GUID appear in the key element. This would seem to be the dimension property MEMBER_GUID but I haven’t been able to confirm this on Analysis Services 2008 but it seems logical enough.

When looking at the SerializedXml for a filter based on tabular data the differences are quite clear.

  <MyDataTable>
    <LocationID>{member key}</LocationID>
    <LocationDesc />
  </MyDataTable>
  <MyDataTable>
    <LocationID>{member key}</LocationID>
    <LocationDesc />
  </MyDataTable>

With tabular filters, the key assigned within the parent / child relationship is the value stored in the xml, and it’s possible to query it using SQL Server’s xml query features. The example above can be queried using the following syntax;

SELECT d.Members.query('LocationID')
  AS MemberKey
FROM ParameterValues
CROSS APPLY SerializedXml.nodes('//MyDataTable')
    AS d(Members)
WHERE [Login] = @Login
AND ParameterUniqueName = @FilterID

By providing the login and filter ID parameters we can get the selections for a single user’s filter. The result set is again xml based, returning a row for each filter selection.


The element names and tags can be cleared away easily enough with some string manipulation, or if anyone knows how the SQL xml query could be tweaked to return just the values themselves I would be interested to hear.
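One option worth trying (a sketch, untested against this exact schema) is the xml value() method, which returns the typed value of the element rather than the element itself, so there are no tags to strip:

```sql
SELECT d.Members.value('(LocationID)[1]', 'varchar(255)')
  AS MemberKey
FROM ParameterValues
CROSS APPLY SerializedXml.nodes('//MyDataTable')
    AS d(Members)
WHERE [Login] = @Login
AND ParameterUniqueName = @FilterID
```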

In the next post I’m going to show how this can all be put together and applied to real world scenarios.

This Week I Have Been Mostly Listening To…


Kitsuné Maison Compilation 7 – Various Artists

Spotify Playlist

Wednesday, June 03, 2009

A Little Something About PPS Filters (part 1)

If you have a lot of members, say in the thousands, that you want to display in a PPS filter then you’re going to have problems. The current version of PPS doesn’t handle them too well, and I surmise that it’s one of the reasons there’s a configurable setting in the web.config allowing you to set the maximum number of members returned in a filter.

Things are compounded when you have hierarchical filters, as the number of members is increased by parent members. The obvious way around this, off the top of your head, would be cascading filters, but these have their own issues; one in particular is that I want the user to be able to use all of the hierarchy in the reports, from the All level down to the leaf level members.

Hopefully the architecture of PPS v2 will have corrected this little foible, but in the meantime we have to work around it. Over the next few posts I’m going to talk about the filters and demonstrate a potential workaround, but I wanted to start off with a bit of an overview of the two types of filters available in PPS: OLAP based and tabular based.

The majority of people will have put together an OLAP data based filter. It’s very simple: you can choose to select the members directly from the dimension, use a named set or use an MDX query. I would hazard a guess that most developers will use either named sets or MDX queries.

I’ll use a simple MDX query to generate a set from a dimension I’ve got called “Towns”. This dimension contains a single user hierarchy that looks like this;

  • Country
    • Region
      • County
        • Town Prefix
          • Town

As I only want to display data to the county level in my filter I can set this up with this MDX;

DESCENDANTS(
   [Town].[Town].[All] // The top level member
  ,[Town].[Town].[County] // down to the County level
  ,SELF_AND_BEFORE) // including all members in between

When the filter is built and deployed you’ll get something that looks like this;


To achieve the same with a tabular filter you need to put together a parent child structured data set. As my source table has individual fields for each of the hierarchy levels, this needs to be created manually using a series of union statements. For example;

SELECT
   'All' AS Location 
  ,'All' AS ParentLocation
UNION
SELECT DISTINCT
   Country AS Location
  ,'All' AS ParentLocation
FROM DimTown -- source table name assumed
UNION
SELECT DISTINCT
   Region AS Location
  ,Country AS ParentLocation
FROM DimTown
-- …and so on, down to the level required

Now, none of what has been written above is new, earth shattering or of any particular benefit to anyone apart from those who have never touched PPS before, but I wanted to mention the distinction between OLAP and tabular data sourced filters, as it will become quite important in the next post around how PPS remembers what your last selection was.

Tuesday, March 10, 2009

This Week I Have Been Mostly Listening To…

The Airing of Grievances – Titus Andronicus

Spotify Playlist

Wednesday, March 04, 2009

Crash, Bang, WOLAP..

I’ve been working on a SQL Server 2008 project that uses proactive caching on some partitions to speed up the availability of the data to the end user.

For all the things that have been said and written about ROLAP in the past, apart from some issues around traces I’ve found it to behave itself pretty well. That is, until now.

Since moving into a test environment we’ve been getting a lot of these errors;

The description for Event ID 22 from source MSSQLServerOLAPService cannot be found. Either the component that raises this event is not installed on your local computer or the installation is corrupted. You can install or repair the component on the local computer.

If the event originated on another computer, the display information had to be saved with the event.

The following information was included with the event:

Internal error: An unexpected exception occurred.

the message resource is present but the message is not found in the string/message table

Actually, these errors would come thick and fast, every couple of seconds or so, until the Analysis Services service decided to stop. Profiling the Analysis Server showed the problem from the OLAP server’s point of view, where it seemed to happen when the aggregations were being merged;

Internal error: An unexpected exception occurred. Errors in the OLAP storage engine: An error occurred while processing the 'Current Day' partition of the 'Fact Table' measure group for the 'AS2008 Bug Test' cube from the AS2008 Bug Test database. Server: The operation has been cancelled.

I took the OLAP database that I was using and broke it down until I had the minimum amount of code and objects left in there. This meant stripping out the calculations, all of the measure groups apart from one, and most of the dimensions. Finally I was left with a cube that had two dimensions (one of which was a date dimension) and a single measure.

Within the measure group were two partitions: one standard MOLAP, the other a real-time HOLAP partition looking at the same fact table but a different date range (current day versus everything else). Now here’s the thing: while there is data in the HOLAP partition there are no errors and no crashing. With the HOLAP partition not processed, again no errors. But as soon as the HOLAP partition was processed and had no data to retrieve, there would be errors galore, resulting in the inevitable crash.

A further step back showed that this would all behave as long as no aggregations were applied to the HOLAP partition. As soon as a single aggregation was put on the partition and the partition processed, the errors would appear until either data was inserted into the source table (to subsequently be picked up in the HOLAP partition) or Analysis Services crashed.

Curiously, converting the partition to real-time ROLAP didn’t deliver any errors, with or without aggregations, and the same is true for any of the MOLAP types of storage, but as soon as it moved back to HOLAP the errors returned.

Some extra detail: I’ve observed this on both 32-bit and 64-bit systems. The 64-bit server this was first spotted on has cumulative update package 3 for SQL Server 2008 installed and the 32-bit is RTM, so it looks like the issue is throughout the codebase. We haven’t gone as far as installing the SP1 CTP due to environment constraints. This looks like it might be a bit of a bug, so I’m in the process now of packaging up something to send to Microsoft.

Tuesday, March 03, 2009

This Week I Have Been Mostly Listening To…



Think Before You Speak - Good Shoes

Spotify Playlist

Wednesday, February 25, 2009

Wanted: Data Dude-esque Features for the Rest of the SQL Server Stack

I’m a big fan of Visual Studio Team Developer Edition for Database Professionals (a mouthful..!). But what has practically always taken the sheen off my BI deployments is the clunky way in which the other components get deployed. I like the idea of being able to perform a one-click deployment, and so do many of the production administrators I deal with.

DB Pro really provides complete control over all aspects of database development and deployment, so why can’t I have the same thing for my Analysis Services deployments? Between the deployment features of Visual Studio and the Analysis Services Deployment Wizard, everything appears to be there to do exactly what I would like; all I’m missing is that close-knit development environment integration.

Reporting Services suffices to a certain extent, apart from being able to control the connection strings in data sources depending on the project / solution configuration. Yes, I know that you can re-deploy the data sources or use the scripting capabilities, but that’s yet another external mechanism when Data Dude (now in its second iteration) shows exactly how it should be done.

SSIS deployment is something I’ve never been particularly fond of but you can build the files into the folder of your choice so I’ll swallow that…

Maybe there’s something coming in SQL Server 2010 / Visual Studio 2010. One can always hope…

Tuesday, February 24, 2009

SQL Server 2008 SP1 CTP…

… is out and available from all good record stores, or if you don’t have a good record store nearby you can always try here.

This Week I Have Been Mostly Listening To…

The Golden Spike – Sky Larkin

Tuesday, February 17, 2009

The Mystery of the Disappearing Packet

No it’s not Halloween come early but a story straight from “Tales of the Unexpected”.

So we have a scaled out PerformancePoint architecture with Windows SharePoint Services on one box, the PPS Monitoring services on another and Analysis Services 2008 and SQL Server 2008 on separate machines. In this environment the dashboards deployed to the SharePoint server talk directly to the Analysis Server but in this case they didn’t.

Because there is a firewall between the SharePoint server and the other servers in this environment my immediate thought was the firewall, but after a) checking the firewall rules were in place, b) checking Analysis Services was listening on the default port, and c) confirming that we could telnet on port 2383 from the SharePoint box to the Analysis Services machine, there was still no indication as to the issue.

Plugging in Microsoft’s Network Monitor (great tool; I thoroughly recommend getting it and using it), I could see the request coming in from the client but nothing going out at all on port 2383. I knew there needed to be something going out on that port, as I had checked all of this with the same tool on a similarly configured environment.

Next I moved on to the OLEDB providers themselves. As this is totally SQL Server 2008, it appeared all of the correct drivers had been installed, but as you notice when configuring PPS Monitoring, the installation does like to have the ADOMD and SQL Client components for AS 2005 and SQL 2005. After consuming some information on the interweb about SQL Server 2008 / PPS SP2 installations (here and here), I thought I would give the latest 2005 components a try. To my amazement, without even a reboot, packets started to trickle to my Analysis Server on port 2383.

Now I will leave you with a blast from the past. Just one thing to note for anyone outside of the UK, this is actually how everyone dances over here. On Saturday nights the Ritzys and Hippodromes up and down the country are filled with this kind of thing.


Client Tool Curiosities

Had a bit of a weird one over the last couple of days, where a cube I had built was demonstrating different behaviour in a variety of tools. Debugging the cube script in Visual Studio provided the values that I expected from the script, but when I then viewed the same script in ProClarity I got very different behaviour.

I then tried the same thing in Management Studio and got a different set of numbers, and then finally Excel 2007 provided me with slightly different behaviour again. So how did all of this come about? Well, firstly there was an error in my calc: I was adding an integer value to a column where nulls occurred, but this wasn’t producing an error in the Visual Studio debugger. In fact, even though the debugger numbers appeared correct, it wasn’t evaluating my script in the way I expected.

ProClarity wouldn’t show the error, which was fine, but when I put the same MDX that ProClarity was generating into Management Studio the correct numbers appeared. Browsing the cube in Management Studio produced a completely different set of values, but that was down to me setting filters in the browser rather than navigating to the values I wanted.

Finally I gave Excel 2007 a go and actually saw the error “#VALUE” which kind of gave away what the problem was and where it was.

So there’s a moral to this story in there somewhere which I’ll leave to yourselves. But I’ve taken away from this the saying that if the number looks right and the number smells right then it’s probably right, maybe, in the right type of tool….. perhaps!

Thursday, February 12, 2009

Friday, February 06, 2009

How Many Departments Does It Take

Sounds like the beginning of a music hall gag and when you think about it, it could well be.

Discussing the merits of the Microsoft BI strategy in the wake of that PPS announcement with Mark (so as not to show any type of favouritism, more comment can be found…)

  • here (Chris Webb)
  • here (Adatis)
  • and here (SSAS Info)
  • but surprisingly little here (Microsoft PPS blog)

… the conversation moved towards things as simple as charting. Now, as I see it, charts, graphs and the general graphical representation of data are pretty important for what we do in business intelligence, and some companies understand this and specialise in providing just that. Looking at what Microsoft do with charting, off the top of my head we have;

  • Reporting Services (Licensed from Dundas?)
  • Excel
  • PerformancePoint
  • ProClarity

All of this sits under one group, and three of them are focussed on BI. So I again ask the question / present the gag: how many departments does it take? Do I really need to fudge my ProClarity chart to look like a PPS chart because I can’t set the colours in the PPS chart? Should it really be necessary to mock up Excel micro charts in Reporting Services?

The overall scattergun approach here does not lend itself to a cohesive strategy, and if the PPS announcement and this simple example of charting are anything to go by, we can’t expect anything soon. Come on Microsoft, SQL Server and Analysis Services are bloody fantastic pieces of software. Let’s get the front end strategy working as well as the back end does.

Given Half a Chance, This Weekend I Will Be Going All Retro And Playing…


R-Type Dimensions - XBLA

Proactive Caching Weirdness

In both 2005 and now 2008, I have heard from other people, and experienced myself, a problem where proactive caching gets itself into a bit of a loop. Running Profiler against the Analysis Server you will see a number of trace notification received messages, and errors that don’t contain anything useful.

Whilst I haven’t been able to find a solution for this, I have got a workaround, and that is to change the object ID of the table that the trace is set against. Un-processing the partition that is causing the issue, moving the data to an identical table and changing the names appears to get round the problem, although it’s a little concerning that this behaviour is common to both SQL Server 2005 and 2008.
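As a sketch of that table swap (FactSales is a hypothetical fact table name; un-process the affected partition first and re-process it afterwards):

```sql
-- Copy the data into a structurally identical table;
-- the copy gets a new object ID.
SELECT *
INTO FactSales_New
FROM FactSales;

-- Swap the names so the partition's source query still resolves,
-- but the notification trace is now created against a different object ID.
EXEC sp_rename 'FactSales', 'FactSales_Old';
EXEC sp_rename 'FactSales_New', 'FactSales';
```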

Thursday, February 05, 2009

PerformancePoint Dashboard Performance

I’m currently working on a project that is mixing PPS, Reporting Services 2008, ProClarity Analytics Server and Analysis Services 2008 with a lot of proactive caching….. Nice…!

The development environment we’re using does the job, but I had been experiencing some really wild performance issues. One dashboard deployment would be fine and then the next it would run like a dog. This drove me down a path of tuning every aspect of the solution, so I thought I would share a few of my findings up here for posterity.

The cube has a number of measure groups that in turn have a number of partitions. We’re partitioning by month due to the data volumes, with a current day partition that is HOLAP on each of the measure groups. I’m using SQL Server notifications (AS sets a trace against the target table), so everything is pretty much automatic.

Dashboards in the solution consist of a mix of Reporting Services, PAS and PPS native reports, all joined together using PPS filters. First off, the filter queries themselves: they weren’t dynamic, so we went down the route of putting some server side named sets in and just calling them from Dashboard Designer. Nice and simple so far, but this is where the performance problems started to manifest.

With a simple dashboard up and a small number of members in the filter named set, the dashboard was taking far too long to return. There were four filters on this particular page. Looking at Profiler, I found that a refresh of the page after a database update (remember, proactive caching is involved here) was causing a query to be fired against Analysis Services, and every partition in a single measure group would be read, four times.

The measure group in question happens to be the one that contains the cube’s default measure. This I considered to be a bad thing, as the production environment would have a lot more data and a lot more partitions to be read, four times, whenever a dashboard page was opened with filters.

To confirm the behaviour I created a dummy partition with a dummy measure in it and made that the cube’s default measure. Running the same test showed four queries being fired, but this time the partition being read was tiny and the filter response in the dashboard immediate. Un-processing the partition, though, caused everything to behave correctly without, obviously, the dummy measure being read.

So, point 1: be careful where you put your default measure. Next up are the Reporting Services reports. Creating parameters will create hidden data sets in the report that are there to support the available values list in the report parameters. Whenever the report is executed, it’s going to want to get the data for those data sets to populate the lists and multi-selects. If, as is the case on this project, the reports are designed to be accessed purely through dashboards using PPS filters, there is little point in having these data sets, so we removed them.

First, for each parameter, set the available values to none, then remove the related data sets (you will most likely have to show the hidden data sets) and redundant data sources. Although these data sets are small, the Reporting Services reports on the dashboard were observably more responsive.

The final overall improvement is to reduce the number of members in the filters and make sure that you have selected the right type of filter for the job. In the case of this project we’ve avoided multi-selects and opted for single select lists where possible, as they seem to take less time to render on the client. Improving the performance of the filters is ultimately going to come down to the number of members you can select from. With SP1 of PPS you can now (I think) have cascading filters, but in an OLAP environment where you want to select a member at any level within a hierarchy that’s not useful.

Try using clever MDX or security to reduce the member count, it really makes a lot of difference.
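As one example of the “clever MDX” route (a sketch; the measure name is an assumption, the Town dimension comes from the earlier filter posts), a server side named set that only returns counties that actually have data keeps the filter member count down:

```mdx
CREATE SET CURRENTCUBE.[Counties With Data] AS
    NONEMPTY(
        [Town].[County].[County].MEMBERS
       ,[Measures].[Sales Amount] // measure name is an assumption
    );
```

Point the PPS filter at this named set instead of the raw attribute hierarchy and the member list only contains counties worth showing.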