Wednesday, February 25, 2009

Wanted: Data Dude-esque Features for the Rest of the SQL Server Stack

I’m a big fan of Visual Studio Team Developer Edition for Database Professionals (mouthful..!). But what has practically always taken the sheen off my BI deployments is the clunky way in which the other components get deployed. I like the idea of being able to perform a one-click deployment, and so do many of the production administrators I deal with.

DB Pro really provides complete control over all aspects of database development and deployment, so why can’t I have the same thing for my Analysis Services deployments? Between the deployment features of Visual Studio and the Analysis Services Deployment Wizard, everything appears to be there to do exactly what I would like; all I’m missing is that close-knit development environment integration.
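
In fairness, the Deployment Wizard can already be driven from the command line, so a rough one-click step can be bolted onto the build today. Below is a quick sketch of the idea; the install path, the .asdatabase file name and the Python wrapper are all illustrative assumptions rather than any official integration.

import subprocess
from pathlib import Path

# Paths and names below are assumptions for illustration; adjust to your own
# build output and SQL Server 2008 installation.
DEPLOY_EXE = Path(
    r"C:\Program Files\Microsoft SQL Server\100\Tools\Binn\VSShell"
    r"\Common7\IDE\Microsoft.AnalysisServices.Deployment.exe"
)
ASDATABASE = Path(r"bin\Deployment\MyCube.asdatabase")  # produced by the VS build

# /s:<logfile> runs the wizard silently, picking up the .deploymenttargets and
# .configsettings files that sit alongside the .asdatabase file.
result = subprocess.run([str(DEPLOY_EXE), str(ASDATABASE), "/s:deploy.log"])
print("Deployment exit code:", result.returncode)

Swapping /s for /o:deploy.xmla should write out an XMLA deployment script for the administrators to review instead of deploying directly, which keeps the one-click dream alive for production as well.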

Reporting Services suffices to a certain extent, apart from being able to control the connection strings in data sources depending on the project / solution configuration. Yes, I know you can re-deploy the data sources or use the scripting capabilities, but that’s yet another external mechanism when Data Dude (now in its second iteration) shows exactly how it should be done.
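
For the record, the scripting route I’m grumbling about looks something like the sketch below: push the connection string in as an rs.exe variable per configuration. The configuration names, servers and the UpdateDataSource.rss script are all hypothetical, and the .rss script itself (not shown) is where the shared data source would actually get updated via the Reporting Services web service.

import subprocess

# Hypothetical per-configuration connection strings; names and servers are
# illustrative only.
CONNECTION_STRINGS = {
    "Debug":   "Data Source=DEVSQL01;Initial Catalog=Warehouse",
    "Release": "Data Source=PRDSQL01;Initial Catalog=Warehouse",
}

def deploy_data_source(configuration, report_server_url):
    # UpdateDataSource.rss is a hypothetical rs.exe script (not shown) that
    # would update the shared data source using the ConnectionString variable
    # passed in below.
    subprocess.run(
        [
            "rs.exe",
            "-i", "UpdateDataSource.rss",
            "-s", report_server_url,
            "-v", "ConnectionString=" + CONNECTION_STRINGS[configuration],
        ],
        check=True,
    )

deploy_data_source("Release", "http://reportserver/ReportServer")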

SSIS deployment is something I’ve never been particularly fond of, but you can build the files into the folder of your choice, so I’ll swallow that…

Maybe there’s something coming in SQL Server 2010 / Visual Studio 2010. One can always hope…

Tuesday, February 24, 2009

SQL Server 2008 SP1 CTP…

… is out and available from all good record stores, or if you don’t have a good record store nearby you can always try here.

This Week I Have Been Mostly Listening To…

The Golden Spike – Sky Larkin

Tuesday, February 17, 2009

The Mystery of the Disappearing Packet

No, it’s not Halloween come early, but a story straight from “Tales of the Unexpected”.

So we have a scaled-out PerformancePoint architecture with Windows SharePoint Services on one box, the PPS Monitoring services on another, and Analysis Services 2008 and SQL Server 2008 on separate machines. In this environment the dashboards deployed to the SharePoint server talk directly to the Analysis Services server, but in this case they didn’t.

Because there is a firewall between the SharePoint server and the other servers in this environment, my immediate thought was the firewall. But after a) checking the firewall rules were in place, b) checking Analysis Services was listening on the default port, and c) confirming that we could telnet on port 2383 from the SharePoint box to the Analysis Services machine, there was still no indication as to the issue.
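
For anyone without telnet to hand (it isn’t always installed these days), a few lines of Python do the same basic reachability check. The host name below is just a placeholder.

import socket

# Can we open a TCP connection from the SharePoint box to the Analysis
# Services machine on the default port? The host name is a placeholder.
SSAS_HOST = "ssas-server"
SSAS_PORT = 2383

try:
    with socket.create_connection((SSAS_HOST, SSAS_PORT), timeout=5):
        print("TCP connection to %s:%d succeeded" % (SSAS_HOST, SSAS_PORT))
except OSError as err:
    print("Could not reach %s:%d: %s" % (SSAS_HOST, SSAS_PORT, err))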

Plugging in Microsoft’s Network Monitor (great tool; I thoroughly recommend getting it and using it), I could see the request coming in from the client but nothing going out at all on port 2383. I knew there needed to be something going out on that port, as I had checked all of this with the same tool on a similarly configured environment.

Next I moved on to the OLE DB providers themselves. As this is totally SQL Server 2008, it appeared all of the correct drivers had been installed, but as you notice when configuring PPS Monitoring, the installation does like to have ADOMD and the SQL client for AS 2005 and SQL 2005. After consuming some information on the interweb about SQL Server 2008 / PPS SP2 installations (here and here), I thought I would give the latest 2005 components a try. To my amazement, without even a reboot, packets started to trickle out to my Analysis Server on port 2383.
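
If you want a quick way of checking which MSOLAP OLE DB providers a box actually has registered, something along these lines works. Treat the ProgIDs as an assumption to verify (MSOLAP.3 is generally the 2005 provider, MSOLAP.4 the 2008 one), and note this says nothing about the separate ADOMD.NET install.

import winreg

# Version-specific ProgIDs for the Analysis Services OLE DB provider under
# HKEY_CLASSES_ROOT. Which version maps to which ProgID is an assumption
# worth double-checking on your own build.
PROVIDERS = {
    "MSOLAP.3": "Analysis Services 2005 OLE DB provider",
    "MSOLAP.4": "Analysis Services 2008 OLE DB provider",
}

for prog_id, description in PROVIDERS.items():
    try:
        winreg.CloseKey(winreg.OpenKey(winreg.HKEY_CLASSES_ROOT, prog_id))
        print("%s (%s): registered" % (description, prog_id))
    except OSError:
        print("%s (%s): NOT registered" % (description, prog_id))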

Now I will leave you with a blast from the past. Just one thing to note for anyone outside of the UK, this is actually how everyone dances over here. On Saturday nights the Ritzys and Hippodromes up and down the country are filled with this kind of thing.

Enjoy..!

Client Tool Curiosities

Had a bit of a weird one over the last couple of days where a cube I had built was demonstrating different behaviour in a variety of tools. Debugging the cube script in Visual Studio provided the values that I expected from the script, but when I then viewed the same script in ProClarity I saw very different behaviour.

I then tried the same thing in Management Studio and got a different set of numbers, and then finally Excel 2007 provided me with slightly different behaviour again. So how did all of this come about? Well, firstly, there was an error in my calc: I was adding an integer value to a column where nulls occurred, but this wasn’t producing an error in the Visual Studio debugger. In fact, even though the debugger numbers appeared correct, it wasn’t evaluating my script in the way I expected.
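
The offending calculation looked roughly like the first expression below, and guarding the empty cells is the obvious fix. The measure names are made up and this is only a sketch of the pattern, not the actual cube script.

# Measure names are hypothetical; the point is the empty-cell guard.
BROKEN_CALC = """
CREATE MEMBER CURRENTCUBE.[Measures].[Adjusted Total] AS
    [Measures].[Base Total] + 5;
"""

# Testing for the empty cell first gives every client tool the same answer
# instead of leaving each one to interpret the null in its own way.
GUARDED_CALC = """
CREATE MEMBER CURRENTCUBE.[Measures].[Adjusted Total] AS
    IIF(IsEmpty([Measures].[Base Total]),
        NULL,
        [Measures].[Base Total] + 5);
"""

# Paste the guarded version into an MDX query window in Management Studio
# (or the cube's calculation script) for a sanity check.
print(GUARDED_CALC)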

ProClarity wouldn’t show the error, which was fine, but when I put the same MDX that ProClarity was generating into Management Studio the correct numbers appeared. Browsing the cube in Management Studio produced a completely different set of values, but that was due to me setting filters in the browser rather than navigating to the values I wanted.

Finally I gave Excel 2007 a go and actually saw the error “#VALUE”, which kind of gave away what the problem was and where it was.

So there’s a moral to this story in there somewhere which I’ll leave to yourselves. But I’ve taken away from this the saying that if the number looks right and the number smells right then it’s probably right, maybe, in the right type of tool….. perhaps!

Friday, February 06, 2009

How Many Departments Does It Take

Sounds like the beginning of a music hall gag and when you think about it, it could well be.

Discussing the merits of the Microsoft BI strategy in the wake of that PPS announcement with Mark (so as not to show any type of favouritism, more comment can be found…)

  • here (Chris Webb)
  • here (Adatis)
  • and here (SSAS Info)
  • but surprisingly little here (Microsoft PPS blog)

… the conversation moved towards things as simple as charting. Now, as I see it, charts, graphs and the general graphical representation of data are pretty important for what we do in business intelligence, and some companies understand this and specialise in providing just that. Looking at what Microsoft do with charting, off the top of my head we have:

  • Reporting Services (Licensed from Dundas?)
  • Excel
  • PerformancePoint
  • ProClarity

All of this sits under one group, and three of them are focussed on BI. So I again ask the question / present the gag: how many departments does it take? Do I really need to fudge my ProClarity chart to look like a PPS chart because I can’t set the colours in the PPS chart? Should it really be necessary to mock up Excel micro charts in Reporting Services?

The overall scattergun approach here does not lend itself to a cohesive strategy, and if the PPS announcement and this simple example of charting are anything to go by, we can’t expect anything soon. Come on Microsoft, SQL Server and Analysis Services are bloody fantastic pieces of software. Let’s get the front end strategy working as well as the back end does.

Given Half a Chance, This Weekend I Will Be Going All Retro And Playing…


R-Type Dimensions - XBLA

Proactive Caching Weirdness

In both 2005 and now 2008 I have heard from other people, and experienced myself, a problem where proactive caching gets itself into a bit of a loop. Running Profiler against the Analysis Server you will see a number of “trace notification received” messages and errors that don’t contain anything useful.

Whilst I haven’t been able to find a solution for this, I have got a workaround, and that is to change the object id of the table that the trace is against. Un-processing the partition that is causing the issue, moving the data to an identical table and changing the names appears to get round the problem, although it’s a little concerning that this behaviour is common to both SQL Server 2005 and 2008.
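
For reference, the table swap itself can be scripted along these lines. The server, database and table names are purely illustrative, and you would un-process the affected partition first and re-process it once the swap is done, exactly as described above.

import pyodbc

# Connection details are illustrative; the goal is simply to give the
# relational table a new object_id while keeping its original name.
conn = pyodbc.connect(
    "DRIVER={SQL Server Native Client 10.0};"
    "SERVER=DEVSQL01;DATABASE=Warehouse;Trusted_Connection=yes;",
    autocommit=True,
)
cur = conn.cursor()

# 1. Copy the data into a new table (which gets a new object_id).
cur.execute("SELECT * INTO dbo.FactSales_new FROM dbo.FactSales;")

# 2. Move the old table out of the way and put the copy in its place so the
#    data source view still sees the same table name.
cur.execute("EXEC sp_rename 'dbo.FactSales', 'FactSales_old';")
cur.execute("EXEC sp_rename 'dbo.FactSales_new', 'FactSales';")

# Indexes, constraints and permissions are not carried over by SELECT INTO
# and would need recreating separately.
conn.close()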

Thursday, February 05, 2009

PerformancePoint Dashboard Performance

I’m currently working on a project that is mixing PPS, Reporting Services 2008, ProClarity Analytics Server and Analysis Services 2008 with a lot of proactive caching….. Nice…!

The development environment we’re using does the job, but I had been experiencing some really wild performance issues. One dashboard deployment would be fine and then the next would run like a dog. This then drove me down a path of tuning every aspect of the solution, so I thought I would share a few of my findings up here for posterity.

The cube has a number of measure groups that in turn have a number of partitions. We’re partitioning by month due to the data volumes, with a current-day partition that is HOLAP on each of the measure groups. I’m using SQL Server notifications (AS sets a trace against the target table) so everything is pretty much automatic.

Dashboards in the solution consist of a mix of Reporting Services, PAS and PPS native reports, all joined together using PPS filters. First off, the filter queries themselves: they weren’t dynamic, so we went down the route of putting some server-side named sets in and just calling them from Dashboard Designer. Nice and simple so far, but this is where the performance problems started to manifest.

With a simple dashboard up and a small number of members in the filter named set, the dashboard was taking far too long to return. There were four filters on this particular page. Looking at Profiler, I was finding that a refresh of the page after a database update (remember, proactive caching is involved here) was causing a query to be fired against Analysis Services, and every partition in a single measure group would be read, four times.

The measure group in question happens to be the measure group that contains the cube’s default measure. This I considered to be a bad thing, as the production environment would have a lot more data and a lot more partitions to be read, four times, whenever a dashboard page was opened with filters.

To confirm the behaviour I created a dummy partition with a dummy measure in it and made that the cube’s default measure. Running the same test showed four queries being fired, but this time the partition being read was tiny and the filter response in the dashboard immediate. Even with the dummy partition un-processed, everything still behaved correctly, obviously without the dummy measure being read.
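
As a sketch of what that means in practice: point the cube’s default measure at the deliberately tiny measure, either by setting the DefaultMeasure property on the cube in the designer or with an ALTER CUBE statement in the calculation script. The measure name below is made up, and the statement is the commonly suggested form rather than anything lifted from this project.

# [Dummy Count] and its one-row partition are hypothetical. This is one way
# of changing the default measure; setting the cube's DefaultMeasure property
# in the designer achieves the same thing.
SET_DEFAULT_MEASURE = """
ALTER CUBE CURRENTCUBE
    UPDATE DIMENSION Measures,
    DEFAULT_MEMBER = [Measures].[Dummy Count];
"""

print(SET_DEFAULT_MEASURE)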

So point 1 is: be careful where you put your default measure. Next up are the Reporting Services reports. Creating parameters will create hidden data sets in the report that are there to support the available values list in the report parameters. Whenever the report is executed it’s going to want to get the data for those data sets to populate the lists and multi-selects. If, as is the case on this project, the reports are designed to be accessed purely through dashboards using PPS filters, there is little point in having these data sets, so remove them.

First, for each parameter, set the available values to none, then remove the related data sets (you will most likely have to show the hidden data sets) and any redundant data sources. Although these data sets are small, the Reporting Services reports on the dashboard were observably more responsive.

The final overall improvement is to reduce the number of members in the filters and make sure that you have selected the right type of filter for the job. In the case of this project we’ve avoided multi-selects and opted for single-select lists where possible, as they seem to take less time to render on the client. Improving the performance of the filters is ultimately going to be based on the number of members you can select from. With SP1 of PPS you can now (I think) have cascading filters, but in an OLAP environment where you want to select a member at any level within a hierarchy that’s not useful.

Try using clever MDX or security to reduce the member count; it really makes a lot of difference.
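
As an example of the kind of thing I mean, a server-side named set can restrict a PPS filter to members that actually have data rather than offering the entire attribute hierarchy. The dimension, hierarchy and measure names below are made up; the pattern is the point.

# Hypothetical named set for a dashboard filter: only customers with sales
# in the chosen year, instead of every member of the hierarchy.
FILTER_SET = """
CREATE SET CURRENTCUBE.[Active Customers] AS
    NonEmpty(
        [Customer].[Customer].[Customer].Members,
        ([Measures].[Sales Amount], [Date].[Calendar Year].&[2009])
    );
"""

print(FILTER_SET)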

This Week I Have Been Mostly Listening To...

Nights Out - Metronomy