
Wednesday, February 25, 2009

Wanted: Data Dude-esque Features for the Rest of the SQL Server Stack

I’m a big fan of Visual Studio 2005 Team Edition for Database Professionals (what a mouthful!). But what has practically always taken the sheen off my BI deployments is the clunky way in which the other components get deployed. I like the idea of being able to perform a one-click deployment, and so do many of the production administrators I deal with.

DB Pro provides complete control over all aspects of database development and deployment, so why can’t I have the same thing for my Analysis Services deployments? Between the deployment features of Visual Studio and the Analysis Services Deployment Wizard, everything appears to be there to do exactly what I would like, so all I’m missing is that close-knit development environment integration.

Reporting Services suffices to a certain extent, apart from being unable to control the connection strings in data sources depending on the project / solution configuration. Yes, I know you can re-deploy the data sources or use the scripting capabilities, but that’s yet another external mechanism when Data Dude (now in its second iteration) shows exactly how it should be done.

SSIS deployment is something I’ve never been particularly fond of, but you can at least build the files into the folder of your choice, so I’ll swallow that…

Maybe there’s something coming in SQL Server 2010 / Visual Studio 2010. One can always hope…

Thursday, February 05, 2009

PerformancePoint Dashboard Performance

I’m currently working on a project that is mixing PPS, Reporting Services 2008, ProClarity Analytics Server and Analysis Services 2008 with a lot of proactive caching….. Nice…!

The development environment we’re using does the job but I had been experiencing some really wild performance issues. One dashboard deployment would be fine and then the next it would run like a dog. This then drove me down a path of tuning every aspect of the solution so I thought I would share a few of my findings up here for posterity.

The cube has a number of measure groups that in turn have a number of partitions. We’re partitioning by month due to the data volumes, with a current-day HOLAP partition on each of the measure groups. I’m using SQL Server notifications (Analysis Services sets a trace against the target table), so everything is pretty much automatic.

Dashboards in the solution consist of a mix of Reporting Services, PAS and PPS native reports, all joined together using PPS filters. First off, the filter queries themselves: they weren’t dynamic, so we went down the route of putting some server-side named sets in and just calling them from Dashboard Designer (a sketch of the idea follows below). Nice and simple so far, but this is where the performance problems started to manifest.
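For anyone who hasn’t done this before, a server-side named set lives in the cube’s MDX script and can then be referenced by name from Dashboard Designer. A minimal sketch, with hypothetical cube, hierarchy and set names:

-- Defined in the cube's MDX script; the PPS filter then just
-- references [Filter Stores] as its set expression.
CREATE SET CURRENTCUBE.[Filter Stores] AS
    [Store].[Store Hierarchy].[Region].MEMBERS;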

With a simple dashboard up and a small number of members in the filter named set, the dashboard was taking far too long to return. There were four filters on this particular page. Looking at Profiler, I found that refreshing the page after a database update (remember, proactive caching is involved here) caused a query to be fired against Analysis Services, and every partition in a single measure group would be read, four times.

The measure group in question happens to be the one that contains the cube’s default measure. This I considered to be a bad thing, as the production environment would have a lot more data and a lot more partitions to be read, four times, whenever a dashboard page with filters was opened.

To confirm the behaviour I created a dummy partition with a dummy measure in it and made that the cube’s default measure. Running the same test showed four queries being fired, but this time the partition being read was tiny and the filter response in the dashboard was immediate. Un-processing the partition still left everything behaving correctly, with the dummy measure obviously not being read at all.
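The likely explanation is plain MDX semantics: a query that never mentions a measure is resolved in the context of the cube’s default measure, so even a filter query shaped like the hypothetical one below lands on the default measure’s measure group:

-- No measure is specified anywhere, so Analysis Services
-- evaluates this against the cube's default measure and
-- touches that measure's measure group.
SELECT
    [Store].[Store Hierarchy].[Region].MEMBERS ON COLUMNS
FROM [Sales]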

So, point 1: be careful where you put your default measure. Next up are the Reporting Services reports. Creating parameters will create hidden datasets in the report that are there to support the available-values lists in the report parameters. Whenever the report is executed it’s going to want to get the data for those datasets to populate the lists and multi-selects. If, as is the case on this project, the reports are designed to be accessed purely through dashboards using PPS filters, there is little point in having these datasets, so remove them.

First, for each parameter, set the available values to none; then remove the related datasets (you will most likely have to show the hidden datasets) and any redundant data sources. Although these datasets are small, the Reporting Services reports on the dashboard were observably more responsive.

The final overall improvement is to reduce the number of members in the filters and to make sure you have selected the right type of filter for the job. On this project we’ve avoided multi-selects and opted for single-select lists where possible, as they seem to take less time to render on the client. Improving the performance of the filters ultimately comes down to the number of members you can select from. With SP1 of PPS you can now (I think) have cascading filters, but in an OLAP environment where you want to select a member at any level within a hierarchy, that’s not useful.

Try using clever MDX or security to reduce the member count; it really makes a lot of difference.
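As an illustration only (hypothetical names again), the earlier filter set could be trimmed to the members that actually carry data rather than exposing the whole level:

-- Restrict the filter to regions with sales in the current
-- fiscal year rather than every region in the hierarchy.
CREATE SET CURRENTCUBE.[Filter Stores] AS
    NONEMPTY(
        [Store].[Store Hierarchy].[Region].MEMBERS,
        {([Measures].[Sales Value], [Date].[Fiscal Year].&[2009])}
    );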

Tuesday, October 09, 2007

Reporting Services and Virtual Directories, a Warning

Just a very quick post to make people aware of a little thing that happens with Reporting Services when you install it on a server alongside another application under the same web site; for example, you install it on a SharePoint server but don't allow SharePoint to control the report server, as when you are using Performance Point Server 2007 CTP4 with Reporting Services (mail me if it's confusing).

We found that Report Manager (http://servername:1234/reports) worked correctly, but any access via the web service (http://servername:1234/reportserver) would return an error along the lines of...

Microsoft.ReportingServices.UI.WebControlConnection,
ReportingServicesWebUserInterface, Version=9.0.242.0, Culture=neutral,
PublicKeyToken=89845dcd8080cc91 does not implement
IReportServerConnection or could not be found

Spurious to say the least. The solution for us was to remove a couple of lines from within the <appSettings> element of the web.config file in the Reporting Services virtual directory...

<add key="ReportViewerServerConnection"
     value="Microsoft.ReportingServices.UI.WebControlConnection, ReportingServicesWebUserInterface, Version=9.0.242.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91" />
<add key="ReportViewerTemporaryStorage"
     value="Microsoft.ReportingServices.UI.ReportViewerTemporaryStorage, ReportingServicesWebUserInterface, Version=9.0.242.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91" />

This gets the web service running, but guess what: reports via Report Manager won't work anymore.

The problem, I am reliably informed by 'tinternet, is that web.config files are inherited in IIS; the Reporting Services web service inherits its settings from the Report Manager web.config, which means the setting you removed is needed by Report Manager but breaks the web service.

The solution is to add the values back into the web.config for Report Manager, but to make sure that the web.config for the web service virtual directory contains...

<appSettings>
    <clear/>
</appSettings>

Hey presto, both the report manager and the web service now function correctly.

Apologies if my descriptions of web things are a little on the vague side but I'm reading it as it happens.

BIDS and TFS

I'm a big fan of Team Foundation Server and how it can be used to better organise analysis, development and delivery. I've been especially impressed with a number of the features of Data Dude (or Visual Studio 2005 Team Edition for Database Professionals, for the full mouthful), certainly since SR1, and how, with some work, deployments can be made a lot more efficient.

Let me give you an example. I have multiple environments: dev, UAT and production. All of them have the same code base but different configuration settings; for example, we will have a table containing configuration settings or source file locations, and these are all likely to change depending on what we are testing or deploying. Data Dude allows you to embed variables in your pre- and post-deployment scripts, meaning I don't have to change anything when it comes to things like allowing accounts read access to data on dev that they should not have on prod.

All pretty simple really and well worth installing for these kind of features (notice how I gloss over the cost....!)

Now to the point: "Visual Studio 2005 Team Edition for Business Intelligence Developers" has a nice ring to it, but stick it into Google and, at the time of writing, nothing comes back. I can think of a number of scenarios where non-intrusive deployment would be useful without having to rely on post-deployment XMLA, scripts using rs.exe, or even the wonderful package deployment wizard.

Some of the things I've named may not seem entirely reasonable, but should you really have to go to a separate utility every time you want to release code that needs different settings depending on the server, the environment, or whether it's a 64-bit deployment? Analysis Services seems a prime candidate for this kind of behaviour, as user access is defined in roles, but I don't want to have to define accounts within role memberships and then remove them, or go to the Analysis Services Deployment Wizard and define different configuration files, when I've invested in a nice shiny Team Foundation Server and trained everyone up on its use.

Anyway, I would be most interested in hearing any thoughts on the above subject, as this kind of thing is taking up a lot of time, and more and more clients are jumping onto the TFS / agile bandwagon; one even demanded to know why this can be done with the databases and not the cubes. You see the point..!


Friday, March 16, 2007

rsInternalError - Now This One's a Doozy...!

If you're running Reporting Services and keep getting random rsInternalErrors, which may or may not be accompanied by log entries along the lines of "There is no data for the field at position n", then before you start to rip apart your reports looking for bugs, check the make of the processors in your server.

There is a little problem-ette that manifests itself as these kinds of errors when you have AMD processors in your machine. We have two servers at a client's site with 64-bit Opteron chips that were both demonstrating these errors across different reports, data sources and pretty much anything else you can imagine. After a little hunt around the information hyper global mega net we found some newsgroup postings pointing to this Microsoft Knowledge Base article:

http://support.microsoft.com/default.aspx/kb/895980/en-us

Adding the /usepmtimer switch to the boot.ini file appears, for now, to have cured the problem, which had been causing very simple reports to fail 20-30% of the time on an under-stressed production server.
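For reference, the switch goes on the end of the operating system entry in boot.ini and takes effect after a reboot; the ARC path below is only an example, and yours will almost certainly differ:

[operating systems]
multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Windows Server 2003" /fastdetect /usepmtimer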

After my recent problems with 64-bit SSIS this was the last thing I needed, but suffice to say my cup of joy was overflowing as a dev server that could previously produce an rsInternalError on command has worked flawlessly for several hours. If anything changes I'll be sure to mention it.

Tuesday, July 12, 2005

2D 3D

Last week was a titanic one for me, to say the least. My ongoing battle shoehorning Analysis Services data into Reporting Services continues, with another 40 reports completed to add to the others finished so far on the project I've been working on.

To try and make my life a little easier, I decided to spend some time working on a way of easily tweaking each report, to minimise the amount of work necessary to complete the report suite.

A bit of background: the report I'm working on has three alternate hierarchies in a store dimension, each of which needs to be drillable (whilst displaying different subreports at each level of each alternate hierarchy). On another axis is a crossjoin of a subset of the measures and what could best be compared to a product dimension. Perhaps the better explanation is to try and visualise the report...

A standard matrix, with the first row group displaying a list of the children of the current store hierarchy parameter value. Row group 2 is a list of the children of the current product hierarchy parameter, unioned with a list of measures, which effectively gives you the sales value for product subcategories A, B and C, and then things like number of stores and number of units sold for the parent category of A, B and C.

Finally, the column group was driven from a named set containing the previous 14 fiscal periods. As I previously said, the report was drillable by store but also by product. This meant that what the user saw as a single report was in fact built up of 40 reports, linked and passing parameters between themselves, to cater for every level in every dimension, multiplied by 3 due to the alternate hierarchies.
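To make that shape concrete, the dataset query behind one of these reports would have looked something along these lines; every object name here is hypothetical, and the real queries were of course parameterised per report:

-- Rows: children of the current store member, crossed with
-- (sales value per product child) unioned with (store and unit
-- counts at the parent product). Columns: the named set of the
-- previous 14 fiscal periods.
SELECT
    NON EMPTY [Previous 14 Fiscal Periods] ON COLUMNS,
    NON EMPTY
        [Store].[Regional].[North].Children *
        UNION(
            {[Measures].[Sales Value]} * [Product].[Category].[A].Children,
            {[Measures].[Store Count], [Measures].[Units Sold]}
                * {[Product].[Category].[A]}
        ) ON ROWS
FROM [Sales]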

What I ended up doing was removing the 'database' fields from the matrix and replacing them with calculated fields, meaning I didn't really have to touch the matrix for the changes, only the calculated fields for each separate report.

This may seem like going the long way round, but for this requirement and this set of data it was quite literally the only solution, and it meant I was producing reports, complete with thorough testing, in a matter of minutes. Anyway, I just thought I would share that one.

Thursday, May 12, 2005

SQL PASS 2005 - Reporting Services Enhancements

Ah, Reporting Services. Both a blessing and a curse to me in recent times. The first thing a customer always asks me to do with this tool is something it cannot do out of the box. This isn't really a problem, but it can be time consuming when you're creating your suite of reports for a client (say two dozen) and they decide, at the end of development, that they really would like to be able to sort one of the columns. Fortunately, Microsoft has taken a hell of a lot on board since the first version of Reporting Services saw the light of day, and the features that made this an incredibly powerful reporting client have been enhanced twofold: firstly as an improved product in its own right, and secondly, when combined with Analysis Services 2005, as one of the best Microsoft OLAP client reporting tools on the market.

When people saw the first version of SQL Server 2000 Reporting Services, the first thing said was probably something along the lines of "where's the MMC snap-in?". Visual Studio is a vastly better environment for this kind of development, and thank the lord MS decided to continue this throughout the toolset (cue sandals and beards being thrown from the old skool: burn him, BUUURRNN HIMMMM..!). Developers know what they are going to get with Visual Studio, and people in the BI and DBA worlds are going to get more benefit from an intuitive environment that stabilises and speeds up dev time tremendously.

OK, less on the environment and more on the presentation and the product. Dave Wickert from the SQL Server product unit was the Microsoft employee working the crowd for this one. I managed to speak to him briefly the evening before, and when he said all of his demonstrations were to be based on OLAP data, the words 'Christ' and 'Thank' may have slipped out. I’m never going to be able to go over all the enhancements, but I can certainly give an overview of the key areas that will be big in this version.

Right: leave here, go and write some MDX against an OLAP data source, stick it in a matrix, and after you’ve watched some paint dry, look at your report. OK, that’s not at all fair, but the current version is not as responsive as I would like when it comes to multidimensional data. Trying to get the correct data to appear depending on which axis you put a dimension on, and then the subtleties of the order of the dimensions on that axis, was a pain. On a scale of 1 to 10, 10 being childbirth, where would you place writing MDX over multiple hierarchies for display in a flattened dataset? I may use that as an interview question.

2005 Reporting Services with 2005 Analysis Services… fixed. How the new version of RS interacts with AS 2000 is something I shall be investigating.

The next things I want to mention are the little things: those featurettes that brought a wave of positive emotion to the room when Dave mentioned them. Multiple selections on parameters, column sorting by measure or header, the server trip for drill-down (I need to get some confirmation on this one, but it has massive implications in low-bandwidth scenarios), deployment via SQL Management Studio, floating column headers, and so on. These are the areas where users have been listened to, which in itself is very refreshing, and I know from experience that some of these features could have saved me days of development in recent times.

Lastly, there is a very big new feature which I was lucky enough to glance at early last year: Report Builder. This new tool is driven from a pre-defined data model deployed to the report server for the users. What does this mean? Well, if you’re familiar with Business Objects Universes, think of it as similar, sort of. You take a model, create relationships, calculated fields, etc. and put it on the server. Bob from accounts doesn’t need to understand that one record in the Invoice table has a one-to-many relationship with records in the Invoice Line table; he just sees that he can select Invoice and Invoice Line from a list and drag them onto a report layout of his choice.

The tool has a few more tricks up its sleeve, and I’ll leave the statements “an understanding of the semantic model” and “infinite drill” for another time. Microsoft is quick to state that this is not a replacement for the plethora of reporting tools out there but an extension of the interface. Either way, it should open Reporting Services up to another user community, which for business intelligence is nothing but good.