
Wednesday, March 04, 2009

Crash, Bang, WOLAP..

I’ve been working on a SQL Server 2008 project that uses proactive caching on some partitions to speed up the availability of the data to the end user.

For all the things that have been said and written about ROLAP in the past, apart from some issues around traces I've found it to behave itself pretty well. Well, that is, until now.

Since moving into a test environment we've been getting a lot of these errors:

The description for Event ID 22 from source MSSQLServerOLAPService cannot be found. Either the component that raises this event is not installed on your local computer or the installation is corrupted. You can install or repair the component on the local computer.

If the event originated on another computer, the display information had to be saved with the event.

The following information was included with the event:

Internal error: An unexpected exception occurred.

the message resource is present but the message is not found in the string/message table

Actually these errors would come thick and fast, every couple of seconds or so, until the Analysis Services service decided to stop. Profiling the Analysis Services server showed the problem from the OLAP server's point of view, where it seemed to happen as the aggregations were being merged:

Internal error: An unexpected exception occurred. Errors in the OLAP storage engine: An error occurred while processing the 'Current Day' partition of the 'Fact Table' measure group for the 'AS2008 Bug Test' cube from the AS2008 Bug Test database. Server: The operation has been cancelled.

I took the OLAP database that I was using and broke it down until I had the minimum amount of code and objects left in there. This meant stripping out the calculations, all of the measure groups apart from one, and most of the dimensions. Finally I was left with a cube that had two dimensions (one of which was a date dimension) and a single measure.

Within the measure group were two partitions: one standard MOLAP, the other real-time HOLAP, both looking at the same fact table but covering different date ranges (the current day and everything else). Now here's the thing: while there is data in the HOLAP partition there are no errors and no crashing. With the HOLAP partition not processed, again no errors. But as soon as the HOLAP partition was processed and had no data to retrieve, there would be errors galore, resulting in the inevitable crash.

A further step back showed that this would all behave as long as no aggregations were applied to the HOLAP partition. As soon as a single aggregation was put on the partition and the partition was processed, the errors would appear until either data was inserted into the source table, to subsequently be picked up by the HOLAP partition, or Analysis Services crashed.
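As an aside, checking which of those states you are in is trivial: a quick query scoped to the current day shows whether the HOLAP partition has anything to return. The measure and date member names below are placeholders from my stripped-down test cube (the cube name is the one in the error message above), so adjust to taste.

    SELECT
        [Measures].[Fact Count] ON COLUMNS
    FROM [AS2008 Bug Test]
    WHERE ( [Date].[Date].&[20090304] )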

Curiously, converting the partition to real-time ROLAP didn't produce any errors, with or without aggregations, and the same is true for any of the MOLAP storage modes, but as soon as it moved back to HOLAP the errors returned.

Some extra detail: I've observed this on both 32-bit and 64-bit systems. The 64-bit server this was first spotted on has cumulative update package 3 for SQL Server 2008 installed and the 32-bit machine is on RTM, so it looks like the issue runs throughout the codebase. We haven't gone as far as installing the SP1 CTP due to environment constraints. This looks like it might be a bit of a bug, so I'm now in the process of packaging something up to send to Microsoft.

Tuesday, February 17, 2009

The Mystery of the Disappearing Packet

No, it's not Halloween come early, but a story straight from “Tales of the Unexpected”.

So we have a scaled-out PerformancePoint architecture with Windows SharePoint Services on one box, the PPS Monitoring services on another, and Analysis Services 2008 and SQL Server 2008 on separate machines. In this environment the dashboards deployed to the SharePoint server talk directly to the Analysis Services server, but in this case they didn't.

Because there is a firewall between the SharePoint server and the other servers in this environment, my immediate thought was the firewall. But after a) checking the firewall rules were in place, b) checking Analysis Services was listening on its default port, and c) confirming that we could telnet on port 2383 from the SharePoint box to the Analysis Services machine, there was still no indication as to the issue.

Plugging in Microsoft's Network Monitor (a great tool; I thoroughly recommend getting it and using it), I could see the request coming in from the client but nothing going out at all on port 2383. I knew there needed to be something going out on that port, as I had checked all of this with the same tool on a similarly configured environment.

Next I moved on to the OLE DB providers themselves. As this is an entirely SQL Server 2008 environment it appeared all of the correct drivers had been installed, but as you notice when configuring PPS Monitoring, the installation does like to have the ADOMD and SQL client components for AS 2005 and SQL 2005. After consuming some information on the interweb about SQL Server 2008 / PPS SP2 installations (here and here), I thought I would give the latest 2005 components a try. To my amazement, without even a reboot, packets started to trickle out to my Analysis Services server on port 2383.

Now I will leave you with a blast from the past. Just one thing to note for anyone outside of the UK, this is actually how everyone dances over here. On Saturday nights the Ritzys and Hippodromes up and down the country are filled with this kind of thing.

Enjoy..!

Thursday, February 05, 2009

PerformancePoint Dashboard Performance

I’m currently working on a project that is mixing PPS, Reporting Services 2008, ProClarity Analytics Server and Analysis Services 2008 with a lot of proactive caching….. Nice…!

The development environment we’re using does the job but I had been experiencing some really wild performance issues. One dashboard deployment would be fine and then the next it would run like a dog. This then drove me down a path of tuning every aspect of the solution so I thought I would share a few of my findings up here for posterity.

The cube has a number of measure groups that in turn have a number of partitions. We're partitioning by month due to the data volumes, with a current-day partition that is HOLAP on each of the measure groups. I'm using SQL Server notifications (Analysis Services sets a trace against the target table), so everything is pretty much automatic.

Dashboards in the solution consist of a mix of Reporting Services, PAS and PPS native reports, all joined together using PPS filters. First off, the filter queries themselves: they weren't dynamic, so we went down the route of putting some server-side named sets into the cube and just calling them from Dashboard Designer (something along the lines of the sketch below). Nice and simple so far, but this is where the performance problems started to manifest.
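To give a flavour of what I mean, here is a minimal sketch of a server-side named set; the dimension, hierarchy and set names are made up for illustration. It lives in the cube's MDX script and Dashboard Designer just references the set by name.

    -- Hypothetical server-side set for a dashboard filter:
    -- the last twelve months on the calendar hierarchy
    CREATE SET CURRENTCUBE.[Filter - Last 12 Months] AS
        TAIL( [Date].[Calendar].[Month].MEMBERS, 12 );

Analysis Services 2008 also lets you use CREATE DYNAMIC SET if the membership needs to be re-evaluated with each query rather than when the script is evaluated.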

With a simple dashboard up and a small number of members in the filter named set, the dashboard was taking far too long to return. There were four filters on this particular page. Looking at Profiler I found that a refresh of the page after a database update (remember, there is proactive caching involved here) was causing a query to be fired against Analysis Services, and every partition in a single measure group would be read, four times.

The measure group in question happens to be the one that contains the cube's default measure. This I considered to be a bad thing, as the production environment would have a lot more data and a lot more partitions to be read, four times, whenever a dashboard page was opened with filters.

To confirm the behaviour I created a dummy partition with a dummy measure in it and made that the cube's default measure. Running the same test showed four queries being fired, but this time the partition being read was tiny and the filter response in the dashboard was immediate. Un-processing the dummy partition still left everything behaving correctly, obviously without the dummy measure being read at all.
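If you go down the dummy measure route, repointing the default is a one-liner in the cube's MDX script (the measure name here is just a placeholder); the same thing can also be set through the cube's DefaultMeasure property in BIDS.

    -- Point the cube's default measure at something cheap to read
    ALTER CUBE CURRENTCUBE
        UPDATE DIMENSION Measures,
        DEFAULT_MEMBER = [Measures].[Dummy Count];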

So point 1 is: be careful where you put your default measure. Next up are the Reporting Services reports. Creating parameters will create hidden data sets in the report that are there to support the available values lists in the report parameters. Whenever the report is executed it's going to want to get the data for those data sets to populate the lists and multi-selects. If, as is the case on this project, the reports are designed to be accessed purely through dashboards using PPS filters, there is little point in having these data sets, so remove them.

First, for each parameter, set the available values to none, then remove the related data sets (you will most likely have to show the hidden data sets) and any redundant data sources. Although these data sets are small, the Reporting Services reports on the dashboard were observably more responsive.

The final overall improvement is to reduce the number of members in the filters and to make sure you have selected the right type of filter for the job. In the case of this project we've avoided multi-selects and opted for single-select lists where possible, as they seem to take less time to render on the client. Improving the performance of the filters ultimately comes down to the number of members you can select from. With SP1 of PPS you can now (I think) have cascading filters, but in an OLAP environment where you want to select a member at any level within a hierarchy that's not much use.

Try using clever MDX or security to reduce the member count, along the lines of the sketch below; it really makes a lot of difference.
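As a rough example (the dimension and measure names are invented), a set like this keeps the filter down to members that actually have data rather than every member on the hierarchy:

    -- Hypothetical filter set: only customers with any sales
    CREATE SET CURRENTCUBE.[Filter - Customers With Sales] AS
        NONEMPTY( [Customer].[Customer].[Customer].MEMBERS,
                  [Measures].[Sales Amount] );

The security option has the same effect by a different route: dimension security trims the members a user can see, so the filter list shrinks without any extra MDX.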

Wednesday, November 26, 2008

Data Dude 2008

It's finally been released.

Get it from here for all of your database development delights. You will of course have to uninstall any previous CTP versions you have, but details of what it will or won't upgrade can be found here.

Friday, November 14, 2008

It's me again.....

Tada

Somebody asked me yesterday if I'd been updating this recently and of course I haven't, as I'm a slacker, so I thought I'd start the process off again by posting about one of my fave subjects, and that's the continued battle with 32-bit / 64-bit mixed environments.

I'm currently working on a real life SQL Server 2008 project with all the bells and whistles and everything so we have VS 2008 in the mix as well. Now let me set the stage....

Dev server = 64 bit

SQL Server = 64 bit

Source Database = doesn't matter (I think)

Now, I'm going through a firewall, so I've had to set up an alias in SQL Server Configuration Manager so I can specify the port of the SQL Server instance that holds my source database. In there you have 32-bit and 64-bit configurations, and because I'm setting everything up to make sure I can talk to the server, I've gone through Management Studio.

SQL Server Management Studio connects fine, as I'm guessing it's a 32-bit process and I've set up an alias in the 32-bit bucket. I can also create an SSIS package that connects to my destination server, is able to browse the source data, and will execute from the dtexecui application. But what can't I do? Well, for starters I can't debug, as I get an error telling me that it cannot connect to the source database.

Now I create an alias in the 64-bit configuration bucket and now I can debug. So what else can I do? Well, the next thing is to set the project property Run64BitRuntime to False and try to debug with the 64-bit alias deleted. Guess what: it runs.

So the moral of the story is: don't expect everything to work nicely for you out of the box all of the time, even if you're in a single-technology environment.

Wednesday, September 26, 2007

Developing with MDM


There have been some recent announcements on Microsoft's acquisition of Stratature that have begun to solidify their approach and roadmap for Master Data Management. The MDM tools, as most people had previously guessed, will be very much aligned with the Office toolset and made part of SharePoint. Personally, I think the key thing about this is how the data will be accessed in my day-to-day job of pulling data from here and putting it there for user consumption.

Jamie Thompson has a video on YouTube (embedded below) detailing how to consume web services in SSIS 2008. Couple this with the ability of Bulldog (Microsoft's codename for the absorbed Stratature EDM+ product) to expose master data and metadata as web services, and all kinds of pre-built SOA components become possible.

Although possible with the existing set of tools in SSIS, I would be surprised if a data source component didn't appear at some point for SSIS 2008 that was specifically designed to retrieve master data from Bulldog. After the constant battles I've had getting master data out of organisations due to the cost of managing and maintaining it, having a tool that looks after the work flow and provides simple generic interfaces to the data will be very beneficial indeed.

Bring on the CTP next year....

Tuesday, June 05, 2007

What do they say about boys and their toys....

Well, now I have my new version of Writer I'm posting like a demon....

Anyway, let's get up to speed with some SQL Server 2008 webcasts coming in June...

SQL Server 2008 LiveMeeting Schedule

Analysis Services - Dimension Design 06/12/07 11:00 am PDT

Change Data Capture 06/13/07 11:00 am PDT

Star Join Query Optimizations 06/19/07 11:00 am PDT

Table Valued Parameters 06/22/07 11:00 am PDT

Declarative Management Framework 06/26/07 11:00 am PDT

MERGE SQL Statement 06/29/07 11:00 am PDT


SQL Server 2008...

The June CTP is out....

Get it while it's hot....!

https://connect.microsoft.com/SQLServer/content/content.aspx?ContentID=5395