Final day of PASS! Right, so not off to a great start. I was dead keen to see Marco Russo’s VertiPaq vs. OLAP session, but it was held in the smallest room and was oversubscribed. I’ll have to catch that one on the PASS website once the session recordings are made available.
Enriching Your BI Semantic Models with DAX
– Some nifty DAX functions: DISTINCTCOUNT, RANKX, TOPN, SWITCH (use instead of nested IF statements), HASONEVALUE (returns true or false), USERELATIONSHIP (use an inactive relationship instead of the default active one), PATH (use for parent-child relationships), PATHITEM (gets a specific item from the defined PATH), LOOKUPVALUE (gets an attribute based on another value), USERNAME (can be used for row-level security)
– Week-based time intelligence functions are not available; you will need to add the relevant attributes to your calendar dimension and then write custom calculated measures
– There will be a DAX query plan! Available as a new option in SQL Server Profiler
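To make a couple of the points above concrete, here is a minimal DAX sketch; the Sales table, 'Date' dimension, and all measure and column names are hypothetical stand-ins, not anything from the session:

```dax
-- SWITCH instead of nested IF statements:
Size Band :=
SWITCH (
    TRUE (),
    [Total Sales] < 1000, "Small",
    [Total Sales] < 10000, "Medium",
    "Large"
)

-- USERELATIONSHIP to use an inactive relationship
-- instead of the default active one:
Sales by Ship Date :=
CALCULATE (
    [Total Sales],
    USERELATIONSHIP ( Sales[ShipDateKey], 'Date'[DateKey] )
)

-- Week-to-date by hand, since week time intelligence functions
-- don't exist; assumes the calendar dimension has been extended
-- with YearWeekNumber and DayOfWeekNumber columns:
Sales WTD :=
CALCULATE (
    [Total Sales],
    FILTER (
        ALL ( 'Date' ),
        'Date'[YearWeekNumber] = MAX ( 'Date'[YearWeekNumber] )
            && 'Date'[DayOfWeekNumber] <= MAX ( 'Date'[DayOfWeekNumber] )
    )
)
```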
SQLCAT: Configuring and Securing Complex BI Applications in a SharePoint 2010 Environment with SQL Server 2012
– When setting up your SPNs, register both the fully qualified and NetBIOS names
– Use the setspn -S option when adding SPNs: it ensures that duplicate SPNs are not created. Use setspn -X to list duplicate SPNs
– It is important to create a dedicated service account to run the Claims to Windows Token Service; this service runs on every server that has SharePoint installed
– Constrained delegation will not cross domain boundaries, no matter what
– On the Delegation tab, ensure that the Use any authentication protocol option is selected
– Kerberos does a lot of caching. Some of the caches can only be cleared by rebooting the servers
– Kerberos only needs to be configured for PowerPivot if one of the data sources is a linked SQL Server
– Power View does not support the prompting of credentials on an .rsds connection file
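As a rough sketch of the SPN points above (the service account, host, and port are all placeholder names I have made up for illustration):

```bat
:: Register SPNs under both the fully qualified and NetBIOS names;
:: -S checks for an existing duplicate before adding
setspn -S MSSQLSvc/sqlbox.contoso.com:1433 CONTOSO\svc-sqlengine
setspn -S MSSQLSvc/sqlbox:1433 CONTOSO\svc-sqlengine

:: List any duplicate SPNs
setspn -X
```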
Using ColumnStore / Vertipaq Indexes in SQL Server 2012
– Huge, huge, huge performance gains!
– ColumnStore indexes will not always be the best solution e.g. for very restrictive queries
– All data for a single column is stored on a separate page; access only the columns you need
– At some stage a BISM model will struggle with huge data volumes. Thanks to features like DirectQuery, a ColumnStore index on the underlying tables can be used to increase performance for BISM models
– Some really smart stuff has been done from an encoding point of view, e.g. instead of storing the actual sales numbers, values are stored relative to an offset and divided by a common factor, resulting in smaller numbers that need to be stored
– Run-length encoding is another smart way compression is handled
– Data can be used in query processing without decompression
– On average, compression will be 1.8 times more effective than page compression
– Consider using ColumnStore indexes on tables that have 5 million + rows
– Create the index on ALL the columns. You can only have one ColumnStore index per table
– Outer joins and UNIONs can prevent batch-mode processing
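A minimal T-SQL sketch of the "index on ALL the columns" tip, using a hypothetical fact table (table and column names are assumptions for illustration):

```sql
-- SQL Server 2012 allows only one (nonclustered) ColumnStore index
-- per table, so include all the columns
CREATE NONCLUSTERED COLUMNSTORE INDEX csx_FactSales
ON dbo.FactSales
(
    DateKey,
    ProductKey,
    StoreKey,
    Quantity,
    SalesAmount
);
```

Note that in SQL Server 2012 the table becomes read-only while the ColumnStore index exists, so the index needs to be disabled or dropped before loading data.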
Wayne Snyder was an outstanding presenter, loved his ‘old-man with 100 years experience’ presenting style.
And that concludes a great SQL PASS conference! There is some game-changing content on the way and I can’t wait for SQL Server 2012. Next year’s conference will be held in Seattle once again in early November.