Archive for October, 2011

Final day of PASS! Right, so not off to a great start: I was dead keen to see Marco Russo’s VertiPaq vs. OLAP session, but it was held in the smallest room and was oversubscribed. I’ll have to catch that one on the PASS website once the session recordings are made available.

Enriching Your BI Semantic Models with DAX

– Some nifty DAX functions (see the sketch after this list):
+ DISTINCTCOUNT, RANKX, TOPN
+ SWITCH – use instead of nested IF statements
+ HASONEVALUE – returns true or false
+ USERELATIONSHIP – use an inactive relationship instead of the default active one
+ PATH – use for parent-child relationships
+ PATHITEM – gets a specific item from the defined PATH
+ LOOKUPVALUE – gets an attribute based on another value
+ USERNAME – can be used for row-level security
– Weekly time intelligence functions are not available; you’ll need to add the relevant attributes to your calendar dimension and then write a custom calculated measure (an example appears in the sketch below)
– There will be a DAX query plan! Available as a new option in SQL Server Profiler
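
A minimal sketch of a few of these patterns, assuming a hypothetical model with a Sales table exposing a [Total Sales] measure, a Date dimension carrying a sequential Week Key attribute, an inactive ship-date relationship and a parent-child Employee table (all names are mine, not the session’s):

-- SWITCH instead of nested IF statements
Period Label :=
SWITCH (
    TRUE (),
    HASONEVALUE ( 'Date'[Calendar Year] ), VALUES ( 'Date'[Calendar Year] ),
    "Multiple Years"
)

-- USERELATIONSHIP: use the inactive ship-date relationship for one calculation
Sales by Ship Date :=
CALCULATE (
    [Total Sales],
    USERELATIONSHIP ( Sales[Ship Date Key], 'Date'[Date Key] )
)

-- No weekly time intelligence functions exist, so roll your own week-to-date
-- off the Week Key attribute added to the calendar dimension
Sales WTD :=
CALCULATE (
    [Total Sales],
    FILTER (
        ALL ( 'Date' ),
        'Date'[Week Key] = MAX ( 'Date'[Week Key] )
            && 'Date'[Date] <= MAX ( 'Date'[Date] )
    )
)

-- PATH / PATHITEM / LOOKUPVALUE as calculated columns on the Employee table
-- [Hierarchy Path]
= PATH ( Employee[Employee Key], Employee[Manager Key] )
-- [Level 2 Key] (the third argument, 1, requests an integer)
= PATHITEM ( Employee[Hierarchy Path], 2, 1 )
-- [Level 2 Name]
= LOOKUPVALUE ( Employee[Name], Employee[Employee Key], Employee[Level 2 Key] )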

SQLCAT: Configuring and Securing Complex BI Applications in a SharePoint 2010 Environment with SQL Server 2012

– When setting up your SPNs, register both the fully qualified and NetBIOS names
– Use the setspn -S option when adding SPNs: it checks for existing duplicates before registering the new SPN. Use setspn -X to list duplicate SPNs
– It’s important to create a dedicated service account to run the Claims to Windows Token Service; this service runs on every server that has SharePoint installed
– Constrained delegation will not cross domain boundaries, no matter what
– On the Delegation tab, ensure that the Use any authentication protocol option is selected
– Kerberos does a lot of caching. Some of the caches can only be cleared after rebooting the servers
– Kerberos only needs to be configured for PowerPivot if one of the data sources is a linked SQL server
– Power View does not support the prompting of credentials on an .rsds connection file

Using ColumnStore / VertiPaq Indexes in SQL Server 2012

– Huge, huge, huge performance gains!
– ColumnStore indexes will not always be the best solution e.g. for very restrictive queries
– All data for a single column is stored on its own set of pages, so queries touch only the columns they need
– At some point a BISM model will struggle with huge data volumes. With DirectQuery, queries are pushed down to SQL Server, so a ColumnStore index on the underlying tables can boost performance for BISM models
– Some really smart stuff has been done from an encoding point of view, e.g. instead of storing the raw sales numbers, values are rescaled and stored as small offsets from a common base, so much smaller numbers need to be stored (see the worked example after this list)
– Run-length encoding is another smart way compression is handled
– Data can be used in query processing without decompression
– On average, compression will be 1.8 times more effective than page compression
– Consider using ColumnStore indexes on tables that have 5 million+ rows
– Create the index on ALL the columns. You can only have one ColumnStore index per table
– Outer joins and UNIONs can prevent batch-mode processing
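
A rough worked example of the value encoding described above (toy numbers of my own, not from the session): if a column holds the sales values 1000, 1010 and 1020, the engine can divide by a common scale factor of 10 to get 100, 101 and 102, then store each value as an offset from a base of 100 – leaving just 0, 1 and 2 to be stored, which need far fewer bits than the raw values. Run-length encoding then collapses repeats, so a segment with a million rows of the same value stores as a single (value, count) pair.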

Wayne Snyder was an outstanding presenter; I loved his ‘old man with 100 years of experience’ presenting style.

And that concludes a great SQL PASS conference! There is some game-changing content on the way and I can’t wait for SQL Server 2012. Next year’s conference will be held in Seattle once again in early November.

Time for some feedback on Day 2…

Thursday, Opening Keynote

– The Fantastic 12 of SQL Server 2012:
+ Required 9’s and Protection
+ Blazing Fast Performance: performance enhancements to the RDBMS, SSAS and SSIS. And then there is the new ColumnStore index – really looking forward to using this one in our DW environments!
+ Rapid Data Exploration: Power View and PowerPivot 2.0. Great features and enhancements in this space. Also included is the new Reporting Alerts capability – the ability to send out alerts based on data conditions
+ Managed Self-service BI through SharePoint
+ Credible, Consistent Data: BI Semantic Model, DQS and MDS
+ Organisational Compliance: Expanded Audit – User-defined, Filtering, User-defined Server Roles
+ Peace of Mind: Production-simulated Application Testing, System Center Advisor and Management Packs, Expanded Support – Premier Mission Critical
+ Scalable Data Warehousing
+ Fast Time to Solution: SQL Server Appliances – optimised and pre-tuned
+ Extend Any Data, Anywhere: ODBC drivers for Linux and Change Data Capture for SSIS and Oracle, Semantic Search
+ Optimised Productivity: SQL Server Data Tools, Unified Across Database and BI, Deployment and Targeting Freedom
+ Scale on Demand: AlwaysOn, Deployment across public and private, Elastic Scale (scaling in the cloud with SQL Azure Federations)

Data Quality Services – A Closer Look

– Set up your Knowledge Base and then perform a Knowledge Discovery exercise. This is similar to profiling your data and will populate the Knowledge Base
– Spelling mistakes are highlighted just like in Microsoft Word – very nice
– Next up, create a Data Quality Project. This will enable you to validate your data according to the rules that have been defined in the Knowledge Base. Corrections will be made and alternatives suggested
– Data that needs to be cleaned can also be validated against external data sources; only Azure Data Market is supported right now
– One can set up rules to override the corrections provided by the external data provider
– Term-Based Relations: essentially find and replace
– Matching: one has the ability to create a composite domain (using a delimiter to split out multiple fields)
– New feature – Use Knowledge Base Parsing: use this along with the delimiter so your Knowledge Base can automatically fix mistakes, e.g. where users swapped fields around in certain instances, causing the parsing to be invalid
– Create a Matching Policy and define weightings to the fields you want to match on
– Create a Data Quality Matching project to determine the duplicates in your data and you’re all done!
– The matching algorithm is a black box, not too much detail given here 🙂

DQS looks like quite a decent product, considering it is only the first release. The only downside is that South African users will not be able to access most of the external data sources, and the ones they can access may not be that useful.

Developing and Managing a BISM in Analysis Services

– Microsoft continuum: Personal BI (PowerPivot), Team BI (PowerPivot for SharePoint) and Corporate BI (Analysis Services)
– Row and Dynamic security will be available!
– SQL Server Data Tools is the new name for BIDS
– Add calculated measures in the measure grid using the syntax Measure Name := Measure Formula (see the sketch after this list)
– Security: read permissions give you access to the entire database. Use a DAX filter to further restrict the data
– During development, the Tabular project talks to the Workspace Database (its name contains a GUID) – the place where all development changes take place
– When creating a new Tabular project, use the Preview and Filter option to restrict the number of rows that are imported into the model. This is particularly useful when dealing with large datasets. A partition will automatically be created for this subset of data. Once the data has been imported, create additional partitions for the rest of the data. The Process All option will bring in the new data
– One can copy data from Excel and paste it in as a new table in Visual Studio. Very nice for prototyping
– One can hide a Security table by applying a DAX filter of =FALSE(). Hiding the table from the client tools does not hide it altogether
– DAX security filter example: …security[allowed region], security[login], username(), … (a fuller version of this pattern is sketched after this list)
– One can process a database, a table or a partition
– The Process Data option only processes data; calculated columns will not be updated. You’ll need to run Process Recalc afterwards before you can Evaluate (return rows from) the table
– Use Process Add to incrementally load data into new partitions. Use Process Defrag afterwards
– Partition creation can be automated using AMO
– Use the Extension Manager in Visual Studio to include the DAX Editor – to be made available on CodePlex soon! Great add-in for formatting DAX
– There is also another sweet add-in to script out all the DAX measures into a separate window, all with auto-complete. Cheers MDX!
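
A minimal sketch of the measure syntax and the row-level security pattern noted above, assuming a hypothetical Security table (with Login and Allowed Region columns) and a Region table – the names are mine:

-- A measure, defined in the measure grid
Total Sales := SUM ( Sales[Sales Amount] )

-- Row filter on the Security table itself: hides all of its rows
= FALSE ()

-- Row filter on the Region table: keep only the regions mapped to the
-- connected Windows user in the Security table
= CONTAINS (
    FILTER ( 'Security', 'Security'[Login] = USERNAME () ),
    'Security'[Allowed Region],
    'Region'[Region Name]
)

Row filters are evaluated per row of the secured table, so USERNAME() resolves to the connected user and CONTAINS acts as the lookup back into the mapping table.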

Cathy Dumas did an excellent job with this session; I’ll definitely go and see her again in the future.

Building the Perfect BI Semantic Model for Power View

– Power View allows you to create different views in the same report, the same way you would add a new slide to a PowerPoint deck – very cool feature
– Power View reports can also be created from .rsds connections
– One can perform data mining in BISM by creating a connection to an SSAS source that contains the mining model
– In SharePoint, on the .rsds connection properties, check the Set execution context property if Kerberos is not set up. The logged-in user’s credentials will then be passed through to the data source
– For numeric attributes, like year, set the aggregation property to Do not summarise. This attribute will then not be aggregated when creating a pivot table report
– Development tip: create a BISM connection to the Workspace Database instead of the deployed one – it speeds up troubleshooting
– Descriptions defined against tables and measures will be shown as hover text in Power View
– Set additional Default Label and Default Image properties on attributes in SQL Server Data Tools. Power View will then know which fields to use when switching between card and chart views

The Power View demo by Carl Rabeler was not great as he had to look down at his script the whole time. Kasper de Jonge was a lot better and gave good insight into the technical bits of BISM.

I can’t wait to start using this technology – hopefully another CTP release will be made available early in the new year…

So I’m in Seattle attending my first SQL PASS event and thought I’d quickly share some of the highlights of Day 1.

Opening Keynote

– SQL Denali is officially renamed SQL Server 2012 and will be released during the first half of next year, i.e. by 30 June 2012 🙂
– Project Crescent will be known as Power View. It will be available on most mobile devices and is loaded with quite a few new features compared to CTP3. Further down the line (post the SQL 2012 release) one will be able to take a picture of a movie cover (for example), use the Bing imaging service to identify it, and filter the report to that movie!

Amir Netz did an excellent job showing off Power View using a movie sales dataset; I’d love to get hold of it for client demos. My only concern is that Power View is still going to be tightly coupled to SharePoint Enterprise (aka the centre of the Microsoft universe). Small to medium-sized clients will then not be able to leverage this great reporting tool.

Visual Report Design: Bringing Sexy Back

– Principles of effective design include: form (visual design skills), fit (functional business skills) and function (technical skills)
– See http://www.colourlovers.com/ for palette colours and examples of which colours work well together
– Items to consider for your report design:
+ Can the user glean the information in 5 seconds or less?
+ Know your audience
+ Get a second opinion
+ Maintain balance
+ Build POCs (proofs of concept) and throw them away
+ Accept feedback and be willing to learn

Remember, the novelty of sexy graphs and gauges will disappear over time. Users will just want to get the relevant information to make the decision.

What does it take to add MDS to your DW?

– Create your MDS model in the web UI and then create the entities through the new MDS Excel add-in
– Use Attribute Properties to create a relationship between entities. E.g. Product and Category
– Constrained lists can also easily be created using the data in existing columns, i.e. creating entities for lookups
– Create derived hierarchies back in the MDS UI. Users can then change the hierarchy setup by dragging and dropping items in the tree view
– Add in any required business rules and validate the data through Excel or the MDS UI
– Annotations can also be added to changed data before it is published back to MDS
– Subscription Views expose the data to other systems that need to consume it. Parent-child hierarchies can also be flattened out using the views
– One can install MDS 2012 and still use SQL 2008 R2 on the database engine side; you don’t need to upgrade the entire SQL environment
– Microsoft believe MDS can scale into the millions of records to easily manage customer master data, for example. I’d like to test this out 🙂

Introduction to Data Mining in SQL Server Analysis Services

– Great introduction by Brian Knight: stating all his credentials while downplaying them at the same time using a split screen
– Data mining skills will get you a salary increase (maybe)
– Clustering is great for fraud detection: it measures how far a case sits from the established clusters and derives a likelihood from that distance (see the sketch after this list)
– See http://www.sqlserverdatamining.com
– The key is to flatten out your dataset. However, when creating a Mining Structure you can use nested tables if there is a relationship defined
– SAS has good data mining algorithms; Microsoft’s are about an 80% fit. The clincher: you already have the Microsoft ones, and no additional purchase is required
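
A rough sketch of the distance intuition behind that clustering point, assuming plain Euclidean distance (Microsoft Clustering actually offers K-means and EM variants, so this is only illustrative): a case x is compared to each cluster centroid c via

d(x, c) = sqrt( (x1 - c1)^2 + (x2 - c2)^2 + ... + (xn - cn)^2 )

and a case that sits far from every centroid gets a low cluster-membership likelihood – exactly the cases worth a second look in a fraud scenario.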

I’m going to give this a whirl on some client data when I’m back at the office. Data mining, although not that well used, can add a lot of insight to the existing data warehouse.

Brian was the best presenter of the day for me.
