SQL PASS 2010 Summit – Recap

The sessions are over, and the post-cons are done; the vendor parties are but a memory, and the #sqlkaraoke music has died.  Another PASS summit is in the books, one billed to be the “best PASS summit ever”, and I’d have to concur.

The Best PASS Ever

Perhaps it started with the entertaining (albeit ill-advised) Tina Turner impersonator during the opening keynote on day one, belting out “Simply The Best”.  We were informed that this summit was the largest ever, with over 3,700 registrants and many thousands more tuned in to catch the live stream of the keynotes, another first for PASS.  The organization continues to make good strides toward building community at the summit, embracing Twitter and sharing many of the unofficial meetups on the website.  Despite the unrest around the leadership (more on that shortly), PASS continues to improve the summit every year.

On a personal note, this was certainly the best PASS summit ever for me.  Because of my involvement with the SQL Server community, I’ve had the pleasure of getting to know scores of folks in this community, many of whom I consider to be friends.  Thanks to these connections, the trip to Seattle last week was like a homecoming for me.  I felt a lot like Norm walking into Cheers, where everybody knows your name.  I wasn’t selected to deliver a session this year, but I was invited to host a Birds of a Feather table as well as participate in the Ask the Experts booth, and was asked to join the blogger table to “live blog” and tweet throughout each of the keynote addresses.  I was honored to participate in all three of these, but the last one was particularly enjoyable.

Product Developments

Even though this year’s PASS summit didn’t coincide with an imminent version release, there were a number of significant announcements regarding new and upgraded features in SQL Server Denali, the next major version of SQL Server.  Applicable to almost everyone is the release of Denali CTP1, which was made available for public download earlier this week (all PASS attendees received the bits on a DVD as well).  I’ll not try to detail all of the changes here, but a summary of the new developments includes:

Business intelligence: Project Crescent was announced, which is intended to be a thin, easy-to-use self-service reporting tool to supplement the existing business intelligence stack.  Crescent will leverage the Business Intelligence Semantic Model (BISM), which is a promising but still immature architecture for lightweight BI applications.  I’ll likely have one or more blog posts coming up on this in the future.

To further the changes on the BI stack, Denali is expected to usher in the most significant change to Microsoft ETL tools since 2005.  Some of the changes coming down the pike for SQL Server Integration Services include the deprecation of package configurations in favor of new package parameters, a new server deployment model, an improved design experience including easily diffable packages, and – at long last – design-time undo and redo capability.  Soon I’ll be working on some articles, and hopefully a couple of presentations, on the changes in CTP1.

Database engine: Two new developments, notably the columnstore index and Project “Atlanta”, look to be very promising.  Although not yet available in CTP1, the columnstore index is a new feature in the database engine allowing the creation of indexes in a column-centric, rather than a row-centric, manner.  Think of this as pivoting the way that indexes are written, and then compressing the index at the column level.  This is the same concept that drives the desktop version of PowerPivot, allowing users to manipulate millions of rows of data on the client with virtually no lag.  This new feature is aimed mostly at relational data warehouse applications, since the addition of a columnstore index to a table will prevent inserting or updating rows in that table (at least for now).  Project “Atlanta” is a new cloud-based troubleshooting and support feature which can help eliminate some of the mystery behind hard-to-find problems in your environment.
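Since the columnstore index isn’t in CTP1, the syntax is anyone’s guess at this point, but based on what was shown it should look much like an ordinary index definition with a new keyword.  Here’s a sketch – the table and column names are made up for illustration, and the keyword placement may well change before release:

```sql
-- Hypothetical fact table; COLUMNSTORE reflects the announced feature
-- and is not available in CTP1, so treat this as a sketch only.
CREATE NONCLUSTERED COLUMNSTORE INDEX IX_FactSales_ColumnStore
ON dbo.FactSales (OrderDateKey, ProductKey, StoreKey, SalesAmount);
```

Given the read-only restriction mentioned above, the expected loading pattern would presumably be to drop or disable the index, load the data, and then rebuild it.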

Shaping PASS – Engaging the Leadership

Those who follow PASS happenings, even peripherally, learned of the significant controversy surrounding the recent board elections.  I won’t restate the story here, but in a nutshell, the community has been overwhelmingly dissatisfied with the makeup of the final slate of candidates sent to voters at large during the past two elections.  I count myself among those concerned with this process (see my previous blog posts on the topic for my thoughts), but to the credit of the PASS board, they again made themselves available for commentary – and no small measure of criticism – during an open Q&A session on Thursday evening.  Last year’s Q&A event was poorly attended, due at least in part to a scheduling snafu and a lack of publicity.  Conversely, this year’s session was very well attended: I didn’t count those in the audience, but there must have been at least 50 people in attendance, and collectively it was a vocal but polite group.  With respect to the election, Kendal Van Dyke addressed the board on the topic of transparency, imploring the group to make public all of their individual votes in any future elections.  Sadly, several board members still resist this level of transparency.  I spoke to a couple of these board members after the Q&A to address my concern with their unwillingness to consider a model of complete transparency with respect to voting; although I made the points I hoped to make, I didn’t feel that I made much progress toward convincing them that this level of transparency is the right thing to do.  I’ll be following up with these board members to get their updated feedback on the other side of Friday’s board meeting, and I’ll have a post later to detail any progress.

Another governance development this week was the announcement of an elections review committee.  Headed up by former board member Joe Webb, this appointed group will serve in an advisory capacity to review the processes surrounding the vetting and election of new board members.  Understandably this adds another layer of administration, but in my opinion it is a necessary step given the failure of PASS to adequately represent the community during the last two elections.

Networking, Networking, Networking

I can’t say this enough – the real value of events such as the PASS summit is the ability to personally engage with other similarly minded technical professionals.  The official content (pre/post conference seminars, community and spotlight sessions, and keynotes) can all be purchased on DVD for significantly less than it costs to pay your way into the summit, not to mention the significant travel costs.  But when one comes to the summit, a whole other world of personal connectivity is revealed.  I would have a hard time enumerating all of the people I’ve come to know because of my involvement with PASS, SQL Saturday, and other professional outlets.  This summit allowed me to meet in person several folks that I had previously known only online.

Of course, I keep telling my own networking success story from last year’s summit, where I met (through my networking contacts) a person who would eventually lead me to a new job that I love.

So, adios PASS Summit 2010.  Now I’m going to ride this “conference high” for a few weeks.

Un-SQL Friday: Personal Branding – Don’t Bother

My friend Jen McCown of MidnightDBA fame issued a challenge last week called “Un-SQL Friday”.  The inaugural topic is on branding.  Although I’ve written a little about personal branding over the years, I don’t pretend to be an expert, but I have learned a thing or two that works for me.  As I mentioned in a comment on Brent Ozar’s blog post on this topic, I believe that branding is a highly individual process and must be tailored for the person, their goals, and the environment they are in.

Again, since I’m not an expert on this topic, take the following with a grain of salt.  And before you read on, be assured that I’m a strong proponent of effective personal branding for the upwardly mobile technical professional.  However, I’m going to play devil’s advocate here and tell you why you shouldn’t bother with branding.

Stick with me, I’m going somewhere with this…

Personal Branding: The Counterexample

I have an acquaintance who also works in IT, in a different segment of the market.  He’s been working as a technical professional for a number of years, longer than I have, so he knows his way around and has built up a considerable amount of knowledge specific to his role.  From what I’ve learned about his work ethic, he’s reasonably dependable and honest.  He’s a highly analytical person with adequate communication skills.

And when it comes to personal branding, my advice to him would be simple: don’t bother.

You see, this person is one of “those” IT people, the stereotypical basement geeks who can’t stand to deal with users – or people in general.  What little writing he does is rarely technical; rather, it usually involves a rant against end users, software vendors, company management, or society in general.  In person and in writing, he sends a clear don’t-bother-me message.  Further, despite his strong technical aptitude, he’s resistant to learning new technologies.  His negative attitude, not his natural ability or experience, is the overwhelming theme of his professional self.

Personal branding is not about creating a new virtual persona to make yourself appear to be something other than you are.  Rather, it’s a megaphone through which you can project further the message you’re already sending.  If you’re ambitious, capable, and positive, then by all means, broadcast it!  If, on the other hand, you’re like the acquaintance I described above, any honest personal branding is more of a detriment than an asset.

Tongue in Cheek?

A bit.  It’s probably true that most people of the professional caliber described above don’t spend a lot of time reading blogs for personal edification (particularly not this measly corner of the intertoobz), so this message may fall on deaf ears.  Still, the point remains: although most professionals can derive some benefit from personal branding, there are a few who simply shouldn’t bother.

SQL PASS Summit 2010 – Keynote Day 3

Today is the last official day of the SQL PASS 2010 summit, which also means the last day for live blogging during the keynote.  This has been a lot of fun, and I’m looking forward to doing it again next year.

Today’s ceremonies will include a keynote by Dr. David DeWitt.  Before that, PASS board member Rick Heiges gives the daily briefing on the current goings-on at PASS, including a review of the most recently elected board members and a recognition of outgoing board member Lynda Rab.

Next up, Rick talks about the upcoming SQL Rally, announcing the winners of the community vote for the preconference sessions.  You can register now for preconference sessions delivered by the Pragmatic Works crew, Plamen Ratchev, Kevin Kline, and Grant Fritchey; registration is now open for the Rally and the preconferences at a discounted bundle rate.  We’re already looking forward to the next PASS summit as well: registration is available now for $995, or you can attend the summit plus two full days of preconference material for $1295.


Dr. David DeWitt is up next to discuss SQL query optimization.  Instantly he’s a crowd favorite, bringing in a few timely puns before getting started on the main content.  He begins by giving a high-level overview of the query optimizer and describing why the building of an effective execution plan still remains challenging after 30+ years of RDBMS development.  To demonstrate his theory, he’s using an analogy similar to the Netflix model, which should be at least somewhat familiar to most database professionals.  He gets rolling quickly, describing two different query plans and reviewing the process through which the query optimizer selects the best plan.  By the way, the entire slide deck for this presentation will be available on the Microsoft Jim Gray Systems Lab group Facebook page.  There was a mention of a Q&A later through which attendees can submit questions to Dr. DeWitt – email AskDrDeWitt@sqlpass.org to get your question added to the queue.

This is really good stuff, and he’s moving quite fast through the material.  My only concern is that there’s probably a significant number of people in this room who are completely lost.  This material is relatively specific and deep for a keynote; this topic could have easily been moved to a spotlight session (or two consecutive community sessions).  Nonetheless, there are some excellent concepts here, and I expect that everyone is getting at least something out of this.  Folks that spend their careers doing performance tuning are drooling on themselves right now.

Dr. DeWitt has mentioned a couple of times that query optimization has significant room for error.  It’s a cost-based decision, and as smart as the optimizer is, there will be some optimization choices that are simply wrong.  How does this happen?  Simply put, lots of variables.  Imagine your drive to work every day (note that this is my analogy, not part of the presentation).  You have two possible routes, A and B, and you know from having taken both paths that route B is usually faster than A.  On your way, you hear on the radio that route B is partially blocked by a traffic jam, so you opt for route A.  Is that better?  Maybe, but what if everyone heard the same report and jumped onto route A, causing a bottleneck on that path?  What if route A is also blocked by an accident just before you arrive?  What if the accident on B is cleared up and traffic starts flowing normally after you change routes?  A trivial example, yes, but you can see how just a few variables can have a significant effect on the efficiency of a given path.

Dr. DeWitt said that the job of the query optimizer is NOT to find the best plan: it’s to find a good plan fast.  It’s the Nike approach: Just Do It.  What if the query optimizer gets it wrong?  Yes, it happens.  But the logic remains that a plan that isn’t optimal is usually still a pretty good plan.

Statistics are discussed throughout the presentation.  Dr. DeWitt points out that current statistics are essential for building a good query plan.  “If you don’t update your statistics, don’t blame us”, says he.  Good advice, since I suspect that a lot of folks don’t spend a lot of time focusing on this (and I’m among them).
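For anyone in that camp (myself included), the good news is that refreshing statistics is a one-liner in the versions we run today – no Denali required.  The table name here is just an example:

```sql
-- Rebuild statistics for a single table, sampling every row
UPDATE STATISTICS dbo.FactSales WITH FULLSCAN;

-- Or refresh any out-of-date statistics across the whole database
EXEC sp_updatestats;
```

FULLSCAN gives the optimizer the most accurate picture at the cost of a longer scan; the default sampled update is usually good enough for routine maintenance.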

At the end of a brain-busting presentation, many in the crowd (including everyone at the blogging table) gave Dr. DeWitt a well-deserved standing ovation.  Dr. DeWitt now takes questions submitted via e-mail during the address.  I really like this interactive approach – it helps to engage the audience and keep the content grounded and relevant.


Today was the last keynote of the conference, and by far it was the most informative.  We had over an hour of pure technical information, which, judging by the feedback in the room and on blogs and Twitter, is strongly preferred to the marketing-laden message delivered yesterday.  Hopefully we’ll see more of the former in coming years.  Look for a wrap-up post in the next couple of days, where I’ll review what I learned and outline my accomplishments at this year’s Summit.

Networking, Newbies, and SQLPASS

This year, PASS is taking an extra step to help identify first-time summit attendees.  Most attendees are given ribbons for various recognitions, including Speaker, MVP, Chapter Leader, and other accolades.  Attendees who are at the summit for the first time are recognized with a yellow “First Timer” ribbon on their badge, allowing those of us who are veterans of the summit to quickly identify those folks.  At the end of last year’s summit, I blogged about the need to identify first-time attendees so that the rest of us could make a special effort to help them avoid becoming this guy (or gal).

I can see this already working great.  I’ve spoken to a number of first timers after seeing them sitting alone at sessions and after hours events, and I see other alumni doing the same.  I observed a couple of first timers introducing themselves to one another this morning, sort of a kindred “newbie” spirit union.  Hopefully this will help first time attendees to feel more welcome and engaged, and will turn them into serial attendees.

SQL PASS Summit 2010, Keynote Day 2

The second day of the SQL PASS summit has already started in unique fashion.  Today is #sqlkilt day, and there are a dozen or more guys and gals dressed in fine Scottish attire.  Bill Graziano, executive VP of finance for PASS, was playfully booed when he took the stage for having worn pants on kilt day.  Bill took the time to recognize a couple of volunteers, Wendy Pastrick and Lori Edwards, for their continued efforts in the community.

Just announced to the group is an election review committee which will be headed up by my friend Joe Webb and staffed by a number of others from within the community.  This was just a brief mention of what appears to be a large effort, but Joe blogged about this in more detail here.


Quentin Clark, General Manager of Database Engines for Microsoft, is delivering today’s keynote on SQL Server vNext (Denali).  As was announced yesterday, the first CTP of this product was released to the public yesterday and is available for download now.  A few notable facts about this version:

  • Up to 15,000 partitions for VLDBs
  • Columnar index (described yesterday)

So far, we’ve got lots of marketing info but not a single demo.  Time to move along!


Clark announces a new initiative named AlwaysOn, and introduces Gopal Ashok, technical product manager for SQL Server.  Ashok – at long last – shows us our first demo of the keynote.  AlwaysOn appears to be an HADR service through which, by way of a wizard, SQL Server DBAs can quickly configure secondary replica databases.  Any of the replicas can be marked as readable, allowing read-only access for reporting or other functions not related to OLTP, and even more impressive, backups can be offloaded to the replicas.  A dashboard is also provided to show the status of the AlwaysOn infrastructure.  Admittedly I’m no expert on HADR, but this does look very promising.

Miscellaneous changes

Alone on the stage again, Clark announces a few other changes, including brief mentions of T-SQL enhancements:

  • Sequence generator
  • Support for paging
  • Enhanced error handling
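These three were only mentioned in passing, so the exact syntax may shift before release, but the bullets above would presumably look something like the following sketch (object names are mine, for illustration):

```sql
-- Sequence generator: a counter object independent of any table
CREATE SEQUENCE dbo.OrderNumbers START WITH 1000 INCREMENT BY 1;
SELECT NEXT VALUE FOR dbo.OrderNumbers;

-- Paging support: skip the first 20 rows, return the next 10
SELECT ProductID, Name
FROM dbo.Product
ORDER BY Name
OFFSET 20 ROWS FETCH NEXT 10 ROWS ONLY;

-- Enhanced error handling: re-raise the original error from a CATCH block
BEGIN TRY
    SELECT 1 / 0;
END TRY
BEGIN CATCH
    THROW;
END CATCH;
```

If the paging syntax lands anything like this, it should finally retire the ROW_NUMBER()-in-a-derived-table workaround we all use today.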


FileTable is a new feature based on FILESTREAM technology.  Rohan Kumar, a principal group program manager, describes FileTable as an engine feature that will manage and store files directly within SQL Server.  Files are still stored in the file system (local or UNC), and can be viewed and managed completely within SQL Server.  He demonstrates by copying a number of files into the FileTable using the Windows command prompt, and then browsing these files (and opening a video media file) from within the FileTable interface in SQL Server.  Aaron Bertrand, sitting next to me at the blogger table, wondered aloud about how backups would be managed when using FileTable, which was not addressed during the demo.
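If it works the way the demo suggested, using a FileTable might look something like the sketch below.  This is my guess based on what was shown, not documented CTP1 behavior, and the table and directory names are made up:

```sql
-- Create a table whose rows are backed by files in the file system;
-- the WITH option names the directory exposed through the share.
CREATE TABLE dbo.Documents AS FILETABLE
    WITH (FILETABLE_DIRECTORY = 'Documents');

-- Files copied into that directory from Explorer or the command
-- prompt should then show up as ordinary rows.
SELECT name, file_type, cached_file_size, is_directory
FROM dbo.Documents;
```

The appeal is that both sides stay in sync: drop a file in the share and it’s queryable; insert a row and a file appears on disk.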


Next up is Don Box, Microsoft distinguished engineer for SQL Server, to discuss Project Juneau.  Box uses the Northwind database for the demo, which drew a few playful jeers, and he follows it up with a SELECT *, which was similarly received.  Juneau is a tools enhancement which makes the management of object metadata easier and more intuitive.  Juneau reaches into both SQL Server Management Studio and Visual Studio, and appears to have seamless integration in each.  A few of the examples shown include the ability to script out a database to a series of .sql files (one per object); when browsing each of these files, you can view dependencies on the target object.  I suspect that this is just the tip of the iceberg; time to go exploring.

Business Intelligence stack

Jeff Bernhardt, Principal Product Manager, comes forward to describe the changes on the BI side.  He describes Data Quality Services, a user tool which allows data stewards to browse, update, and manage MDS data.  Bernhardt continues the “story” by showing the new SSIS interface.  Most notable here is the ability to Undo a change in the designer, a feature that is long overdue but certainly welcomed by those of us who make our living in the SSIS design surface.  Another significant addition is a dependency and lineage checker, which can show relationships between Integration Services packages and other assets such as data files.  This feature promises to be very, very cool.  This was the shortest demo so far – is that it for BI?

That’s a Wrap

I had hoped for more demo time on the BI (and SSIS) side, but it’s enough to get excited about for now.  They’re handing out the CTP1 disks to attendees as they leave the room, so I’m sure we’ll see a flood of Denali chatter, bug reports, etc.  More tomorrow!

SQL PASS 2010 Keynote, Day 1

The day 1 keynote was kicked off by a Tina Turner impersonator doing a rendition of “Simply the Best”.  Rushabh Mehta followed up with a Tina wig of his own, though he fortunately had a much less revealing outfit.  Rushabh starts off with an overview of the PASS activities for the year.  PASS has touched over 65,000 individuals this year through various outlets including the Summit, SQL Saturday, 24HOP, and local and virtual chapters.  By 2015, PASS is looking to deliver 1 million hours of content and have a worldwide membership of 250,000 across 5 global regions.  This year’s SQL PASS summit has 3,807 registered attendees representing 48 countries.  In addition, 4,500 people registered to view the keynote streams, and Simon Sabin is hosting a meetup to watch the keynotes as a group.

This year’s summit features 191 speakers, 44 of which are MVPs, delivering 168 community sessions over 3 days. In addition, there are 18 pre- and post-conference seminars.  DVD content is available for all community sessions; for the first time, the pre- and post-con seminars are also available for sale.  The expo hall was sold out for the first time ever.

SQL Clinic offers the opportunity to visit with the SQL CAT team directly.  Microsoft presence at SQL PASS is over 400 strong.

Some random attendee just won a new Dell laptop.  He happened to pick the chair with the winning envelope taped to the underside.

Just before the keynote, a video presentation takes us through the history of SQL Server.  Facial hair is both abundant and unruly.  The trip through the versions of this product is nostalgic, but more importantly, is a good reminder of how far this product has come in a relatively short period of time.

Keynote – Ted Kummert

Ted Kummert takes the stage, giving the requisite 5-minute feel-good pep talk before diving into the meat and potatoes of the presentation.  Notable mentions include the SQL Server team’s focus on three core concepts which helped to make the product what it is today:

  • Ease of use
  • Not just a database product, it’s an information platform
  • Capability to handle high-volume, mission-critical systems

Jesse Fountain joined Kummert during the keynote to briefly describe and demonstrate the new Parallel Data Warehouse (PDW) product, which will be available as an appliance solution sometime in December.  Fountain demonstrated by processing an 800 billion row (yes, with a B) query in under 30 seconds, retrieving the results into PowerPivot as we watched.  Impressive!

Afterward, Microsoft customer Paulo Resende from Global Wealth Information Management took the stage to describe his experience with PDW.  I wish we could have heard more about his story.  Next, Dave Mariani from Yahoo joins Ted Kummert on stage.  Yahoo has to process 1.2 terabytes of data every day to run their advertising services, and they turned to SQL Server Analysis Services to solve this problem.  There are 3.5 billion events processed every day, creating a 12TB cube that is loaded continuously.  Query response times against this source are around 10 seconds (!!).

Project “Atlanta”

The Microsoft Critical Advantage Program was announced today, aimed at mission-critical applications.  In addition, Project “Atlanta” is currently in beta; it is intended to identify and head off issues before they become full-blown problems.  Bob Ward was invited up to discuss Atlanta in greater detail.  He starts off by displaying the ubiquitous error message “Cannot generate SSPI context”, and shows how this product can help track down the cause of this error by uploading error and configuration data to a cloud service for real-time troubleshooting, without having to pick up the phone for support.  Atlanta can even show the history of configuration changes, such as a change to the SQL Server service account.  The beta is available now at http://www.microsoftatlanta.com/.

SQL Azure

Next up was the cloud discussion.  Characteristics of cloud computing include:

  • Self managed
  • Elastic scale
  • Agile and familiar

CTP previews are coming up shortly for Web Admin, Reporting, and Data Sync.  The Windows Azure Marketplace DataMarket is available now, which blends both private data and public domain data.  Adam Wilson, PM for the Azure product, demonstrates SQL Azure using the Contoso bike company.  First, we see a report built against cloud data, which can be deployed to an ASP.net application as a report part.  Next we jump to a weather data provider, retrieving meteorological data to analyze bike sales versus weather trends.

SQL Server 2011 Denali

Kummert announces the release of the first CTP of SQL Server 2011 “Denali”, which is available for download today (a DVD will be provided to all SQLPASS attendees tomorrow).  He mentions an expanded BI reach – hopefully more details to come during tomorrow’s keynote.  Important mentions include significant changes to the SSIS platform, whereby some of the current standalone services will be server-managed.  Changes to data lineage were also mentioned as a teaser for the Wednesday keynote.

Project “Crescent” is mentioned as a web reporting and visualization tool.  Amir Netz, distinguished engineer on the SQL Server BI team, takes the stage to show data retrieval and visualizations in PowerPivot.  “In PowerPivot, working with 100 million rows is as easy as working with 100,000 rows”, says he.  He demonstrates the execution of several queries performing a table scan of in-memory data, to the tune of 2 billion reads per second, but the output shows that the queries return almost immediately.  His calculations indicate that he’s processing a trillion (yes, trillion) rows a minute.  Wow!  Netz’s presentation was by far the most interesting and entertaining so far, mixing incredible statistics with comedy – and a few glitches – to keep the crowd engaged.  We see some very cool data visualizations using movie ratings as a basis for reporting.  This actually makes reporting look fun.


For tomorrow’s keynote, we have the promise of more detailed discussion and demos of the changes for SSIS in Denali, which are expected to be the most sweeping upgrades yet of that product.