An Evaporating Cloud real-world example

The Evaporating Cloud is a Thinking Process developed by Eli Goldratt as part of his Theory of Constraints. The cloud helps you find “win-win” solutions to situations where, on the surface, it appears you have two totally opposite points of view. The cloud tells you that there really isn’t a choice involved at all; it’s only a matter of examining the assumptions that sit behind your points of view.

In this example I’m going to start with the objective and then discover the conflict, but I have also used the technique the other way around – starting with the conflict and working towards the objective – and it seems effective in both directions.

The Objective: Getting started

The objective in this example is simple. It is critical that once we engage with a new client we start building a long-term strategic partnership. So, starting with a blank sheet of paper, I will write our objective on the left.

[Diagram: the objective]

The Requirements: Logical cause and effect

The key to getting started with this thinking process is to build statements of the form “In order to … we must”. Taking our objective of a long term relationship, the first statement is obvious.

In order to build a long term strategic partnership we must produce desired outcomes long term.

Of course it is impossible to build a long-term relationship without getting through the short term, so our second requirement can be written in a similar way to the first. The green arrows show the logical cause-and-effect relationship between the statements.

[Diagram: the requirements]

Pre-requisites: Self-similar Iteration

So far so good. No conflict in these two statements. In order to achieve the desired outcome we need to focus on both the short and long term. So now we just repeat the process, except this time our requirement has become the first part of our statement.

In order to produce desired outcomes long term we must challenge short term decisions.

The second statement takes the same basic format. You can see the iterative (self-similar) nature of the thinking process and why it suits an agile mindset. Why don’t you try reading this cloud using the “In order to … we must …” format now?

[Diagram: the pre-requisites]

Conflict: Houston we have a problem

To achieve our original objective of building a long-term strategic partnership with our new client, we must challenge some of their short-term wants and needs. However, by doing this we jeopardise the objective because we don’t deliver the desired short-term outcomes, and if we fail at the first challenge, no matter how good our intentions, we won’t get to the long-term bit where things tend to get interesting.

The convention is to signify this conflict on the diagram with a double-headed arrow.

[Diagram: the conflict]

With the cloud drawn out in this visual way, the two parties can now begin a constructive dialogue about the assumptions that sit behind each of the green arrows. This can be as simple as asking “Why?”.

In this example, if we look at the bottom left-hand arrow, we can start setting a new client’s expectations about the kinds of outcomes they might see in the short term. If we can work together to build outcomes that deliver both short and long term, then we start to see a win-win situation emerge.

Sitting behind the top right-hand arrow, we can have a useful discussion about some of the challenging behaviour a client might see in the early days. Perhaps we can talk about techniques or processes for dealing with that conflict as it arises, using a tool like this one.

Toolkit: Pocket Knife

I like to think of the Evaporating Cloud as the pocket knife of the wider set of tools outlined in Goldratt’s Thinking Processes. It is a fast and effective way to get to the heart of almost any problem or decision, and for those who sometimes fear conflict it offers a way to begin a dialogue and avoid a heated debate.

I’m an advocate of using pencil and paper: it is much easier to refine and iterate quickly, and I haven’t yet done a cloud where I got the wording right immediately. I’m sure refinement is all part of the process. For fans of sticky notes and permanent markers it also works well in that format, but be prepared to screw up a few sticky notes along the way.

Cookie law – how does it affect Sitecore?

The recent cookie law in the EU (http://www.cookielaw.org/) introduces a challenge for sites running on Sitecore if the rules are interpreted to mean that all cookies should be blocked.

Throughout the lifecycle of a Sitecore app, the CMS introduces cookies into the user’s browser – some are expected, e.g. login/auth cookies, while others relate to environment considerations, e.g. language.


[Image: default Sitecore cookies]

The image above shows the cookies you would expect from an out-of-the-box installation without DMS or OMS running. OMS and DMS use cookies for persisting user information – this is stored in the ‘SC_ANALYTICS_GLOBAL_COOKIE’ and ‘SC_ANALYTICS_SESSION_COOKIE’ cookies, both key for Sitecore Analytics to function correctly.

The use of session state can be configured at application or page level (http://support.microsoft.com/kb/306996). One thing to note: if you are implementing functionality which relies on session information, e.g. shopping carts, removing the session cookie may be impractical. The CMS relies on session state working – see the information below about splitting the authoring and presentation servers.
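For reference, here is a sketch of the two levels described in that KB article – these are standard ASP.NET settings, not anything Sitecore-specific:

<!-- web.config: turn session state off for the whole application -->
<system.web>
  <sessionState mode="Off" />
</system.web>

<%-- Or per page, via the @ Page directive: --%>
<%@ Page Language="C#" EnableSessionState="False" %>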

Sitecore introduces a cookie named after the current site and suffixed with #lang. This is used for storing the user’s language selection and is added to the browser’s cookies during the httpRequestBegin pipeline – initiated by the languageResolver.

For more information on Sitecore Sites see http://sdn.sitecore.net/Articles/Administration/Configuring%20Multiple%20Sites.aspx

To ensure the language cookie isn’t added, you can remove the languageResolver from the httpRequestBegin pipeline (see the config sketch after the list below). NOTE: removing the languageResolver is risky! Only do it if you are happy with the following:

  • You have one language in use on the site in question and this language is defined on the <site> entry in the <sites> config
  • You don’t need to use Preview or the Page Editor within the CMS. If you do, you need to keep the languageResolver, hence you need different configs for the CMS and presentation sites – this can be achieved by separating the CMS out into a different application from the front-end site.
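As a sketch, you would comment the processor out of the httpRequestBegin pipeline in web.config. The type name below is what a Sitecore 6.x install typically uses, so check it against your own version:

<pipelines>
  <httpRequestBegin>
    <!-- ... other processors ... -->
    <!-- Removed to stop the site-name#lang cookie being set.
         Only safe under the conditions listed above. -->
    <!--
    <processor type="Sitecore.Pipelines.HttpRequest.LanguageResolver, Sitecore.Kernel" />
    -->
  </httpRequestBegin>
</pipelines>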

In my opinion, removing the language cookie and session cookie is pretty heavy-handed. In the majority of sites both would be mission critical – that said, if the law is interpreted in certain ways, removing them may be the only option.

What’s wrong with our Daily Scrum?


We all appreciate that Stand-ups or Daily Scrums are integral for teams to synchronize their work and highlight impediments for the day, but are yours as effective as they could be?
Classically, a team would stand around the team board and answer the standard 3 Scrum questions in turn:

  • What has been accomplished since the last meeting?
  • What will be done before the next meeting?
  • What obstacles are in the way?

The Scrum Guide mentions:

‘Every day, the Development Team should be able to explain to the Product Owner and Scrum Master how it intends to work together as a self-organizing team to accomplish the goal and create the anticipated Increment in the remainder of the Sprint.’

Unfortunately, we were finding that some team members couldn’t do this. They could mention what they were working on, but not how they were working together to complete the increment of work. We were hearing status updates like: ‘Yesterday I worked on database optimization; I haven’t finished it yet, so I’ll be working on it today. No obstacles’. This may be entirely true, but there’s no value in an update like this. If your Daily Scrums only contain this kind of information, you might as well not do them at all. We wanted the team to get real value from the Daily Scrum rather than just status updates. While I agree that, interpreted correctly, the 3 questions in the Scrum Guide can give the team the information they need, I believe some teams need an element of coaching about getting the right information from the stand-up.

We’ve been trialling a new format that seems to be returning positive results and encourages teams to get value out of Daily Scrums. Let me explain the team’s thought process:

Stage 1 – Team Goals

The Agenda

  • What has been accomplished since the last meeting?
  • What will be done before the next meeting?
  • What obstacles are in the way?
  • What are our team goals for today?
Why we did it
We found that the team struggled with long planning horizons around Sprint Goals. After discussion with the team we introduced daily goals, to great effect. At the end of each Daily Scrum, the team would choose goals for the day and write them on a post-it note on the team board. We found that decreasing the forecast horizon led to daily success celebrations and gave the team focus on the major objective(s) for that day.

Stage 2 – Moving goals

The Agenda

  • What has been accomplished since the last meeting?
  • What are our team goals for today?
  • What will be done before the next meeting?
  • What obstacles are in the way?
Why we did it
After the initial success of the goals, we found that the team’s work for the day was changing after they had decided on the goals for the day. We briefly considered moving the goals to the beginning, but then we would have a different issue: generating goals for the day before the team had inspected the work from the previous day. It seemed the sensible place to define goals for the day was after reviewing how yesterday went.

Stage 3 – Separate the team discussion from the round-robin

The Agenda

Everybody answers in turn:

  • What has been accomplished since the last meeting?

Team discussion:

  • What are our team goals for today?

Everybody answers in turn:

  • What will be done before the next meeting?
  • What obstacles are in the way?
Why we did it
Although it seemed the sensible place to define goals for the day was after reviewing how yesterday went, this introduced a new problem. During the round-robin other team members would interject on the ‘team goals’ question and goals would get continuously created and redefined during the Daily Scrum as each team member mentioned what had been accomplished since the last meeting. This was very time consuming so we decided to split the questions into 2 round-robins and a time-boxed team discussion.

Stage 4 – Adding in a new question

The Agenda

Everybody answers in turn:

  • What has been accomplished since the last meeting?

Team discussion

  • Did we meet the goals we set yesterday?
  • What are our team goals for today?

Everybody answers in turn:

  • What will be done before the next meeting?
  • What obstacles are in the way?
Why we did it
This worked really well, as the whole team took it in turns to talk about what they had achieved the previous day. This naturally resulted in a conversation about whether they had met their goals. Given the usefulness of the ‘why’ when the team didn’t meet the previous day’s goals, we decided to add in a reminder question to think about whether we actually met them or not. We found this had the side-effect of increasing ownership of the goals, as they were being reviewed by the team every day.

Stage 5 – Expanding to include Sprint Impediments.

The Agenda

Everybody answers in turn:

  • What has been accomplished since the last meeting?

Team discussion

  • Did we meet the goals we set yesterday?
  • Are we confident that we’ll clear the Sprint Backlog by the end of the Sprint?
  • What are our team goals for today?

Everybody answers in turn:

  • What will be done before the next meeting?
  • What obstacles are in the way?
Why we did it
This worked really well. However, we were finding that impediments were being raised around the day’s work rather than the Sprint’s work. We tried adding in the question ‘Are we confident that we’ll clear the Sprint Backlog by the end of the Sprint?’. Anything less than 100% confidence on this resulted in a discussion about what was in the way. This highlighted new impediments that we didn’t even know existed in the context of the Sprint rather than the day’s work. It also had the happy side-effect of getting the team to look at the burndown and, if it was off-track, think about why. We chose to place the question just before defining goals for the day, as the goals may change as a result of answering it.

Stage 6 – Terminology changes

The Agenda

Everybody answers in turn:

  • How did I contribute to the goal(s) we set yesterday?

Team discussion

  • Did we meet the goal(s) we set yesterday?
  • Are we confident that we’ll clear the Sprint Backlog by the end of the Sprint?
  • What should be the goal(s) for today?

Everybody answers in turn:

  • How am I going to contribute to today’s goal(s)?
  • What could stop me successfully meeting today’s goal(s)?
Why we did it
We now had a process that was working really well. The team had a buzz around them, celebrating small successes and we were getting visibility across the full Sprint. We changed some of the terminology of the questions to better describe the information that the team felt we needed. This helped the team put it into context.

Stage 7 – Define timebox

We decided to timebox the full meeting to 20 minutes instead of the usual 15. The team agreed that the extra 5 minutes was worth the extra value that a new agenda may bring.

Is this Scrum?

I know a lot of ScrumMasters will be looking at this proposed template thinking ‘This isn’t Scrum, it’ll never work’, but just hear me out for a minute. The Scrum Guide mentions the following objectives of the Daily Scrum:

  • Inspect the work since the last daily scrum
  • Assess how progress is trending toward completing the work in the Sprint Backlog
  • Forecast the work that could be done before the next Daily Scrum.
  • Synchronize activities and create a plan for the next 24 hours

Inspect the work since the last daily scrum

We cover this mainly in the initial round-robin. We found that team members could easily remember the objectives for the previous day (they were written on the wall!), which reminded them what work they did to meet the goal.

Assess how progress is trending toward completing the work in the Sprint Backlog

The first and second parts of the team discussion concentrate on assessing progress on both the daily and Sprint horizons. This has the advantage of progress being assessed by the team rather than by individuals.

Forecast the work that could be done before the next Daily Scrum.

The last part of the team discussion covers forecasting the work for the next day. As it’s a team discussion, what to tackle becomes a joint decision. This is especially useful if you have junior members of the team who may need some guidance and advice on what work to approach next.

Synchronize activities and create a plan for the next 24 hours

This will probably start in the team discussion but is then reflected on in the last round-robin. This confirms to the team what each individual is going to do to meet the goals and helps highlight any further impediments.

Summary

So looking at the above objectives, we still meet all the intended outcomes of the meeting. In fact, I believe we actually get better information using a format like this combined with Daily Goals than we do with a 3 question round-robin. We tried it, I believe it works. I’d love to hear feedback from other teams.

Focus on being productive not busy

We recently looked at our task board and it looked something like this:

[Image: our task board]

It was pretty easy to see where the bottleneck was in this process – a number of items in the UAT column had been assigned to the SM/PO and were awaiting new requirements and discussions with the client to clear them through. They simply hadn’t yet had time to contact the client about these issues.

So the question we asked ourselves was: why were the Scrum Master and the PO the bottleneck? The dev-to-PO/SM ratio was small; in any normal situation the PO and SM would have ample time to do everything necessary. Were they taking on too much?

(At this point it might be worth mentioning that I am the Scrum Master on this team!)

We discussed the issues and felt that this was a reflection of some wider issues with how I and the PO were working. The first step was to record what we were actually doing during our working day, so we decided to write our own timesheets for a few days and review the time we were spending on activities. These were the results of my timesheet:

7.30 Review email
8.00 Check bugtracker
8.30 Prioritise UAT issues
9.00 Updating support tasks on board and CRM/burndown
9.30 Reviewing invoice figures
10.00 Discussion with Directors
10.30 Daily stand-up
11.00 Chasing UAT feedback from client
11.30 Email
12.00 Clearing to-do-list items (impediments/queries)
12.30 Lunch
13.00 Email
13.30 Clearing to-do-list items (impediments/queries)
14.00 UAT queries with client
14.30 Live site issue – team and client discussions
15.00 Directors around the board/updating stats
15.30 A3 thinking – SM and PO discussing how to improve our working efficiency
16.00 Email
16.30 Filling out timesheet
17.00 Call with client
17.30 Reviewing CRM (our internal accounts system)
18.00 Email

We analysed this and found a number of steps that could be taken:

  • The team could take on more responsibility for talking to the client – reviewing UAT feedback with the client as a team. We actually got the client on site to work alongside the team. This worked a treat.
  • Triaging UAT feedback – we identified that I was spending a lot of my time replicating bugs raised by the client. The tester in our team had capacity to do this triaging, and the only real reason for me doing it myself was a hark back to the days before we did Scrum. Our tester was happy to take this on.
  • Treating email differently – one thing I found whilst doing this task was that I was being constantly interrupted by my email. Although I was aware that multi-tasking isn’t effective and that we should avoid it, I wasn’t proactively doing so. Emails would pop up on my screen and I would want to tackle each one as soon as possible, jumping from one email to another. Being conscious of this issue means I can make sure I close down one email at a time, and tell myself off if I find a number of open emails on my desktop at once.

Now I wouldn’t want you to think this timesheet was a completely accurate representation of what I actually participated in. As a Scrum Master it is very hard to time-box tasks; most of the day is filled with conversations and team interactions, and there is nothing wrong with this. But what I did find was that having to write down my activities and allocate time against them made me really think about how I was working.

I would suggest that this is a very good tool for anybody looking to improve how they are working on a day-to-day basis. It focuses the mind and, importantly, increases your awareness of the issues. I will certainly be carrying out this exercise again once we have implemented these changes. My aim is to have a decent amount of time each day/week to review our working practices and make improvements.

Dynamic Placeholder Keys in Sitecore

Sitecore have spent a long time designing a CMS which provides great flexibility for both developers and content editors. If you have designed your renderings and placeholders in an intelligent way, a seemingly complex page can become a playground for content editors. Despite Sitecore’s huge improvements over the years, one fundamental problem still seems to remain – or should I say, I have not yet found a good out-of-the-box solution! Perhaps Sitecore themselves can prove me wrong.

Consider a simple page with a container rendering (5050Container.ascx for example) which splits the page in two:

<%@ Control Language="c#" AutoEventWireup="true" %>
<div>
  <sc:Placeholder ID="Placeholder1" runat="server" Key="halfleft" />
</div>
<div>
  <sc:Placeholder ID="Placeholder2" runat="server" Key="halfright" />
</div>

We could put controls into the left and right placeholders via the Page Editor or indeed via Layout Details.

Control A’s placeholder path would be /main/halfleft.
Control B’s placeholder path would be /main/halfright.

Now consider that we would like to add a second row using the same container – we hit a problem when we try to add controls into the placeholders of this second container, because the placeholder paths resolve to the first container.

One solution is to create a second container (e.g. 5050Container2.ascx) with different keys:

<%@ Control Language="c#" AutoEventWireup="true" %>
<div>
  <sc:Placeholder ID="Placeholder1" runat="server" Key="halfleft2" />
</div>
<div>
  <sc:Placeholder ID="Placeholder2" runat="server" Key="halfright2" />
</div>

This is quite untidy – if content editors want to add many rows, where do you stop? Sitecore could become full of multiple versions of the same container rendering.

A better approach would involve just one rendering which we would want repeated – again, for example, our 5050Container.ascx. We would want to automatically suffix placeholder keys each time a repeating container is added to the page. For example, if we had two 5050Containers on the page, the keys for the first container would dynamically be generated as halfleft1, halfright1, and for the second container halfleft2, halfright2, and so on.

Before I started implementing this approach I had a quick browse on Google and came across a very good article by Nick (@techphoria414) http://www.techphoria414.com/Blog/2011/August/Dynamic_Placeholder_Keys_Prototype. Nick had a similar idea which I’ve used to implement my approach.

The concept is quite simple: create our own implementation of the sc:Placeholder control and let it decide whether the container is repeated and, if so, change the key.

Containers need to be repeated if:

  • They sit in the same placeholder.
  • They are the same control.
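To make the concept concrete, here is a minimal sketch of my own – it is not the prototype code referenced below, and it uses a cruder rule (it simply numbers repeated base keys per request rather than checking the two conditions above), but it shows the overall shape of the approach:

using System;
using System.Collections.Generic;
using System.Web;
using Sitecore.Web.UI.WebControls;

namespace Tcuk.Cms
{
    // Sketch only: number each base key per request so repeated
    // containers end up with unique keys (halfleft1, halfleft2, ...).
    public class DynamicKeyPlaceholder : Placeholder
    {
        protected override void OnInit(EventArgs e)
        {
            // Track how many times each base key has been seen in this request.
            var counts = HttpContext.Current.Items["DynamicKeyCounts"]
                as Dictionary<string, int>;
            if (counts == null)
            {
                counts = new Dictionary<string, int>();
                HttpContext.Current.Items["DynamicKeyCounts"] = counts;
            }

            int count;
            counts.TryGetValue(Key, out count);
            counts[Key] = ++count;

            // Suffix the key, e.g. "halfleft" becomes "halfleft1".
            Key = Key + count;

            base.OnInit(e);
        }
    }
}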

I have pasted the code at the bottom of this post, but please bear in mind it is a prototype. Add the class to your solution and add its namespace/assembly to the <controls> section of your web.config:

<add tagPrefix="tcuk" namespace="Tcuk.Cms" assembly="Tcuk.Cms" />

Then it’s the simple task of replacing the sc:Placeholders on containers you want to repeat, e.g.:

<%@ Control Language="c#" AutoEventWireup="true" %>
<div>
  <tcuk:DynamicKeyPlaceholder ID="Placeholder1" runat="server" Key="halfleft" />
</div>

You will notice in the Page Editor that if you add multiple repeating containers, the keys are dynamically updated, and this is reflected in the Layout Details.


WDD: What developers discuss

Once a week here at True Clarity our developers get together to share and discuss interesting things they’ve been doing in the past sprint. At the end of each meeting someone writes up some of the more useful things that were discussed and sends it round to the rest of the company. This usually prompts some interesting discussion and serves as an excellent method of knowledge sharing.

Sitting at home thinking about what I could blog about, I decided that these meetings provide regular content for interesting blog posts, and that opening the discussions up to the wider development community might be a good way of sharing knowledge even further.

Hopefully my colleagues will also think this is a good idea and turn this into a regular blog post – sharing the most interesting thing we discussed each week. I’m going to set the ball rolling with a blog about the responsive design techniques that were discussed a few weeks back. I have already found some of this stuff really useful and envisage it becoming even more so as we are asked to produce websites that are user-friendly across an ever-increasing range of browsers and devices.

So without further ado… here is the first (of many, hopefully) WDD blog from True Clarity.

Responsive Design: Conditional CSS and jQuery producing fluid, responsive designs.

As more and more people access websites through mobile devices, many clients now include these devices on the core list of browsers for us to develop for. It is already a challenge to get a site looking consistent across all the browsers on traditional devices; when we throw browsers on mobile and tablet devices into the mix, we often get a headache.

More important perhaps than cross browser consistency is the usability of websites on mobile and tablet devices. Touch screen controls and smaller screens can render a website almost unusable if the design is not suited to these devices. One approach would be to have a consistent design that works well on all platforms, but this might compromise design on traditional computers just to accommodate mobiles and tablets.

Fortunately, CSS3 has added flexibility to the media type introduced in CSS2.1 which, when combined with some JavaScript techniques, allows us to identify the device and/or the screen resolution the user is viewing the site on. Once we can identify this, we can apply whatever style rules we like to make our designs flexible and responsive.

So what things can we identify and how can we do that? The rest of this blog will cover some of the most useful techniques available before providing a list of further reading.

Specifying different style rules for small screens

CSS 2.1 introduced the media type to allow conditional rules depending upon the output medium. This means that to specify a stylesheet for a printable version of a site, you simply add a media attribute to a link tag to include a print-only stylesheet:

<link rel="stylesheet" type="text/css" href="print.css" media="print" />

CSS3 takes this concept further to allow specific device and screen resolutions to be targeted. To add a stylesheet specifically for smaller screen resolutions we add a condition using the max-width property:

<link rel="stylesheet" media="screen and (max-width: 600px)" href="smallScreen.css" />

Adding this rule would mean that reducing the size of the browser window to less than 600px would apply styles found in smallScreen.css. These conditions can even be combined to say “use this stylesheet when screen size is between x and y”:

<link rel="stylesheet" media="screen and (min-width: 480px) and (max-width: 700px)" href="mediumScreen.css" />

The examples so far have shown how stylesheets can be included conditionally but we can also include media queries within our CSS files themselves.

@media all and (max-width: 600px) {
  .class1 { color: #000; }
  .class2 { background-color: #666; }
}

A great example of some of these techniques in practice can be found at cxpartners.co.uk. If you resize the page in the browser window (and look at the source by inspecting an element) you will see the design change to adapt to the new resolution. Looking at the CSS styles applied to an element, you will be able to see the media queries being activated to show and hide different content. Simple application of these rules has created a version of the site that is much better suited to small screen resolutions.

We can even target specific devices by using the min-device-width property – this would stop the rule being applied for a desktop user resizing their browser window.
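For instance, here is a sketch using the related max-device-width feature – this stylesheet only applies when the device’s own screen is 480px or narrower, so a desktop browser resized to 480px won’t match (the phone.css filename is hypothetical):

<link rel="stylesheet" media="screen and (max-device-width: 480px)" href="phone.css" />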

Media queries are a CSS3 feature that is not supported in versions of IE below 9. Fortunately, there is a JavaScript library that will make media queries work in browsers that don’t support CSS3. You can download this library from here.

JavaScript solutions:

As always, there is more than one approach to implementing responsive designs. JavaScript or jQuery can also be used to detect browser width and change stylesheets accordingly. However, it also gives us some other techniques that can be used to target specific browsers and devices.

The jQuery.support object can be used to test for support of design techniques that are missing in certain browsers. jQuery.support.opacity, for example, will return false for IE 6–8. Using jQuery we could then adapt our designs accordingly:
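A minimal sketch – the .overlay selector and the no-opacity class are hypothetical names for illustration:

$(document).ready(function () {
    // jQuery.support.opacity is false in IE 6-8, so apply a fallback class
    // and style the solid-background alternative in CSS.
    if (!jQuery.support.opacity) {
        $('.overlay').addClass('no-opacity');
    }
});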

The JavaScript below accesses the navigator object and looks for “iPad” within the platform string. This will be true if the user is accessing the site through an iPad:

function isiPad() {
    return (navigator.platform.indexOf("iPad") != -1);
}

We can now either redirect an iPad user to a specific mobile/tablet version of the site or more simply, use jQuery to add a class to the <body> tag that is specific to the iPad.

$(document).ready(function () {
    if (isiPad()) {
        $('body').addClass('iPad');
    }
});

Summary

We have covered ways of identifying devices and screen resolutions and placing conditions that will allow different behaviours depending on how a user is accessing your site. These are some of the techniques we can use to apply responsive designs. What we have not covered are some of the principles of designing for small screens and touchscreen devices. Scott Hanselman has written several blog posts about how he has used responsive design on his site:

http://www.hanselman.com/blog/EasyStepsToAMobilefriendlyResponsiveDesignWithAnEmbeddedYouTubeVideoAndAFluidResize.aspx
http://www.hanselman.com/blog/CreateAGreatMobileExperienceForYourWebsiteTodayPlease.aspx
http://www.hanselman.com/blog/SupportingHighdpiPixeldenseRetinaDisplaysLikeIPhonesOrTheIPad3WithCSSOrIMG.aspx

More examples of media query properties and usages such as detecting landscape page orientation can be found here and here. This article has a great overview of Responsive Design principles and further techniques related to resizing of images.

These techniques allow us as developers to implement the design best suited to the device the site is being viewed on. Hopefully, unresponsive and unsuitable designs will become a thing of the past.

It’s just like ‘Grand Designs’

I was sat watching ‘Grand Designs’ with my wife a while ago (it’s not a programme I would normally watch, but I was looking to earn some brownie points at the time) and I noticed just how agile the couple were in terms of creating their perfect home. For those who don’t know, ‘Grand Designs’ is a programme where couples design and build their dream homes (some of which are truly amazing). The couple on this particular episode had some basic requirements for their new house in terms of size and location, but that was about it. They had bought some land in a nice location, received the designs from the architect and started building. This seemed to surprise the presenter of the programme, who was astounded that the couple hadn’t worked out upfront all of the tiny details of how they wanted their dream house to look and what it would contain. He made it clear to the couple that he thought they were mad to start the actual building of the house, as they would surely end up wasting loads of time and money.

What actually happened astonished both him and my wife (who had agreed with the presenter’s views), but it will come as no surprise to those of you who use agile on a day-to-day basis. As the house went up, the couple worked through each stage of the process, always staying just one step ahead of the builders. They made sure that they had narrowed down the requirements for the next part of the build just in time for when the builders got there. They were always on hand, ready to provide guidance or offer feedback on certain aspects of the build. The build was completed on schedule (almost unheard of on this programme, it would seem) and the house met the couple’s requirements and expectations.

This may not have been software development, but we can still see the agile process at work here, with a stakeholder/Product Owner prioritising their backlog in terms of importance and providing continuous feedback. They only worried about the small details when those details were sufficiently close to being worked on by the builders, as opposed to getting all the specifications laid out at the beginning.

It just goes to show that provided that you apply it properly, the agile methodology comes up trumps every time, even outside of software development.

All I need to do now is convince my wife to live in a house that looks like a castle and we can get started on building our ‘Grand Design’!

Kid’s stuff

There’s no doubt that asking the right sort of questions [not just in sprint retrospectives] can elicit the kind of information which empowers teams to reflect on what they’ve just said, what has happened and then, ultimately, lead to changes in behaviour, processes, etc.

So why is it that children find it so natural to do just this, i.e. ask the kind of questions that really force you to think about what it is you’ve just said? Well, it’s obvious, isn’t it: they’re simple beings and so can only come up with simple questions, right? If you haven’t got kids yourself then maybe it’s harder to remember the kind of things they say, or the things you used to say when you were a young ’un. Here’s an example:

Son (who’s 4):  “So why was the rabbit afraid of the dark?”

Me:  “Er, well, because he was on his own and wanted to get home.”

(I should probably say at this stage that we were reading a story)

Son:  “Why?”

Me:  “Well, because he had lost his sister and she was looking for him and at night, when it’s dark, lots of animals get scared.”

Son:  “Why?”

Me:  “Well, they’re like people;  you sometimes get scared when it’s dark don’t you?”

Son:  “Yes.  Why?”

I won’t bore you with the full conversation but it certainly made me think about how powerful simple questions can be.

The “five whys” technique in retrospective meetings is just one of many tried and tested techniques for generating insights into the data gathered from looking at what happened in the sprint. In my short career as a ScrumMaster I haven’t yet been on the receiving end of the five whys questioning, so it was interesting to feel the effects of a technique I had read about and tried out several times on others. OK, it may have been a four-year-old, but it still worked!

Kids don’t have the inhibitions we all pick up along the way and can think and act in a very pure way without even trying. Our own self-awareness, confidence and experiences just get in the way sometimes, both as the person asking the questions and as the person answering.

It’s not very practical to suggest that we all start acting like a four-year-old but we could all learn something from them!

Dangers of Parameter Sniffing in MSSQL

Here we’ll consider quite a specific area of SQL optimisation, that of stored procedures that accept a number of parameters. While there’s a wealth of information available online about optimisation, in this situation certain factors come into play to make things more interesting.

So you’ve found that there’s a problem in your SQL solution – its CPU usage is high or spiking, and queries are sometimes taking a long time to complete. You’ve run SQL Server Profiler to look at the Duration of “SP:Completed” events, and sure enough some of the execution times are big.
But sometimes they’re small.
And it varies for calls to the same stored procedure… perhaps by a factor of hundreds!
What’s going on?!

It’s possible that SQL Server’s parameter sniffing is causing your stored procedures to run slowly. It’s also possible that, in attempting to fix things, you’ll do some parameter sniffing of your own and make things worse!

So what is parameter sniffing?

First, if you don’t already, you need to understand a little about execution plans. These are created and used internally by SQL Server, and essentially tell SQL Server how to execute a specific query (e.g. which index to use at a certain point, whether to use an index seek or a full table scan, etc.).
There are two crucial points that affect us:

  • The execution plan is typically created once and then re-used for subsequent calls to the same query.
  • When optimising the execution plan, the parameters that are passed in for that particular call are taken into account. This is parameter sniffing.

So parameter sniffing isn’t necessarily a bad thing – SQL Server does it with all the best intentions, and in most circumstances it helps to execute queries more efficiently.
But by now you’ve probably realised how problems can arise:

SQL Server optimises the execution plan for a stored procedure call, and that call executes nice and fast. However, a new call to the same stored procedure – but with different parameters – will re-use the execution plan from before. But now, the old execution plan could be de-optimised for the new parameters (sometimes very significantly), and the second call executes very slowly.

Note that there’s not necessarily anything more taxing about the parameters in the second call. You’ll likely find that if the calls were made in reverse order, it would still be the first one executed that performs well, and the second one executed that performs badly.
This is why the situation can be very unpredictable – chance differences in which parameters are used to build the execution plan can result in huge differences in execution time for subsequent queries.

What can we do about it?

There are a few different approaches that can be taken, and which one(s) to use will need to be considered for your specific situation; a T-SQL sketch of the last three options follows the list.

  • Split your stored procedure into many, so that each can be more specialised.
    This will reduce the number of parameters you need to pass in, and reduce the variance in the execution paths of your stored procedure. Execution plans will still be created, but will be less likely to perform badly for different parameter sets.
  • Tell SQL Server exactly how to optimise the execution plans.
    If you have a good understanding of both the nature of your data and the variety of parameters that your stored procedure will be called with (and only if!), you can use the query hint OPTIMIZE FOR. You can therefore tailor the execution plan to suit an ‘average’ parameter set. Note that this is only available for SQL 2005 onwards.
  • Tell SQL Server to create a new execution plan every time.
    Use WITH RECOMPILE on the call to the stored procedure or on its definition. This will cause execution plans to be created for every call, and so never re-used. You can expect some extra overhead for the plan creation, but it’s unlikely to be significant compared to the duration of the queries themselves.
    If using SQL 2005 onwards, you can also use the RECOMPILE query hint to instruct SQL Server to recompile just a single statement within your stored procedure. Used carefully, this can yield the best results, as it avoids re-use of troublesome parts of the execution plan while allowing other parts to be re-used effectively.
  • Prevent SQL Server from successfully sniffing your parameters.
    At the start of your stored procedure, declare a variable for each parameter, set the values directly from the parameters, and then use the variables instead of the parameters within your query. This sidesteps SQL Server’s parameter sniffing and forces it to create execution plans that cater for unknown parameters, which generally provides good results.
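As a sketch of those last three options in one place – dbo.Orders, the procedure name and the literal 42 are hypothetical stand-ins for your own schema and a representative value:

CREATE PROCEDURE dbo.GetOrdersForCustomer
    @CustomerId INT
-- WITH RECOMPILE   -- uncomment to build a fresh plan on every call
AS
BEGIN
    -- Sidestep sniffing: copy the parameter into a local variable so the
    -- plan is built for an "unknown" value rather than the sniffed one.
    DECLARE @LocalId INT;
    SET @LocalId = @CustomerId;

    SELECT OrderId, OrderDate
    FROM dbo.Orders
    WHERE CustomerId = @LocalId;
    -- Alternatively (SQL 2005 onwards), recompile just this statement:
    --   ... WHERE CustomerId = @CustomerId OPTION (RECOMPILE);
    -- or optimise the plan for a representative value:
    --   ... WHERE CustomerId = @CustomerId OPTION (OPTIMIZE FOR (@CustomerId = 42));
END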

You’ll probably want to try a couple of different techniques to see which works best in your situation.

Is that the end of the story?

In addition to increasing the stability of your database by avoiding problems caused by parameter sniffing, it’s always a good idea to reduce the execution time of stored procedures across the board where possible. There’s a lot of literature available about this so I won’t go into details, but I will leave you with a piece of advice for tuning parameterised stored procedures:

Make sure you test your changes with representative data and representative parameters.

Use SQL Server Profiler if necessary to monitor the variety of parameters being used on your live environment.
If you fail to take the variance of parameters into account, you’ll essentially be doing your own parameter sniffing – that is, optimising (in this case the stored procedure itself rather than the execution plan) for a specific parameter set, and potentially de-optimising for others.

Query hints and keywords can stop SQL Server from optimising badly, but no such measures exist for a poorly-informed database developer…

Some useful articles

http://www.codeproject.com/Articles/21371/SQL-Server-Profiler-Step-by-Step

http://pratchev.blogspot.co.uk/2007/08/parameter-sniffing.html

http://www.codeproject.com/Articles/9990/SQL-Tuning-Tutorial-Understanding-a-Database-Execu

Setting up Mercurial 2.2 on IIS7

Jeremy Skinner has a good guide to setting up Mercurial 1.4/1.5 on IIS7; there are a few updates to the process for Mercurial 2.2, which are detailed below.

These instructions are based on a Windows Server 2008 R2 Datacenter install, with IIS7 already installed and set up. The server is 64-bit, but I’m using the 32-bit versions of all the installers.

  1. Install Python 2.6.6, using the MSI and default settings.
  2. Install Mercurial 2.2.1 (32-bit exe).
  3. Install Mercurial 2.2.1 (32-bit py2.6). Ensure you are using the py2.6 version, marked as “recommended for hgweb setups”. This installs the Mercurial source as Python scripts.
  4. Ensure CGI and Basic Authentication are installed for IIS. You can check this in Server Manager. If either is listed as “Not Installed”, click “Add Role Services”, then “Next”, then “Install” to complete the installation.

    [Screenshot: IIS-Roles.jpg]
  5. Create a new site in IIS. This could be a virtual directory, but I prefer to have a separate site running off an arbitrary port (8989 in this case). Remember to add any additional firewall rules.
  6. As the site will be using Basic Authentication, it’s also a good idea to force it to run under SSL, even if only with a self-signed certificate.
  7. Change the authentication options for the site to disable “Anonymous Authentication” and enable “Basic Authentication”. This will require all visitors to login before being able to use repositories.
  8. Enable Python for IIS. Select “Handler Mappings” for your site, then “Add Script Map” and enter the following values:
    Request path: *.cgi
    Executable: C:\Python26\python.exe -u "%s"
    Name: Python

    Click OK. When prompted, select “Yes”.

    [Screenshot: HandlerMappings.jpg]

  9. Copy the “Templates” folder, and the “library.zip” file from “C:\Program Files (x86)\Mercurial” to the website root.
  10. Get a copy of the hgweb.cgi file from the Mercurial Repository, and copy it to the website root.
  11. Create a new file named “hgweb.config” and leave it empty for now. Update the config variable in “hgweb.cgi” to point to this new file, for example: config = "C:\Mercurial\Website\hgweb.config"
  12. Visit https://your-site.co.uk:8989/hgweb.cgi. You should be prompted to login (with Windows credentials from the server) and then see a page titled “Mercurial Repositories” with no repositories listed.
  13. Make a folder to contain your repositories, and make a new repository using the hg command line: hg init C:\Mercurial\Repositories\test
  14. Edit the “hgweb.config” file you created earlier, entering the following content:
    [collections]
    C:\Mercurial\Repositories = C:\Mercurial\Repositories
    [web]
    allow_push = *
    push_ssl = true
    N.B. The collections file path is case sensitive
  15. Reload the page; you should now see your “test” repository listed

The repository is now ready to use! As a self-signed SSL certificate is used, you will need to ensure you enable “insecure” mode when using hg/TortoiseHg, as below.
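For example, a sketch of a clone over the self-signed connection – substitute your own host, port and repository name, following the hgweb.cgi path from step 12:

hg clone --insecure https://your-site.co.uk:8989/hgweb.cgi/test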