Saturday, December 21, 2013

Will we be faster if we have an extra developer?

This week I was asked a really difficult question: Will we be ready sooner if we get a new developer?



Before exploring this any further, I would like to mention that the question itself says something about the asker. Instead of assuming that adding a new developer will of course make the team faster, the asker was aware that in software teams especially this isn't always so. There's even a risk of the opposite.

Nevertheless, we had to make the decision because we are in the middle of a project that is running out of time. The cost of delay is quite big and we have already used many other tools for meeting the deadline, including reducing the scope heavily. Before I tell you what our decision was, let me explore the advice I got from my precious friends on precious Twitter.

1. Ask first the opposite question


First of all, Torbjörn Gyllebring gave this excellent advice (read the second sentence):




I think this is a great starting point. If your team would actually be faster after making it smaller, then it obviously won't be faster with an extra developer. I've been in a team that at one stage had 13 developers (incl. a tester), and losing some of them didn't make us any slower. And it wasn't about getting rid of the bad developers. Every team has an optimal size, and it isn't "the bigger, the better".

Anyway, getting back to our current project. This advice didn’t help us because we already had a small team and removing someone would probably slow us down.

2. How well is your problem understood?


Now read the first sentence of the previous tweet. It might be that the structure of the problem is such that scaling is unfeasible. This resonates really well with me, since a couple of weeks earlier the answer would definitely have been "No". Our biggest challenge in the beginning of the project was to understand what we were actually about to do and especially how we should do it. The domain is quite easy to understand at a high level, but the details are not. Besides, we are heavily dependent on underlying systems, and the integration flows aren't really obvious.

However, by the time the question was asked, we had already solved the biggest problems in this area, meaning that we knew what kind of code to write. So this comment didn't give us the answer either.

3. What is your current codebase like?


The next good advice was related to the quality of the codebase:



I think this too is valuable advice, because the quality of the codebase really affects how fast a new team member can become productive. A crappy codebase can mean that the developer needs months in order to become productive. In a good codebase that can happen within days.

Luckily our codebase is really young. Or to be precise, we are extending an existing codebase, but the features we create are completely new and I would say that at least 95% of our code doesn't touch the existing one. So this didn't prevent us from taking an extra developer either.

4. How good is the potential developer?


One piece of advice I got on Twitter was related to the skills of the new developer. It is actually pretty obvious that skills matter a lot. If the new developer just started coding last year, she probably won't be very valuable if the other team members are much more experienced. In such a case the team probably requires a certain level of quality, and guarding it may eat up the precious time of the experienced developers.

But even if you were offered a senior developer, how could you really know her skills? You can read her fancy CV, but due to the critical deadline there's not much time to spend on interviewing or testing the developer's skills. What helps in this case is really simple: if you know the person beforehand, you don't have to spend your time on that. And now we are very close to our decision…

5. Can you have a developer that’s already familiar with the system?


One of the developers in our team suggested that we could borrow a developer from another team close to our project. I really liked the idea because, besides knowing his skills, we knew we wouldn't have to spend too much time teaching him. And that was our solution: we'll be faster if we can have this particular person. An hour later we were told that he would start the next day.

Formula for selecting the answer


If we collect all the advice together, I think we have a decent formula for deciding whether an extra developer will make the team faster:

1. Make the sanity check first - would removing one developer make you faster? If yes, then forget the new one.
2. Is your problem understood well enough? If not, don’t try to solve a wrong problem by getting a new developer.
3. What is your current codebase like? If you don’t have too much technical debt, you might be lucky.
4. How good is the potential developer? It also helps if you know the person beforehand.
5. Can you have a developer that’s already familiar with the system? If you can, your chances increase.

I think it’s important to notice that all of these questions are somehow related to time. Questions 3-5 in the sense that the worse the answers, the more time the new developer needs before she can make the team faster. Questions 1-2 in the sense that if you first solve some other problems, maybe later you can get the benefits of an extra developer.

What if I still cannot answer the question?


It’s easy to admit that the formula above isn’t very exact. Maybe I should rather use the word guidelines. Anyway, it might be that your problem domain is probably clear enough, but you are not sure. Or that the developer is probably good enough, but you are not sure whether she’s good enough taking into account the time you have. And so on. In that case, assuming that your cost of delay is big enough - meaning that wasting money on an extra developer isn’t an issue as long as she won’t make the team slower - you probably just have to do what Vasco Duarte suggested:



There are things that you can only find out by trying. On the other hand, you may want to mitigate your risks. If the lost money isn’t the biggest issue, the biggest risk is probably that your team will get slower. This could be caused by the other (fast) team members needing to spend time helping the new developer. One possible solution is to throw the new developer into the cold water: if she can swim alone, she will probably be helpful to the team.

How about your team?


So here are my questions for you. If you were asked the same question, what would you answer and why? Would you have an extra developer? Would you make the decision based on the advice above or use some other ways to decide?



PS. I started this blog a couple of months ago. Thanks to all the readers. Next year I will continue with the same goal as this year: one blog post per month. Merry Christmas and Happy New Year!

Wednesday, November 20, 2013

Order of programming tasks

Yesterday I had an interesting debate with my colleague while we were pair programming. We were implementing a client for a REST API and argued about whether we should first implement the happy case or the failure scenarios. I voted for the former, he was for the latter.

My colleague was worried that if we don't start with the failure scenarios, there will be a temptation for not doing them at all. I replied that sure it requires discipline from us that we won't stop after the happy case and rush to some other programming tasks. Nevertheless, I saw more value in starting with the happy scenario because that way we could help a third developer who was implementing the orchestration part of the feature and was dependent on our code.

Later I started to think more about this conversation because I felt I couldn't argue my opinions well enough, although we decided to start as I suggested. I then realised that my argument was based on small-batch and options thinking. Let's put it this way. If we had started with the failure scenarios, we wouldn't have had anything to share with the third developer before finishing both the failure and happy scenarios. I would call that a big-batch approach. When we did it the other way around, it gave the team an option to take the finished code and utilise it in other parts of the system - while knowing that it was not the perfect code yet. This was a small-batch approach.

Another, more theoretical way to think about it is from the agile software development point of view. In agile software development you should be doing the most important thing next. What is most important depends on what brings the most value. In our case the happy case created more value than the failure scenarios.

I still agree with my colleague that there can be a temptation to stop with the happy case and kind of forget to do the boring failure handling. But like I said above, it requires discipline, which is one attribute that can be attached to experienced programmers - which I think we are. One way to be disciplined is to write failure-case tasks for the user story, another is to have a proper definition of done for stories in general. But all in all, we shouldn't give up value just because we are afraid of not being professional enough developers.

Monday, October 21, 2013

NoSQL and #NoEstimates

Vasco Duarte asked #NoEstimatesQuestions on Twitter some time ago and collected them to his blog (part I, part II). One of the questions I asked was: Is there some analogy between SQL/NoSQL and Estimates/#NoEstimates? Vasco gave a short reply and promised to get back to the subject in a later post.

I had forgotten the whole question but remembered it again last week when Antti Sulanto wrote a post where he tells why he doesn't like #NoEstimates. Which reminded me of Ron Jeffries' excellent post about the NoEstimates movement. What the two posts have in common is that they both seem to be a bit worried about the extreme tone of the word No.

I see it a bit differently. I see it somewhat in the same way as the No in NoSQL. All of you who have read about NoSQL are aware how, for example, Twitter cannot and doesn't have to rely on traditional transactional and relational storage solutions, taking into account the huge amount of data they are storing. So NoSQL doesn't mean that you should never use relational databases and SQL. NoSQL just offers superior solutions for certain kinds of situations.

In the same way I'm interested to understand better what is software development like when we don't estimate. I'm not looking for software development world where we don't ever estimate in any situation.

But! If I see a case that would typically be solved with estimates, I don't want to start by accepting that as the only or best option. I rather try to understand what could be a better way. Because I am one of those who have seen how estimates have wasted a lot of valuable time without adding much value, and how estimates have been used to make decisions when something else was needed instead. Nor do I want to say that #NoEstimates is a stupid hashtag just because there will always be cases where estimates provide value.

My personal work history contains four projects where our development team didn't estimate the work beforehand but rather focused on the flow and possibly measured the actual throughput and/or lead time. Experiences from those projects encourage me to continue exploring the alternatives to estimates. Without worrying too much about the word No.

Saturday, September 14, 2013

How confident are you with your estimates?

I know you are a busy reader, but before going any further I would like you to answer ten questions. The questions are taken from the book How to Measure Anything: Finding the Value of "Intangibles" in Business by Douglas W. Hubbard. You should be 90% confident in your answers. This means that if you do the test perfectly, you should get 9 correct answers and 1 wrong.


How did it go?  (I would appreciate it if you wrote your result in the comments.) When I did the test myself, I got 6 correct answers. When a couple of my colleagues did it, they got 2-5 correct. Two weeks ago I was at the ALE 2013 unconference in Bucharest and held an open session there. I asked the questions of a dozen people. Most of them got 2-4, some of them 5-6, and there was only one person with 7 correct answers.
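Out of curiosity, we can check how unlikely these scores would be for a genuinely calibrated person. This little sketch is my own addition (it isn't from Hubbard's book): it models the test as 10 independent questions, each answered correctly with 90% probability, and computes the chance of scoring as low as I did.

```python
from math import comb

def p_score_at_most(k, n=10, p=0.9):
    """Probability that a calibrated answerer (90% hit rate per
    question) gets at most k of the n questions right (binomial)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

# Chance that a calibrated person scores 6 or fewer out of 10.
print(round(p_score_at_most(6), 4))  # → 0.0128
```

In other words, a calibrated person would score 6 or lower only about 1.3% of the time, so the scores above strongly suggest overconfident (too narrow) ranges rather than bad luck.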


Too difficult questions?


When people get rather low results from the test, they typically react by saying that the questions were too difficult. This is the key point here. If you don't have enough information, you should not give too narrow a range. Only when you get more information should you make the range narrower.

So how about software development? Have you ever been in a situation where your boss asks you to give a quick "educated guess" about a new feature because there is a management meeting tomorrow where the estimate is needed? And although you don't feel very confident, you say "well, maybe 30 days". And then the boss takes your (educated?) guess with him, goes to the meeting, and lets the guess become a fact that is used to make an important business decision.

I think this is exactly such a situation where you don't have enough information to give an estimate like that. You are after all giving a range of 30-30, i.e. a single point! Instead, if you are a calibrated person (1), you could say: "with 90% confidence, I think it is 10-300 days". Or if you are not, like me, you should probably just say: "with this amount of information, I don't know".


Unprofessional answers?


When I gave that advice in the ALE13 open session, two people asked me whether it wouldn't be unprofessional to give an answer like that. I replied that it would actually be quite the opposite.

I think it is unprofessional to pretend that you have information when you don't. I think it is actually unethical. It is much more professional to be honest and say: "I don't know".


How to narrow the range?


If you have a smart boss, he probably wants to know how to make the initial range narrower. The first thing is to spend a bit more time thinking about the problem you are about to solve. You can probably identify a couple of parts that are especially uncertain. Maybe you can write a prototype or a technical spike for them?

Another approach is to ask: what is the most important thing we need to solve? If your initial estimate for the whole thing was 10-300 days, maybe your calibrated guess for the most important part is 2-20 days. It is still a wide range, but perhaps small enough that you can just do it and see how it goes. And most importantly, learn by doing.

You will actually learn many things. You may learn technically. You will understand the domain better. And you can start measuring your progress so that you can stop guessing and start forecasting.

Or you may learn that you need to build something else than what you were originally supposed to build. If that happens, what is the value of estimating the whole project beforehand? Another important aspect is that the smaller the slices of work you can create, the less need you have for estimating their size.


Cost and value


If we go back to the initial question about the project size, it may actually be the wrong question for another reason as well. When we want to figure out whether to start a new project, we tend to focus on estimating the project cost because that is the "easy" part. We don't try to estimate the value because that would be too difficult.

In his book Douglas Hubbard criticizes such behaviour. As the book title says, he claims that anything can be measured, including the value of a project. He provides many tools for doing it. I really recommend you read the book and find out more.


PS. In Finland they are planning to build a health record system that would cost 1.2-1.8 billion Euros. I wonder if that estimate has a 90% confidence interval..?


(1) A calibrated person is one who regularly gets 9/10 right when asked to answer with a 90% confidence interval. When Douglas Hubbard has a measuring challenge, he trains the key people so that they become calibrated. One tool for that is answering other, similar questions. After the key people are calibrated, they can give reliable initial estimates for the questions that are created based on the measuring challenge. Based on the initial estimates, Hubbard uses statistical tools to define which part of the challenge should be measured further in order to provide the most valuable additional information with the least effort.

Monday, September 2, 2013

Marbles and batch size

Last weekend our family visited the model railway museum in Kouvola, Finland. It is a very fascinating place with all the small trains travelling in a miniature village. However, from a certain point of view the most interesting thing in the museum is a marble race game. My sons played with it and tried different ways to add new marbles. After I had watched that for a while I thought that we should record an experiment. Here's a short video I made about it:



So why share this? Well, I guess all of you who are familiar with things like batch size, flow, and kanban know what I mean. For those of you who aren't, here are a couple of questions you could ask yourself:
  1. What are the marbles in your organization?
  2. Do you know what your batch size is?
  3. Are the marbles in your organization rolling or do they get stuck too often? What might be the reason for that?

Saturday, August 24, 2013

From hour estimates gradually to #NoEstimates

This is my first blog post, welcome! I would like to share with you my real-life experience related to #NoEstimates. I've been following the discussion on Twitter this year but have learned most of what I know by reading blogs. So maybe you can learn something from my post as well. The post tells the story of a team that was spending plenty of time on estimation but gradually moved towards #NoEstimates. Let's see how that happened.

Initial estimation method: hour estimates


The development team I joined as a Scrum Master had a long history. However, during the previous year about half of the team had changed. I was told that it at least used to be a great Scrum team, but when talking with anyone in the team, it was clear that they were struggling. I don't know if it had been like that a year or two earlier, but at least at that time they were far from well-performing.

During the first sprint with them I just observed how they worked. Among many other things I noticed:

  • The sprint planning meeting was neither very efficient nor effective. The group of 12 people was divided into two teams and both teams spent typically 5-7 hours on sprint planning.
  • During the sprint planning meeting the Scrum Masters were using an electronic tool that contained user stories with initial story point estimates. The team was discussing the stories and the Scrum Master wrote tasks based on the discussion. For each task the team provided an hour estimate using the planning poker method.
  • The atmosphere during the meetings was, how should I put it, not very energized. They weren't events where the team would eagerly try to find the best possible solution to the problem at hand. I noticed e.g. how some people were so bored and frustrated that every once in a while they would just ignore the discussion and spend time on Facebook or similar.
  • One of the Scrum Masters' tasks was to print out the user stories and the tasks from the electronic tool. During the sprint the developers of course noticed new things to do but you couldn't see that on the Scrum boards since nobody wanted to write new tasks into the tool and print them out. It was thus difficult to follow the actual progress during the sprint.
  • Even though the planning was very detailed, the teams weren't able to finish the user stories during the sprint. One team finished half of the stories completely while the other finished none. The teams created burndown charts based on tasks and their hour estimates. This meant that if they had 80% of the original tasks done, they had a pretty “successful” sprint. It didn't matter if the initial tasks were irrelevant or if none of the stories were completely done.
The first sprint ended with a retrospective where many of the team members pointed out the problems I listed above. The team decided to try out something new.

Transition from hour estimates to story points


The next sprint planning was quite different from the previous ones. We stopped doing hour estimates. We threw away the electronic tool. We stepped away from the pressing meeting rooms and used the team space instead. We didn't try to do all of the work with the whole team but instead did some of it in groups of 2-3 people. And although we had printed the user stories, we wrote the tasks by hand.

First we checked the product backlog and picked the top four user stories and discussed them briefly all together. Then we split the team into four small groups and each group was responsible for providing the tasks for the story. As a detail I remember how someone suggested that we should write a couple of tasks together so that everyone would see what it is like to write them, how to pick them from the discussion. This was an interesting detail since I realized afterwards how the “Scrum Master uses the tool” approach had made them passive also in this sense. After 15 minutes or so we gathered together and each group explained what they had done. Others made comments and asked some questions. Based on these the team fine-tuned the tasks.

The same was repeated until finally we had about ten stories planned. The only thing we were missing was the estimates. I asked the team which one of the stories was the smallest. It was easy to find, and that story got one story point. Then I took a random story and asked if it was the same size and, if not, how many times bigger. That way we got story point estimates for each of the stories.

The team had been using story points before as well, but those were based on hours via some formula that I don't recall. Since one story point now had a new meaning, we didn't have comparable data from the previous sprints. Instead I asked the team: do you think you can completely finish all the stories during the sprint? Although they were not very confident, they decided to commit to all of them. So we were done. We had spent about three hours, went for lunch, and started writing some code.

Story points era


One of the changes we made was that we stopped drawing burndown charts based on tasks. Instead, we used completely finished stories. Below you can see how it looked in the new sprint #1.



This was something I had witnessed before. It goes like this: In the beginning everybody can choose what they start to work on. Since it is the most efficient way (right?), almost everyone picks a story of their own. In the middle of the sprint none of the stories are completely done. At the end of the sprint magic may or may not happen. In this particular case they got pretty close but from an earlier team I remember how there were five developers, five user stories, all of the stories work in progress, and only one of them completely finished on the last day of the sprint.

So during the first sprints we had a lot more to improve than just making the sprint planning more effective and efficient. One thing was to start working more in pairs or small groups. Another important thing was that the developers tried to get something to the tester sooner instead of waiting for the whole story to be coded. This way the user stories were ready sooner. It also made the tester happier, since he didn't have to wait until the end of the sprint to get something new to test.

However, that wasn't enough. The team wasn't able to reach their goal during the first couple of sprints. At the end of one sprint planning, one of the team members asked how many points the team had completed in the previous sprint. I said about 30. Then he asked the team: if we have managed to do 30, why should we commit to 40 again? A good question, I would say. So they decided to drop a couple of stories.

Little by little the team learned to commit to a reasonable amount of work and also get the work completely done in the sprint. After 2-3 months the charts started to look like this (we changed from burndown to burnup at some stage):


An important thing that the team learned was that if they commit to stories that are too big, there is a high risk that they won't be able to finish them. The team created a rule that if a user story is estimated to be more than five points, they have to split it into smaller pieces. I believe this was a crucial lesson towards the next step.

S/M/L estimating


The duration of a typical planning session had dropped from 5-7 hours to 2 hours or even less. The team was able to finish the sprint goal almost every time. But I still felt that we could do even better.

I remember that sometimes we were using too much energy on discussing whether a story was one or two points. I even remember a case where time was spent arguing whether a story was zero or one points.

We also discussed if it made sense to estimate bugs and include finished bugs in the burnup chart. It felt like cheating: what if you finish a 3-point story in sprint n, find three bugs in sprint n+1, and fix 1+1+1 points in sprint n+2? From the commitment perspective (how much we'll be able to do) it made sense but from the value perspective it didn't.

There were also situations where we couldn't know beforehand whether we would be able to start working on a certain story, since it was blocked by an external party. Or we didn't know exactly what we needed to do, since we first needed to find that out by doing another story. However, since those were important tasks that should be done if possible, we reserved space for them in the sprint backlog: “These are the stories we have selected, and besides them we have 3 points for these unknown stories.”


Since all of that felt kind of like waste, I proposed the next step for the team. Let's drop the story points and instead use sizes S, M, and L. S means 1-3 old points, M means 5, and L is bigger than that. If a story was S, it required no further discussion about its size. If it was M, it was a warning that further discussion might be needed - can we really complete the story or could we perhaps split it? If it was L, we had to split it. The sprint commitment was made based on the gut feeling using the question: from 1 to 5, how confident are you that we will be able to complete all the stories we have chosen?

An interesting thing was that we never actually used those sizes. The team had learned to split stories so small that all of them were of size S. At that time our typical process was such that we had enough stories on the whiteboard waiting for the next sprint. We spent 10 minutes on them on the last day of the sprint. We started the next sprint with about an hour-long sprint planning meeting where we made sure that the whole team knew what we were going to do and checked if there was something important that was missing from the backlog. The developers wrote the tasks when they picked a story and rewrote them whenever needed. It felt like we were getting closer and closer to a nice flow.

#NoEstimates


At some stage we decided to split the team into two. The reason for this was that even though there was one code base, there were two clearly distinct businesses using it. This caused a major challenge of how to prioritize stories. So one component team became two feature teams, each business having its own.


The team I was in decided to take the next step towards #NoEstimates, although at that time I hadn't heard of such a thing. We decided not to have sprints anymore but instead to choose the next most important thing every time. Of course this meant that we tried to keep the amount of work in progress as low as possible, although we didn't have explicit WIP limits written on our board. It was important to have as small stories as possible, but we didn't spend any time on estimating them (well, intuitively perhaps). We just thought about whether a story made sense and whether we should and could split it. Sometimes we noticed during development that it made sense to split a story, and then we just wrote a new story.


Instead of sprint plannings we started to have weekly meetings with all the relevant people from this business area in the company. That of course included the development team and the so-called business people. We didn't have a Product Owner anymore, since there was no need for one. In the weekly meetings we as a group talked about the big picture, checked what was going on, and decided together what we should do next. We used another whiteboard that operated at a higher level than the development team's board.

Instead of calculating velocity based on story points we started to count finished stories per week. Below you can see how our throughput statistics looked during the first 20 weeks. Notice especially the last eleven weeks: every week 2 or 3 finished stories. When the throughput is so stable, why would you need any size estimates?

 
It was around week 20 when we realized that we needed to do a major refactoring in order to meet a certain important business need. It was the first time in this new team that we needed to do estimation of some kind. Our approach was the following: Try to understand what needs to be done. Split the work into user stories or similar. Count the stories. Use the statistics to forecast the probability of having this done before date X, or when all of the stories would be done with decent certainty.
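This kind of throughput-based forecast can be sketched in a few lines of Python. Note that this is my own illustration, not the exact method we used back then, and the weekly history below is invented to resemble our stable 2-3 stories per week:

```python
import random

def forecast_weeks(weekly_throughput, stories_left, runs=10000, seed=1):
    """Monte Carlo forecast: repeatedly replay randomly sampled past
    weeks until all stories are done, then report how many weeks were
    needed at the 50th and 90th percentile."""
    random.seed(seed)
    results = []
    for _ in range(runs):
        done, weeks = 0, 0
        while done < stories_left:
            done += random.choice(weekly_throughput)
            weeks += 1
        results.append(weeks)
    results.sort()
    return results[runs // 2], results[int(runs * 0.9)]

# Illustrative history: eleven stable weeks of 2-3 finished stories.
history = [2, 3, 2, 3, 3, 2, 2, 3, 2, 3, 3]
p50, p90 = forecast_weeks(history, stories_left=12)
print(f"50% chance within {p50} weeks, 90% within {p90} weeks")
```

With a history this stable the forecast is necessarily tight (somewhere between 4 and 6 weeks for 12 stories), which is exactly why size estimates stop adding information.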

We were a bit skeptical about how the business owner would deal with our non-traditional approach of forecasting when the project would be ready and in production instead of estimating in man-days. Luckily we were fortunate to work with a smart guy and after asking a couple of questions he just said: ok, go for it.

What really happened was that the required changes were in production pretty much when we expected them to be. However, we didn't finish all of the dozen stories we had planned initially. Instead we realized that half of them could be done later and replaced those with other, more important tasks. The throughput was as expected but the content was something different, more valuable.

#estwaste and euros


Before the #NoEstimates hashtag I remember that at least Vasco Duarte was using #estwaste in his tweets. I like the word waste since it is an easy word to throw out on many occasions but let me provide you with some numbers that should make the word more concrete in this case.

If you read the whole story, you noticed that we started with sprint planning sessions that lasted about 6 hours, and in the end we didn't have them at all. If we assume that there are 22 sprints per year and the team has an average of ten members, that means 1320 saved hours per year. I really don't know what the average hourly cost of the team members was, but let's pick two numbers: 50 or 100 EUR/hour. On a yearly level this means savings of 66,000 or 132,000 euros. Besides that, you probably noticed that we didn't need the Product Owner anymore, so you can add the cost of one manager on top of that.
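For the skeptical reader, here is the arithmetic behind those figures spelled out (using the same assumed numbers as above, which are rough guesses rather than measured facts):

```python
# Assumptions from the text: 6-hour plannings dropped entirely,
# 22 sprints per year, an average of ten team members.
sprints_per_year = 22
hours_per_planning = 6
team_size = 10

saved_hours = sprints_per_year * hours_per_planning * team_size
print(saved_hours)  # → 1320 hours per year

for hourly_rate in (50, 100):
    print(f"{hourly_rate} EUR/h -> {saved_hours * hourly_rate} EUR/year")
# → 50 EUR/h -> 66000 EUR/year
# → 100 EUR/h -> 132000 EUR/year
```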


I guess you are now saying that I forgot the value part of those sprint plannings, or that I forgot the cost of the one-hour weekly meeting. Well, I didn't. First of all, the old sprint plannings produced very little or even negative value. Surely the developers discussed the upcoming work there, but I would say that the discussion wasn't very useful. One of the purposes of the plannings was to provide visibility for the Product Owner, but it was hard to see such an effect. And the use of the electronic tool caused problems during the sprints, since the team had difficulties using the new information they learned while working. With negative value I refer to the drop in people's motivation.

Instead, the weekly meetings really produced value. They helped us to share information very efficiently and make useful prioritization decisions. So the cost calculations above really refer to the waste (=no value added), although they even ignore things like opportunity cost, cost of delay, and so on.

Lessons learned


Let me choose the two most important #NoEstimates lessons that I learned during this journey. The first one is that at least in this kind of context the #NoEstimates approach is perfectly valid and can bring huge improvements to the organization. By “this kind of context” I mean ongoing product development. Unfortunately I don't have experience of making business decisions before starting to develop a large-scale product. I would love to read your post about that topic.

The second one is that if you start from the situation described above, you cannot just jump to #NoEstimates. Instead, you have to find your own path and take small and sometimes bigger steps towards it. Vasco Duarte claims that story points are harmful. I understand what he means, but that statement depends on the context as well. In this post I described how we gradually moved from hour estimates to story points, to S/M/L sizes, and finally to #NoEstimates. The story points helped the team learn how big stories cause problems and how to split stories into smaller ones. I feel it was a necessary step to take.


I think that working without estimates requires that the team has a certain maturity level. If your team doesn't have it yet, you need to work hard (and smart) in order to get there and enjoy the benefits of #NoEstimates. That is what we did, and I recommend it to you as well.