A Lean Start-up Approach to Data Science

4 March 2019

By Ben Dias, Head of Data Science at Royal Mail

When Gabriel Straub first introduced me to Eric Ries’ book titled “The Lean Start-up”, I instantly knew that it should be applicable to the world of Data Science. However, translating the approach from the domain of a start-up business to that of a Data Science team or project was by no means obvious. In fact, it took me several years, several experiments, several discussions with several people, and several re-reads of the book before I figured out a way to apply the Lean Start-up approach to Data Science. I was then looking for a greenfield opportunity to try it out on a new Data Science team, when by serendipity Royal Mail approached me with an offer I couldn’t refuse. Just over a year and a half later, having successfully implemented my vision of running my Data Science team at Royal Mail as a Lean start-up, and having had my ideas and thoughts well received by a number of different audiences, I felt the time was right to share it with the world.

I am writing this article as a practical guide to applying the Lean Start-up approach to Data Science, in the hope that it inspires a new way of running a Data Science team and a new generation of Data Scientists who deliver significantly more return on investment (ROI). Together, we can then transform the hype surrounding Data Science into reality.

Why Should the Lean Start-up Approach Work for Data Science?

If your business has only standard problems with well-known solutions, you should be using off-the-shelf products instead of investing in a Data Science team. Therefore, by definition, Data Science, if you invest in it, is essentially an entrepreneurial venture. Most often, each business has a unique context requiring a unique set of Data Science solutions. Therefore, as Data Scientists we are most often solving a problem where the solution is not obvious and success is not guaranteed. The similarities between Data Science and a start-up are therefore quite obvious. So, if the Lean Start-up approach helps you get from a new idea to a thriving new business in the shortest possible time using the least amount of resources (or fail fast!), then surely we should be able to use the same approach to get the same outcome from our Data Science projects. This is why I was convinced, from the first time I read the Lean Start-up book, that there must be a way to apply the Lean Start-up approach to Data Science.

The Innovation Accounting Framework

The Innovation Accounting Framework is the driving force of the Lean Start-up approach. It gets you from an idea to a thriving new business (or, equivalently, to a successful Data Science solution) in the shortest possible time using the least amount of resources or, very importantly, failing fast!

Figure 1: The Innovation Accounting Framework

The Definition of Done

A key aspect of the Lean Start-up approach is to start with the Definition of Done, which is a set of requirements that tell you when to stop developing – they tell you what “Good Enough” is. This helps prevent you from diving in and wasting effort developing solutions that no one wants, and also helps you stop developing any further once the solution is good enough. These are all important aspects of being Lean.

In Data Science, your stakeholders will generally ask you for an algorithm that does X, Y and Z (e.g. an algorithm that forecasts our sales for the next month). The best way to ask your stakeholders for a Definition of Done is to ask them how they would evaluate your algorithm once you have finished developing it – i.e. ask them what good looks like. They will usually use terms like “best” (e.g. next best product recommendation) and “optimum” (e.g. optimum delivery route), or even vaguer terms like “most interesting” (e.g. the most interesting theme emerging from our social media feed). At that stage it is very important to probe these terms and define exactly what they mean along two key dimensions:

  1. Comparisons: Find out how two potential solutions will be compared by asking: “If I gave you two solutions A & B, how would you tell if A is better than B or vice-versa?”
  2. Good Enough: Find out what they need as opposed to what they want, by asking: “For what and how exactly are you going to use the output?”, and ensure the requirements are relevant for the application scenario (e.g. the accuracy required to decide whether a patient goes into surgery is significantly different from that required to send out a direct-marketing e-mail). And always ask “Why?” (e.g. if they ask for 90% accuracy, ask why: what will the consequence be if it is only 80% or even 70% accurate?).

From my practical experience as a Data Scientist, I can assure you that some of the time your business stakeholders will be able to clearly articulate the Definition of Done, and some of the time they will definitely not be able to do so. In the latter case, it is then very important to first iterate with your stakeholders on the Definition of Done, before doing anything else. In my experience, in such cases the stakeholders are usually easily able to articulate what the Definition of Done isn’t (e.g. what is not good enough or what is missing). Therefore, you then need to identify the quickest/leanest way of providing them with example outputs to critique, which will help tease out their requirements.

Sometimes you can do this as a thought experiment, by talking through some extreme examples with the stakeholders. For example, if you are being asked to optimise the range of items in a store, you can ask your stakeholders “If we optimised your range purely on profit and your entire drinks aisle was filled with only the most profitable beer can, would that work? If not, why not?”. You might then tease out, for example, that it is also important to apply a “Customer Needs” constraint, where you need to have at least one product on offer to cover a minimum set of key customer needs.

If a thought experiment is not feasible, then in order to stay Lean you must take the smallest possible set of data representing the smallest possible unit of measure (e.g. in the previous example, one section of an aisle in one store) and build the simplest possible model that can generate an extreme output to be critiqued by the business stakeholders. A key part of being Lean here is to go for the extremes, so that there is no doubt that the solution is not what is required, and you can then tease out the requirements by asking why it is not good enough.
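To make this concrete, here is a minimal sketch of such an extreme-output generator for the range-optimisation example, assuming a simple table of sales data; the column names and the idea of ranking purely by total profit are illustrative assumptions, not a prescribed implementation:

```python
# A deliberately extreme "model" for the range-optimisation example: fill
# every facing in one aisle section with the single most profitable product.
# All column names (aisle_section, product, profit) are hypothetical.
import pandas as pd

def extreme_range(sales: pd.DataFrame, section: str, n_facings: int) -> pd.DataFrame:
    one_section = sales[sales["aisle_section"] == section]
    top_product = one_section.groupby("product")["profit"].sum().idxmax()
    # Every facing gets the same product -- extreme on purpose, to provoke
    # the "If not, why not?" critique from stakeholders.
    return pd.DataFrame({"facing": range(1, n_facings + 1),
                         "product": top_product})
```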

It is important to remember that your aim at this stage is uncovering the Definition of Done and not building a model or algorithm. Therefore, you should deliberately avoid building a complex model at this stage. If even a simple model is not feasible, generate some random outputs or use a Wizard-of-Oz experimental set-up (where you hand-craft one or two outputs to generate feedback).

In some cases, you will initially receive an incomplete Definition of Done, and so it is important to continue to iterate with your stakeholders until the Definition of Done is complete. For example, given a prediction problem you might be requested to achieve 90% accuracy 80% of the time. This leaves a few questions still unanswered (e.g. Over what time period? Is it an average accuracy? etc.), and requires further iteration. An example of a great Definition of Done for a prediction problem would be a request to achieve an average accuracy of 90% over a minimum coverage of 80% of events per month, between February & November last year.
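One way to stop a hard-won Definition of Done from drifting is to write it down as a structured spec the moment it is agreed. Below is a minimal, illustrative sketch, assuming “last year” refers to 2018:

```python
# An illustrative encoding of the complete Definition of Done above.
# Assumes "last year" refers to 2018; all names are placeholders.
from dataclasses import dataclass

@dataclass(frozen=True)
class DefinitionOfDone:
    target_accuracy: float  # required average accuracy on covered events
    min_coverage: float     # minimum share of events covered, every month
    first_month: str        # start of the evaluation window (inclusive)
    last_month: str         # end of the evaluation window (inclusive)

DOD = DefinitionOfDone(target_accuracy=0.90, min_coverage=0.80,
                       first_month="2018-02", last_month="2018-11")
```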

The Innovation Metric

Once you have figured out what the Definition of Done is, there is one more task to complete before you dive in and do any development work – defining your Innovation Metric. The Innovation Metric is preferably one number (but in some rare cases it needs to be more than one number) that helps you to do two things:

  1. Track your progress as you develop your Data Science solution
  2. Keep your non-technical business stakeholders updated on progress

Because of the second requirement above, the Innovation Metric cannot be a technical measurement, such as the F1 score. Guided by Eric’s advice in his book, I would encourage you to think of your Innovation Metric in terms of people, and choose a metric that refers directly to people.

Assuming we were predicting the time of delivery of parcels, and continuing with the Definition of Done provided at the end of the last section (i.e. to achieve an average accuracy of 90% over a minimum coverage of 80% of events per month, between February & November last year), the best Innovation Metric is the minimum coverage across all of the months between February & November last year at an average accuracy of 90%. This metric relates directly to people – i.e. the minimum percentage of satisfied customers receiving a parcel at the time we predicted it. It is also easy to explain to your stakeholders (as the percentage of customers we would satisfy if we went live with the algorithm) and helps you track your progress (if it is 20% you know you have a long way to go, if it is 75% you know you are almost there, and if it is 81% you stop immediately and deploy!).
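As a sketch of how this metric could be computed in practice, assume each prediction carries a confidence score, so that for each month we can find the largest share of parcels we could cover while holding average accuracy at 90%; the minimum of those monthly coverages is then the metric. The column names and the confidence-based coverage mechanism are assumptions for illustration:

```python
# A sketch of computing this Innovation Metric. Assumes each prediction
# carries a confidence score, so per month we can find the largest share of
# parcels coverable while holding average accuracy at the target. The column
# names ('month', 'confidence', 'correct') are hypothetical.
import numpy as np
import pandas as pd

def innovation_metric(preds: pd.DataFrame, target_accuracy: float = 0.90) -> float:
    """Minimum monthly coverage (%) achievable at the target accuracy."""
    monthly_coverage = []
    for _, month in preds.groupby("month"):
        ranked = month.sort_values("confidence", ascending=False)
        # Accuracy of the k most confident predictions, for every k.
        acc = ranked["correct"].to_numpy().cumsum() / np.arange(1, len(ranked) + 1)
        covered = np.nonzero(acc >= target_accuracy)[0]
        monthly_coverage.append((covered[-1] + 1) / len(ranked) if covered.size else 0.0)
    return 100 * min(monthly_coverage)
```

A result of 81 would mean the minimum-coverage requirement of 80% in the Definition of Done is met, so you stop and deploy.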

Minimum Viable Product (MVP)

Now that you have figured out what you need to do (i.e. the Definition of Done) and how you will track your progress (i.e. your Innovation Metric), you are ready to start developing your solution. In order to remain Lean, you must start with the simplest possible model and iteratively add complexity only if it is adding value. So, you definitely don’t start with a Deep Neural Network, as it may be overkill for what you need to do. You always start with a simple linear model, which gives you two advantages. Firstly, you immediately have something that works (i.e. it generates an output, even though the output may not be accurate enough). Secondly, you then have a benchmark that lets you quantify the impact of any additional work and complexity you add afterwards.

This initial simple model is known as a Minimum Viable Product (aka MVP in Agile lingo). The MVP is the simplest possible model that can do what is required, even though it may not achieve the level of performance required (e.g. not as fast or not as accurate as required). If we continue with our example of predicting the time of delivery of a parcel, then we would for example start with a Linear Regression model as our MVP.
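A sketch of what that MVP might look like, with a hypothetical historical parcels table and placeholder features and target:

```python
# A sketch of the Linear Regression MVP for predicting parcel delivery times.
# The file name, feature columns and target are hypothetical placeholders.
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

df = pd.read_csv("parcels.csv")                      # historical deliveries
X = df[["distance_km", "weight_kg", "day_of_week"]]  # illustrative features
y = df["delivery_hour"]                              # e.g. hour of day delivered

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
mvp = LinearRegression().fit(X_train, y_train)
print(f"MVP R^2 on held-out data: {mvp.score(X_test, y_test):.2f}")
```

Even if the fit is poor, you now have a working output and the benchmark to beat.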

Iteratively Improving Your Model

The first thing to do once you have built your MVP is to compute your Innovation Metric for the MVP model. This is your benchmark. You then iteratively work on improving the model until you achieve the Definition of Done. If at any point you make no progress over two consecutive iterations, it is very important to stop and figure out if you should Pivot or Persevere. This is very important in order to remain Lean and to be able to Fail Fast! Pivoting in Data Science could be anything from bringing in a new data set which contains information you didn’t have before, to changing the problem you are trying to solve. In our example of predicting the time of delivery of a parcel, we might need to Pivot and change the problem to predicting the day of delivery instead, depending on a number of things such as the quality of available data. This may still be of value to the business, but possibly for a different use case.

Whatever you do, you should not continue “flogging a dead horse”! If you don’t make progress over two consecutive iterations and can’t identify a suitable Pivot, then you must Fail Fast by killing the project and moving onto the next one. Stopping the project early ensures you waste as few resources and as little time as possible on the failed project, and can move quickly to start delivering value on an alternative project.
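This stopping rule is mechanical enough to encode. A minimal sketch, assuming you append the Innovation Metric to a history after every improvement iteration:

```python
# A sketch of the stopping rule: after each improvement iteration, append the
# latest Innovation Metric to a history; stop once it has failed to improve
# over two consecutive iterations, then decide whether to Pivot or kill.
def should_stop(metric_history: list[float], stalled: int = 2) -> bool:
    if len(metric_history) <= stalled:
        return False
    best_before = max(metric_history[:-stalled])
    return all(m <= best_before for m in metric_history[-stalled:])

# e.g. should_stop([20, 35, 34, 35]) -> True: two iterations without beating 35.
```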

Being Agile About Being Agile!

From practical experience I have learnt, and at Royal Mail we have proven, that you absolutely must be Agile about being Agile in order to maximise return on effort and remain Lean. While I’m sure many Data Scientists already follow some form of Agile methodology, most often they will stick to one framework – usually Scrum. Those who have realised that Scrum doesn’t really work for most Data Science projects will have switched to using only Kanban or something else. My recommendation is to be Agile about being Agile, and use the right tools and frameworks for the right task. In order to do this, you need to learn and experiment with the different tools and frameworks available.

In order to help you get started, let’s look at two of the most commonly used Agile frameworks – Scrum and Kanban. Scrum is the more popular of the two, and is based on breaking the work down into smaller chunks that you can complete iteratively over several fixed-length Sprints. A Sprint is an appropriate time period you choose, such that at the end of each Sprint you have something to show your stakeholders. Scrum is really effective when you know what you need to do and how you could do it, and you are able to estimate (at least roughly) how long each task is going to take. The article Kat James wrote on “The perfect Data Science sprint” provides some great insights on what Scrum is great at, and is definitely worth reading.

Kanban, on the other hand, has no concept of a Sprint; instead, work flows continuously, but limits are placed on how much work you can have in progress and in testing. This ensures you are constantly learning from your tests and continuously re-prioritising your backlog of things to do, so that you are always working on the most important thing next. Figure 2 below shows a really nice comparison of the two frameworks, taken from the Atlassian website https://www.atlassian.com/agile/kanban.

Figure 2: Differences between Scrum and Kanban from www.atlassian.com

From experience, I know that Scrum doesn’t work when you are trying to develop something new, as you have no idea how long it will take or even if it’s possible. However, Scrum is great at getting solutions deployed quickly when you know how to build them. Kanban, on the other hand, is great for research work. Therefore, the key is to pick the right framework for the right tasks.

It’s not, therefore, a question of Scrum vs. Kanban, but more a question of when to use Scrum and when to use Kanban (and of course when to use something else!). Following some experimentation, at Royal Mail we found that when following the Lean Start-up approach, in general it’s best to use Scrum for building the MVP, Kanban for iteratively improving the model, Scrum for deployment and Kanban for support (see Figure 3 below).

Figure 3: The different Agile Frameworks used at Royal Mail for different phases of a project

However, not all projects are exactly the same – otherwise you are not doing Data Science! So, it is very important to use this only as a guide and not as a rule. For example, if it makes more sense for a particular project to stay on the Scrum board during the iterative improvement phase, then it should. It is therefore essential to have experienced Senior Data Scientists, or preferably an Agile Coach, on the team to provide guidance on when to move projects between frameworks.

It is also important to note that the existing Agile frameworks and especially the tools (such as JIRA, which we use at Royal Mail) were built for Software Development. Therefore, you need to adapt how you use the tools when you use them for Data Science, especially if you want to remain Lean. So, for example, you might only want to use the 2 or 3 fields in each JIRA ticket that are most relevant to your adaptation of Agile, instead of wasting time trying to fill in all the fields for no additional return on effort. The fields you use can also vary by project. You might also prefer to skip the story-point estimation process altogether, like we have at Royal Mail, if it doesn’t add value. And do be creative in being Agile! For example, some of our project teams run their daily stand-ups via a dedicated Slack-channel which works very effectively.

The Hypothesis Driven Approach

The Innovation Accounting Framework alone helps you get up to the MVP stage (i.e. step 4 in Figure 1) in the Leanest possible way. However, in the final iterative improvement stage (i.e. step 5 in Figure 1) you will need some extra help to remain Lean. From my experience, I recommend using the Hypothesis Driven Approach for this stage, which is defined as follows:

  1. Break the problem down into components
  2. Risk assess all of the components
  3. Start tackling the riskiest part of the problem first
  4. Generate and prioritise a few hypotheses about what might solve that part of the problem
  5. Carry out experiments to test the hypotheses
  6. Update/re-prioritise your hypotheses based on what you learn from each experiment

In the first step, you have the freedom to break the problem down into its components at different levels of granularity. You will therefore need to explore a few options and identify what works best for you and the particular types of problems you are working on. The risk assessment that comes next is the most important step of the entire process, as starting with the riskiest aspect of the problem helps you to Fail Fast. For example, if the riskiest aspect is securing the required data, you must first chase and secure the data before doing anything else (e.g. before wasting any time building models). If it turns out that the data is not available, you have not wasted any effort and can kill the project immediately, Failing Fast and keeping all your stakeholders happy.
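As a sketch of how the first three steps might look in code, with purely illustrative components and risk scores for the parcel delivery-time example:

```python
# A sketch of steps 1-3: break the problem into components, score each for
# risk, and always tackle the riskiest outstanding one first. The components
# and scores below are purely illustrative.
components = [
    {"name": "secure access to scan-event data", "risk": 9, "done": False},
    {"name": "engineer delivery-time features",  "risk": 5, "done": False},
    {"name": "train and tune the model",         "risk": 3, "done": False},
]

def next_component(components: list[dict]) -> dict:
    # Re-run this after every experiment, once risk scores are updated (step 6).
    outstanding = [c for c in components if not c["done"]]
    return max(outstanding, key=lambda c: c["risk"])

print(next_component(components)["name"])  # -> secure access to scan-event data
```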

The way to approach the iterative improvement phase using the Hypothesis Driven Approach is to think of it as though you are trying your very best to kill the project as fast as possible. So, as in Statistical Hypothesis Testing, your null hypothesis should always be that the next riskiest part of the problem is not solvable and that the project should therefore be killed immediately – and you only carry on when your experiments disprove it. This mentality and approach will keep you Lean and help you Fail Fast. And of course, if you still manage to make it to the Definition of Done, you will have a really good, proven solution to deploy.

While anyone can think up a great hypothesis, in order to remain as Lean as possible it is very important in step 4 to restrict yourself to data-based hypotheses only. You will find this to be one of the toughest behaviour-change aspects of the Lean Start-up approach. As curious scientists, we can think up many hypotheses, but chasing hypotheses that your data doesn’t support will only waste time and effort. So, at the start you might have to force yourself to stick to data-based hypotheses only. For example, at the start of our Lean Start-up journey, I didn’t allow anyone in my team to work on a hypothesis if they couldn’t show me a visual (e.g. a graph) of the data that supported the hypothesis they were working on. This soon became a good habit, and we no longer need our whiteboard with the supporting graphs pasted on it!

The way to ensure you stick to data-based hypotheses is to split your test data based on the outputs of the current best model, and explore the data to identify the differences between the test examples the model did well on and the rest. This could, for example, highlight features that the model might be missing (e.g. all the good examples are on a weekday and all the wrong ones are on the weekend, so day of the week could be an important feature). With this approach, in subsequent iterations you will need to explore multiple dimensions of the data at the same time, and hence the ability to deep-dive into and explore the data, especially in a visual way, is an essential skill for this step.
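As a sketch, assuming the test set carries a boolean flag marking where the current best model was correct, this split-and-compare step might look like the following; the column names are hypothetical:

```python
# A sketch of the split-and-compare step. Assumes the test set carries a
# boolean 'correct' flag from the current best model; column names are
# hypothetical.
import pandas as pd

def compare_by_correctness(test: pd.DataFrame, feature: str) -> pd.DataFrame:
    """Distribution of `feature` among correctly vs. incorrectly predicted
    examples -- a large difference is evidence for a data-based hypothesis."""
    return (test.groupby("correct")[feature]
                .value_counts(normalize=True)
                .unstack(fill_value=0))

# e.g. compare_by_correctness(test, "day_of_week") might show errors clustering
# at weekends -- the visual evidence needed before working on that hypothesis.
```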

Finally, when you are re-prioritising in step 6, it is important to also re-run your risk assessment. This ensures that you are always working on the next riskiest part of the problem, based also on what you have learnt from your experiments so far.

Running Your Data Science Team as a Lean Start-up

The Lean Start-up approach and methodology are not only applicable to the technical aspects of Data Science projects. At Royal Mail, I also run the Data Science team as a Lean Start-up in every sense. In practice, this means I recruit a talented and diverse team, share my vision with them and then let them get on with making it a reality. As Eric describes in his book, you should only introduce processes as a response to something that isn’t working, and so at the start we had no processes on how we should be doing things, and even now we have very few. The team retrospectives are key to continuous improvement and to identifying things that are not working and require a process to be introduced as a fix. I pull my entire team together for just an hour every fortnight, and we discuss what went well, what didn’t go well and what we can do differently going forward as a team. We have an open and honest discussion, celebrate our successes and agree three actions we will focus on to make us an even better team over the next fortnight.

Running a team like this requires absolute trust in my team. It means empowering everyone in my team to challenge everyone (including me!) and everything, if they have an alternative idea or they don’t understand why we are doing something in a particular way. To encourage this, I operate a “no-blame” culture, where I take responsibility for anything that goes wrong, and we use the “five whys” approach described in Eric’s book to get to the root cause of any issues and fix these. This empowers everyone to speak-up and provide feedback at the earliest opportunity so that we fix most issues before they actually become an issue.

Considering your team as a Lean Start-up that you have invested in also means that you must be very careful about how you spend the resources available to you. This of course means cutting out all waste, such as (and especially!) time spent in unnecessary meetings. To this end, I empower my team to refuse to attend any meeting that is not adding value to their work and to find alternative ways to provide updates to stakeholders (e.g. via a quick e-mail, or via Slack). However, what is even more important (especially for a Start-up) is investing in strategic growth areas, such as dedicated training time for your Data Scientists (even though the return on that investment comes later on), and ensuring they make use of it. You should also invest in providing your Data Scientists with the equipment they need, ensure they have access to the data they need, etc., so that you maximise their efforts. It is important to remember that investing in your people is by far the most important thing you have to do as a leader.

Running a successful Lean Start-up also means experimenting with everything and iteratively learning and adapting all the time. This way of working is what has helped us continuously adapt and improve the Lean Start-up approach to our technical Data Science work at Royal Mail. This approach also helps us continuously challenge the status quo in even the most traditional aspects of building and running a team, such as recruitment. For example, we ran a recruitment charity Hackathon at which we effectively processed 40 candidates and hired 7 Data Scientists in a single day.

Running my Data Science team at Royal Mail as a Lean Start-up in this way has also really helped embed the right culture, both within my team and among our stakeholders. Everyone then understands that you work in a different way, and so they helpfully engage with you in a different way. Together, you and your stakeholders can then transform your business, and it is therefore something I would wholeheartedly recommend.

Summary and Conclusions

Data Science and Start-ups have a lot in common, and hence the Lean Start-up approach can also help you maximise your return on investment in Data Science. Using the Innovation Accounting Framework will help you get from a new idea to a successful Data Science solution in the shortest possible time using the least amount of resources. The Innovation Metric will help you both track your progress and keep your stakeholders updated. After quickly building an MVP, you should use the Hypothesis Driven Approach and try your very best to kill the project as fast as possible, by always working on the next riskiest part of the problem. This mentality and approach will keep you Lean and help you Fail Fast. You must also train yourself to chase only data-based hypotheses and to be Agile about being Agile. Finally, running your Data Science team as a Lean Start-up in every sense will also really help you embed the right culture both within your team and among your stakeholders.

I hope you found this practical guide to applying the Lean Start-up approach to Data Science useful. And I hope it inspires you to approach Data Science the Lean Start-up way, so that by delivering significantly more return on effort, together we can then transform the hype surrounding Data Science into reality.

Ben Dias will be at The AI & Big Data Expo Global 2019, speaking on the Big Data Business Solutions track. Please note this is a paid track for Gold & Ultimate passes only; you can purchase your ticket here.