Chris Ballard

What can Shakespeare teach us about planning successful AI projects?

Updated: Feb 19

Planning successful AI projects in complex technical and business environments is hard because it's full of uncertainty. Take a lesson from The Bard - don't fear the unknown and address the uncertainties common in AI projects.

Statue of Shakespeare

Every few months, I’m involved in “Big Room Planning”, where all the Engineering teams get together to plan and decide on the priorities for the next quarter. My experience of planning AI projects is that they can be very experimental and full of uncertainty. This normally runs contrary to business and product expectations, which demand certainty. The situation becomes more difficult where there are dependencies on other teams that need to be coordinated so they can plan their work accordingly. Providing business leadership, product management and other engineering leaders with definitive timescales and deliverables can be extremely challenging.


Be not afeard; the isle is full of noises

Olympic Bell at the London Stadium

Outside the London Stadium, which hosted the 2012 Olympic Games - and our last Big Room Planning - is the Olympic Bell.


It’s inscribed with the words “Be not afeard; the isle is full of noises”, a quote from Shakespeare’s The Tempest, Act III, Scene II. As soon as I read them, they struck a chord with me (no pun intended!).


In this scene, Caliban, who has spent his whole life on an island, tries to reassure new arrivals that the sounds on the island are nothing to fear.


Planning a successful AI project is an uncertain business. There are lots of ways we could approach a problem, and uncertainty about whether we can solve it and how.


We can often approach planning, especially in complex environments, with a lot of trepidation. We fear giving specific dates in case we’re hauled over the coals if we fail to reach them.


Finding the right path through all that noise and uncertainty can be a difficult and stressful experience as there is a lot of opportunity for things to go wrong.


Reducing uncertainty in AI projects

If we’re going to address uncertainty, we need to know where it comes from. We can then identify strategies to address it.


In my experience, having worked on machine learning projects in settings ranging from startups to large multinational organisations, there are a few common sources of uncertainty. Doubtless there are others, and I’d love to hear about any you have come across.


So, from my perspective, what are some of the most common sources of uncertainty in AI projects?


  1. The problem we’re solving

  2. Breaking the problem down

  3. The data we’re using

  4. Putting the cart before the horse

  5. Do the simplest things first

  6. Knowing when to stop

  7. Knowing what success looks like to the business


In this post, I’ll give a brief introduction to each of these sources of uncertainty, and why it can be a problem. In later posts, I’ll dig into each one a bit more to try and set out some strategies to overcome them.


1. The problem we’re solving

It's important to have a clear and unambiguous definition of the problem that we’re trying to solve. This needs to be understood from a non-technical business perspective. All stakeholders in the project need to have signed off on this problem statement.


Having the right understanding of the problem helps to ensure that you interpret the data in the right way, and that the right technical solution is designed and researched to tackle it. Uncertainties arise when it is difficult to frame the problem appropriately, or when it has been incorrectly specified or understood. A good definition of the problem will help you to navigate the technical choices you need to make down the line. It might be the deciding factor that tells you whether a complex deep learning model is required, or whether simple heuristics solve the problem well enough.


2. Breaking the problem down

The business problem will often need to be decomposed into simpler sub-problems, as it may not be feasible to solve it in its entirety in one go. By breaking it down, we start to recognise how best to think about the problem and identify framings that are simpler, more tangible and easier to solve. However, it will usually not be possible to break the problem down at the very start of the project. Some data exploration and research will be needed to figure out the best way to decompose it. Uncertainty arises when we do not know how best to break the problem down, and we can spend a lot of time researching sub-problems that are intractable or which require data we don’t have.


3. The data we’re using

How is the data generated? Are there any data generation processes (either automatic or manual) which affect how you need to interpret it and use it down the line? For example, are there any temporal relationships in the data - if certain data points are generated after the thing you are modelling, this may lead to information leakage. Always try to interpret the data from the perspective of the problem you’re solving and the processes which create it. Mistakes and uncertainty creep in when we blindly use data without giving due consideration to these factors. A common source of these problems is incorrectly specifying the dataset that will be used to evaluate our model. More on this in a later post.
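As a concrete illustration, here is a minimal sketch of one way to guard against that kind of temporal leakage: drop features generated after the event being modelled, and split the data chronologically rather than randomly. The column names (event_time, feature_time) and the pandas-based approach are my own illustrative assumptions, not a prescription.

```python
import pandas as pd

def drop_leaky_rows(df: pd.DataFrame) -> pd.DataFrame:
    """Remove rows whose features were recorded after the event we want to predict."""
    return df[df["feature_time"] < df["event_time"]]

def time_based_split(df: pd.DataFrame, cutoff: str):
    """Split chronologically, so the evaluation set only contains events
    that happen after everything in the training set."""
    train = df[df["event_time"] < cutoff]
    test = df[df["event_time"] >= cutoff]
    return train, test

# Hypothetical file and column names, purely for illustration.
df = pd.read_csv("events.csv", parse_dates=["event_time", "feature_time"])
train, test = time_based_split(drop_leaky_rows(df), cutoff="2023-01-01")
```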


4. Putting the cart before the horse

Far too often, I see business and technical teams become excited by a technology and fixate on it as the solution to their problem. If you start to talk about the technology before you have identified the problem, and fixate on technical solutions at the expense of understanding the business problem properly, you introduce a significant risk of project failure. One previous manager I worked for used to refer to AI as “magic fairy dust”. I think this is a good way to think about it. Talking about AI is great for sales and sounds exciting and attractive to clients, but is it the right technical solution to our particular problem? Large Language Models might solve our problem, and they are undeniably cool, but can we deploy them with the latency we need for our particular use case? Defining the problem and its constraints first is key here.


5. Do the simplest things first

Personally, after breaking down the problem, I find it tempting to work on the most complex and interesting sub-problems first. I’d recommend that you only do this if that problem is a critical step without which you will be unable to deliver the project. Even then, make sure that you try simple baselines before more complex approaches. Ideally, it is better to build an end-to-end solution to the problem where each step is simpler. It might not perform as well as more complex approaches, but you have a potentially deployable solution which could already make a difference to the business. Doing so will improve your understanding of the problem and reduce uncertainty.
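To make that concrete, here is a minimal sketch (using scikit-learn on synthetic data, purely for illustration) of establishing a trivial baseline and a simple model before reaching for anything more complex:

```python
from sklearn.datasets import make_classification
from sklearn.dummy import DummyClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic data stands in for the real problem - purely illustrative.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Step 1: a trivial baseline. Any model that can't beat this isn't adding value.
baseline = DummyClassifier(strategy="most_frequent").fit(X_train, y_train)

# Step 2: the simplest "real" model, before anything deep or expensive.
simple_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

for name, model in [("baseline", baseline), ("logistic regression", simple_model)]:
    print(name, accuracy_score(y_test, model.predict(X_test)))
```

If the simple end-to-end pipeline already clears the success criteria, the more complex approaches may never be needed.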


6. Knowing when to stop

Carrying out AI research is fun and intellectually rewarding. However, the bottom line is that the business is only interested in a solution to its problem that makes a difference. That means we need to put it live. Having research results is important, but they won’t make a difference unless the business can use them. Therefore, it's important to know when to stop research and focus on turning it into something the business can use. But how can you do this? This is where having the right metrics is important. You need clear, unambiguous metrics and success criteria against which you evaluate every experiment you carry out. However, judging how long you need to reach your goals is difficult - will you need 2 weeks or 2 months to achieve the success criteria you have in place? I’ll dig into this more in a later post.
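One lightweight way to enforce that discipline is to encode the agreed success criteria and check every experiment against them automatically. The thresholds below are hypothetical placeholders; the real ones would be agreed with the business up front.

```python
# Hypothetical success criteria agreed with the business before research starts.
SUCCESS_CRITERIA = {"precision": 0.90, "recall": 0.80}

def meets_success_criteria(results: dict) -> bool:
    """Return True only if every agreed metric meets or exceeds its threshold."""
    return all(results.get(metric, 0.0) >= threshold
               for metric, threshold in SUCCESS_CRITERIA.items())

# Every experiment gets judged against the same fixed criteria.
experiment_results = {"precision": 0.93, "recall": 0.78}
if meets_success_criteria(experiment_results):
    print("Criteria met - stop researching and focus on getting it live.")
else:
    print("Keep iterating, or revisit whether the criteria are realistic.")
```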


7. Knowing when you are successful

Related to the above point, by comparing your results against your success criteria, you can judge whether your solution is viable and whether the project can be considered successful. However, there is often a disconnect between the technical metrics that Data Scientists normally use to measure model performance - such as precision and recall - and the way that the business views success.


In one project I worked on, the objective was to bring about cost savings by automating a complex data entry task using machine learning. It was possible to measure accuracy directly, but measuring the potential cost savings was much more difficult. Higher accuracy was clearly better, but what level of accuracy did we need to deliver a cost saving compared to the current system? How did we know when we could stop research and had reached the level at which we could deploy? Simply continuing research to chase ever-better accuracy was not viable, as the rising cost of research reduced the return on investment of the project. There were lots of considerations, such as the time required by a person to identify and correct the errors made by the models, and the impact of the automation already present in the existing system. 🤔


In the end, we used data collected about manual data entry times to estimate the level of accuracy needed to "break even" on data entry cost, giving us a baseline figure that we needed to achieve. Thinking about the problem from the perspective of the business goal enabled us to design proxy metrics against which we could measure success.
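To show the shape of that kind of proxy calculation, here is a back-of-the-envelope sketch. The timings and the simple review-plus-correction model below are illustrative assumptions, not the figures from the real project.

```python
# Purely illustrative numbers: a simple proxy model for the accuracy at which
# automated data entry "breaks even" with fully manual entry.

manual_seconds_per_record = 90    # typing a record from scratch
review_seconds_per_record = 20    # checking a record the model has filled in
fix_seconds_per_error = 300       # finding and correcting a record the model got wrong

def human_seconds_with_model(accuracy: float) -> float:
    """Expected human time per record when the model does the entry and a person
    reviews every record and fixes the fraction the model gets wrong."""
    return review_seconds_per_record + (1 - accuracy) * fix_seconds_per_error

# Break even when review + (1 - accuracy) * fix == manual, so:
break_even_accuracy = 1 - (manual_seconds_per_record - review_seconds_per_record) / fix_seconds_per_error
print(f"Break-even accuracy: {break_even_accuracy:.0%}")   # ~77% under these assumptions
```

Anything above the break-even accuracy should save time per record; the actual target would then be set comfortably above it to cover the cost of the research itself.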


How can we help you?

Do any of these problems resonate with you? If so, we’d love to hear from you! At Justified AI we love to help our clients tackle AI projects in the right way to help set them up for success. If you fancy a chat to see how we could help you, please drop us an email at hello@justified.ai.




