7 Rules For Brilliant IT Discovery Engagements

Assessments and requirements gathering engagements are so passé. What most IT shops do today, instead, is 'Discovery' engagements. This is how to make them work.

Do you have a Napoleon Columbus complex? Then you will feel right at home here.

But wait a second, what is a so-called ‘Discovery’ engagement? Is there any difference between old-fashioned requirements gathering engagements and Discovery ones? Actually, yes. Discovery engagements are perceived by the customer as less committal and more open ended. ‘Discovery’ also confers a feeling of excitement, new beginnings, and even pleasant surprises. This is why no one sells ‘Diagnosis’ engagements, unless the seller is an anti-virus company.

The good vibes, though, are just for the customer. What about the suppliers? Well, for them, Discovery engagements are anything but cheerful open-ended exercises. Not only must they unearth thorough requirements, but often they are expected to provide sufficient content to flesh out complete proposals or statements of work. This is why knowing the seven rules presented here is fundamental!


1. One Discovery Engagement is Not Always Enough. Expect a Second Round.

It is hard to determine the exact amount of ‘Discovery’ that is necessary to ‘discover enough’, or anything at all in some extreme circumstances. These engagements are usually kept short (a handful of weeks) and affordable by the supplier, in the expectation that a large piece of work will come out of them.

An honest Discovery engagement should always leave the door open for a subsequent, more in-depth engagement as a ‘next steps’ outcome as opposed to leading directly to a ‘deal’.

2. The Golden Input-Output Ratio is 1:4. Don’t Just Plan For Social Activities.

If Discovery were about conducting ad-hoc meetings with apparently relevant stakeholders and writing down notes (i.e., meeting minutes), then the job could simply be commissioned to someone with secretarial skills.

Inputs, unless processed, are just scribbles. What do I mean by ‘processing’? Inputs must be processed in the sense that:

  1. Inputs need to be compared against previously acquired inputs, to look for intersections, incongruencies, and outright contradictions. If one stakeholder said ‘The direction of travel is private cloud’, and another, some time later, claimed ‘we are moving everything to AWS’, we have a problem. These two views need to be resolved. One is right and the other is wrong, or perhaps both are right when considering different LOBs (lines of business).
  2. Inputs need to be organised into a taxonomy or organising structure. This means that various subsets of inputs, gathered at different stages, may add up to the same sliver of knowledge. For example, in meeting A, it was claimed that response time may never exceed 600ms. In meeting F, it was claimed that there are up to 1000 concurrent users during Christmas time. These nuggets belong to neither meeting A nor F (although they can be traced to them); instead, they should be homed in the NFR or Performance architectural dimension.
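To make the ‘homing’ idea concrete, here is a minimal Python sketch (all names hypothetical) of filing nuggets under an architectural dimension while preserving traceability back to the meetings they came from:

```python
from collections import defaultdict

# A toy taxonomy: dimension -> list of nuggets, each tagged with its source.
taxonomy = defaultdict(list)

def file_input(dimension: str, nugget: str, source_meeting: str) -> None:
    """Home a gathered input under a dimension, keeping traceability."""
    taxonomy[dimension].append({"nugget": nugget, "source": source_meeting})

file_input("NFR/Performance", "response time may never exceed 600ms", "Meeting A")
file_input("NFR/Performance", "up to 1000 concurrent users at Christmas", "Meeting F")

# Both nuggets now live under the same dimension, yet remain traceable
# to the meetings in which they were gathered.
assert [e["source"] for e in taxonomy["NFR/Performance"]] == ["Meeting A", "Meeting F"]
```

The point of the structure is that the knowledge is organised by dimension, not by meeting, while nothing loses its provenance.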

Over many years, I have concluded that the golden ratio between input gathering and input processing is 1:4. This means that for every 2-hour meeting, 8 hours should be spent on ‘processing’. What happens during an intense, 3-day, full-day workshop then? Does it create twelve days’ worth of input processing work? Yes.

In a nutshell, for every calendar ‘slot’ for social Discovery activities, book four ‘slots’, of the same size, for processing the inputs resulting from those activities.

Note: If the Discovery’s output also includes an elaboration on a tailor-made solution (based on the problem statement), then additional time is required.
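The bookkeeping is trivial, but worth making explicit. A minimal sketch (hypothetical helper, assuming the 1:4 ratio and 8-hour working days):

```python
def processing_hours(gathering_hours: float, ratio: int = 4) -> float:
    """Hours to block out for processing the inputs of a Discovery session."""
    return gathering_hours * ratio

# A 2-hour meeting demands 8 hours of processing.
assert processing_hours(2) == 8

# A 3-day workshop of 8-hour days demands 96 hours, i.e. 12 working days.
assert processing_hours(3 * 8) == 96
```

In calendar terms: block four slots of processing for every slot of social Discovery activity, before you commit to any output dates.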

3. The Value Lies in Framing the Problem. Don’t Drum Up Your ‘Solution’.

Many IT shops misunderstand the objective and nature of a Discovery engagement. They believe they are on a mission to fish for pain points. Their goal is often capturing frustrations and mapping them to their ‘solution’ so that ‘all stakeholders’ asks are addressed’. What actually results from engagements conducted in this manner is sheer disillusionment. Instead, the output from a thorough Discovery engagement should be split roughly 80/20 between framing the problem and elaborating a solution.


This means that eighty percent of the effort should be spent on understanding the context and nature of the customer’s problem rather than collecting assorted claims pertaining to frustrations and aspirations. The goal here is framing the problem through the lens of the relevant dimensions that best characterise it.

For example, rather than just capturing a silly pain point and ‘ask’ such as ‘Customer complains that their application is slow. They want microservices’, it would be more helpful to frame the problem in terms of what language the application is written in, whether it runs on an application server, whether its actual bottleneck is its underlying relational database (or something else), etc. The ‘ask’ may actually prescribe the wrong treatment. It is precisely the supplier’s job to diagnose the true nature of the problem, even if it leads to a different treatment that lies outside their competence.

4. Complexity is not Evenly Distributed. Don’t Mismatch Skills.

It is often impossible to determine a priori what the taxonomic distribution of complexity is. No, that wasn’t some nerdy statement to be skimmed over. Let me explain. IT problems, unlike, say, civil engineering problems, are multi-dimensional rather than three-dimensional. On top of that, the number, shape, and incidence of each dimension varies and may not be predicted in advance.

IT architects, in their bag of tricks, have things like the 4+1 model as a reference for the minimum dimensions that an application is likely to exhibit. In this case, an application is expected to have a use case view, a logical view, a process view, a development view, and a physical view. By ‘taxonomic distribution’ what I mean is that we don’t know, say, whether we will find a thousand use cases but only one server, or the other way around, a thousand servers but only one use case. Please note that not all IT projects are application-centric.

‘Of course no one can tell that in advance’ you may argue, but a ‘bad call’ as to how complexity is distributed, dimension-wise, will have dire consequences if the Discovery team’s skill set is mismatched.

Say that the Discovery team consists of a DevOps engineer, an AWS architect, and a Java developer, and that it has been tasked with ‘discovering’ the nature of a “mission critical” application for a large accountancy firm, an application responsible for millions of dollars of the company’s revenue. The end goal is to reimplement said application in the cloud using microservices and containers.

It turns out that the application being discovered is ‘merely’ a headless command line tool that reads Excel spreadsheets dropped on a shared drive, to then produce tax-adjusted versions of the same spreadsheets on a different shared drive. The tool, however, understands a thousand different spreadsheet types and over 130 world tax regimes. In addition, the tool is closed source and the vendor behind it went bust years ago.

When the accountants start explaining tax bands, personal allowances, dividend tax and so on to the engineers, it all goes over their heads. To add insult to injury, the sales guy is phoning every day to ask the engineers if the spec to ‘migrate’ the application to the cloud is ready, given that he is already drafting the statement of work! The engineers hardly produce any output other than assorted notes based on what the accountants told them. Panic sets in. The Discovery is ‘going nowhere’.

What went wrong here? The complexity lay, for the most part, in the use case dimension, for which the required elicitation skills had to be domain-specific. There was not much to discover here from an engineering perspective; instead, the focus should have been on understanding the domain semantics to rationalise those thousand spreadsheet types and those 130 world tax regimes. A more appropriate team composition for this project would have been an expert in domain-driven design (DDD), and perhaps a business analyst with a background in accountancy.

The bottom line is that you shouldn’t select the skills that you need for the team until you have a minimum understanding about how complexity is distributed in the context at hand. This may be accomplished through a survey or a ‘pre-Discovery’ session.

5. Be Aware of Dimension Bias. Don’t Fall for the Golden Hammer Fallacy.

Onboarding the team with the wrong skill set is usually down to what I call ‘dimension bias’. This is when the supplier arrives with a strong dogma as to which dimension is the key one to frame the problem and provide a solution. This bias is a manifestation of the ‘Golden Hammer’ fallacy. Below are three examples of dimension biases:

Example 1

It’s all about journeys. Let’s make the discovery about ‘personas’ and user experience.

Not always. I do have a soft spot for the ‘persona’ dimension, but where most IT peddlers get an ‘F’ from me is that they are normally looking for the retail persona. In a complex enterprise, only a tiny fraction of applications face the retail customer. In most cases, the ‘persona’ is a software developer or IT support. Yes, there’s back-office personnel too, but many IT shops insist on this persona under the ‘digital transformation starts with the employee experience’ slogan.

Example 2

It’s all about the value stream. Let’s put together a value stream map!

Not always. Many IT shops have the delusion of pitching to the CEO, regardless of how small and remote from ‘power’ their project is. In reality, and especially in large enterprises, the individuals that make up the decision making unit (DMU) are rewarded for accomplishing very specific goals. The way in which their goals contribute to the value stream is immaterial. This has been figured out by the powers that be. Don’t insist!

What about the value stream within the confines of the project’s business unit? That’s a much better start, but remember that even in this case, the DMU may want a concrete set of goals to be accomplished, rather than having a third party sniffing under their bonnet.

Example 3

It’s all about the API economy. Let’s focus on modernizing APIs.

Not always. Some applications have one or a few APIs, and the value lies in the API’s implementation, not the API itself. Imagine a single API to which you feed a stock ticker symbol (e.g., ‘AMZN’) and which returns a single number representing the value of said stock when the market closes the next day, with 80% accuracy. If such an application existed, its value would be incalculable, even though the API can be described as String -> Number, without Swagger or any other complications.
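To illustrate, here is a hypothetical Python sketch of such an API: a trivially simple String -> Number signature, with a stand-in table where the (imaginary) 80%-accurate predictive model would sit:

```python
def predicted_close(ticker: str) -> float:
    """Return a predicted next-day closing price for a stock ticker.

    The public surface is one function, String -> Number. The value would
    lie entirely in the model behind it; here a toy lookup table stands in
    for that model, purely for illustration.
    """
    stand_in_model = {"AMZN": 178.25}  # placeholder, not real predictions
    return stand_in_model.get(ticker, 0.0)

print(predicted_close("AMZN"))  # 178.25
```

Modernising this API’s shape would add nothing; framing the Discovery around the API dimension here would miss where the complexity (and the value) actually lives.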

6. Be Ready to Digest the ‘Low-Level’ Bits First. Don’t Jump the Gun.

Everyone understands that the output resulting from a Discovery engagement lasting a handful of weeks will be somewhat ‘high-level’, but maintaining a consistent abstraction level is difficult in two ways. First, most inputs vary in abstraction level, depending on the stakeholder’s knowledge, the quality and length of the available documentation, and so on. Second, it is impossible to pin down the target abstraction level, and to summarise effectively at that level, until sufficient inputs have been collected.

Say that you are an alien and that you’ve been tasked with discovering planet Earth. On your home planet, orbiting Proxima Centauri b, you have continents, so you conclude: “Surely Earth must have continents too! I’ll just figure out what continents Earth has and call it a day!”. However, you are beamed from your flying saucer straight to Manhattan, at the corner of Fifth Avenue and 42nd Street. After walking all day you ‘discover’ Queens, Brooklyn, and the Bronx. You noted boroughs; not cities, not states, not countries, and of course, not continents.

Summarisation is the other issue. Say that your tentacle-bearing boss back home asks you to report at the ‘Continent’ level, but that they can’t beam you back to your ship for you to look at Earth from space; you are meant to figure out the continents by roaming across the Earth and then summarise. We certainly had world maps before satellites, but how many centuries worth of navigation and cartographic effort did they take?

When it comes to summarisation, you sometimes can’t jump the gun. You need to analyse most, if not all, of the samples at hand. Say that you are given 100,000 numbers to analyse and summarise, but you want to reach a conclusion as soon as you see the first few numbers:

First Attempt

2, 3, … Oh, it is an integer sequence starting from 2.

Second Attempt

2, 3, 5, … No, sorry, there are gaps. Oh, I got it, it is a sequence of prime numbers!

Third Attempt

2, 3, 5, 9, … No, sorry, it is a sequence of odd numbers, except for 2.

Fourth Attempt

2, 3, 5, 9, 10, … Sorry boss, again: it seems to be a sequence of odd numbers and multiples of 2.

You get the idea: it is not possible to summarise, and thus abstract effectively, without observing a minimum quantity of samples. You can only do this effectively toward the end of the Discovery engagement.
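The elimination process above can be sketched as code: treat each candidate summary as a predicate over the samples seen so far, and keep only the hypotheses that every sample fits. A minimal illustration (all names hypothetical):

```python
def is_prime(n: int) -> bool:
    """Trial-division primality check, sufficient for small samples."""
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

# Candidate summaries, each a predicate over the list of samples seen so far.
hypotheses = {
    "consecutive integers from 2": lambda xs: xs == list(range(2, 2 + len(xs))),
    "prime numbers": lambda xs: all(is_prime(n) for n in xs),
    "odd numbers, plus 2": lambda xs: all(n % 2 == 1 or n == 2 for n in xs),
    "odd numbers and multiples of 2": lambda xs: all(n % 2 == 1 or n % 2 == 0 for n in xs),
}

def surviving(samples, hyps):
    """Names of hypotheses consistent with ALL samples observed so far."""
    return [name for name, holds in hyps.items() if holds(samples)]

print(surviving([2, 3], hypotheses))            # all four hypotheses still fit
print(surviving([2, 3, 5], hypotheses))         # 'consecutive integers' eliminated
print(surviving([2, 3, 5, 9], hypotheses))      # 'prime numbers' eliminated
print(surviving([2, 3, 5, 9, 10], hypotheses))  # only the last hypothesis survives
```

Each new sample can only eliminate hypotheses, never confirm one; which is exactly why the summary is only trustworthy once enough of the 100,000 numbers have been observed.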

7. ‘No Shows’ and ‘Unhelpfulness’ are Common. Have a Mitigation Strategy.

IT Discovery engagements are social affairs after all. If everything to be discovered were on Confluence or Teams, there wouldn’t be a justification for such a notion. In an average enterprise, it is impractical to hold all employees ransom to the whims of the supplier so that the supplier can happily and effectively pursue its Discovery goals. Stakeholders may be on holiday, in high-priority meetings, and so on.

Even when stakeholders are indeed available, they may not have the kind of answers that are helpful to the intended line of inquiry, and not necessarily because of malice or fear. The masterminds behind many IT systems might have long left the organisation, and as long as the systems continue to function, there’s no reason why today’s personnel should know as much.

In essence, as a response to ‘no shows’ and ‘unhelpfulness’, the following two options aren’t available:

  1. Holding stakeholders at gunpoint to come to meetings or spit out useful information
  2. Moaning about how impossible it is to do business with the customer

Before the Discovery engagement takes place, it is helpful to include a clause in the statement of work (or similar pre-engagement document) along the lines of “the depth, or actual coverage, of the objectives (…) will be subject to the availability of relevant stakeholders and their level of knowledge”. However, we know that this clause will be triggered anyway. Invoking it is of no help, because gaps hurt the supplier more than the customer. Instead, I propose the following mitigation strategies.

For ‘no shows’:

  • Substitute. If the pope isn’t available, talk to the cardinals; if the cardinals aren’t available, talk to the archbishops, and so on.
  • Hypothesise. Fill the gaps based on your team’s gut feeling (once documentation resources have been exhausted) and encourage the customer to correct your assumptions. It is often the case that the customer doesn’t know what ‘right’ looks like, but it is easy for them to tell what ‘wrong’ looks like, instead.
  • Defer (only if everything else fails): State that the gap will be filled as part of a subsequent Discovery process, either on a stand-alone basis, or as a first step in an execution/delivery plan.

For ‘unhelpfulness’:

  • Be in the driver’s seat. Many Discovery engagements are conducted as customer-driven walk-throughs. The advantage is obvious: no preparation is required. The problems arise much later in the process, when the supplier is expected to have mastered the topics at hand through such walk-throughs. The supplier, instead, must be more proactive and select, in advance, the dimensions they want to use to frame the topic.
  • Playback. Rather than producing ‘meeting minutes’, summarise your understanding of the discussed topic, filling gaps with hypotheses and letting the SME (or other stakeholders) correct you if you are wrong.
  • Manage language and expectations: Words like ‘deep dive’, ‘X-Ray analysis’, and so on should never be used to describe meetings, because they promise an outcome that often hinges on the stakeholders’ ability to provide sufficient clarity. When a meeting is being conducted for the first time for the application or domain at hand, call it an ‘Introduction’. If you really want a ‘deep dive’, set the ground rules first, and plan for many follow-up meetings.


I am sure that regardless of whether you wear the supplier or customer hat, you will identify with many of the points I made here. Have I missed something important? Get in touch with me, and I’ll add your comments.