User Requirements for Mission-Critical Software
User Requirements vs. Product Features
We’re so used to window shopping - or I guess the modern equivalent is “swiping” - that we tend to do the same for mission-critical software and equipment. We look for features, brands, and deals rather than being explicit about what we actually require. Then we write the requirements list (if we write one at all) based on the product we think we want.
This can mean we end up getting something that doesn't quite fit our needs. An expensive problem indeed.
Here are a couple of examples I've seen repeated several times:
1. Production decides they need an e-signature system for the records produced by your regulated facility. Corporate purchases an e-signing add-on to that big-name software that the management team uses every day. It's what all the Fortune 500 companies use, so it's the safe bet.
Now you can create email-driven signing workflows to have people both inside and outside of the organisation sign PDF copies of documents, which are then conveniently hosted in the vendor's cloud system along with a verifiable audit trail of each signer. Very useful for contracts, SOPs and even some HR forms.
However, the production department needs raw data to be signed at the time of data entry, on the production floor. In this system the records would need to be migrated, exported to PDF, and then a signing workflow initiated and administered through email. So they went back to their old hybrid system.
They thought they needed an electronic document signing system, but what they actually needed was to be able to sign data entry records in real-time on the production floor.
2. A particular data management system looks perfect for what you need based on the vendor's demos. Your team gets to play with a sandboxed version and everyone agrees that it's a game changer - there are several laborious workflows that are going to be eliminated right away.
Once it's installed, the system doesn't look anything like the demo. It turns out it would take weeks of configuration (not to mention vendor consulting hours) to get the workflows and custom reports implemented that you thought you'd be getting out of the box.
In the end it never gets done, and the system gets used as a very expensive file system with an ID generator.
They thought they needed a LIMS system, but what they actually needed was to automate a few specific workflows.
So what went wrong? In both of these cases they didn't have a clear and complete picture of what they actually needed before going down the purchasing path. So they went for a system that sounded like it would do what they wanted, but didn't actually meet their requirements.
Often we think we know what we need by browsing what's available - reading the marketing materials and feature lists in front of us and saying “I want that!”. We’ve let our actual requirements be implicit to the buying decision, to be influenced or even supplanted by features.
And what's worse is that those marketing materials and feature lists are built to create a feeling of need!
How do we avoid this kind of costly mistake?
We write some good, detailed User Requirements.
Then we share them around all the stakeholders, including QA, regulatory, IT, management and the production floor, to make sure we all understand what we need to get out of the proposed system.
By being explicit with your (team’s) needs, you get a list that you can reference while you’re browsing for a solution. It doesn't have to be immutable, but at least if you decide a feature you saw in a demo is absolutely necessary, you have to explicitly update the list.
Until next time, thanks for reading!
– Brendan
Start and end with the Stakeholders
When we're designing a new system it's so tempting to rush forward to describing the solution in our heads - but that's a good way to end up with the wrong solution!
The process of building requirements should begin with the stakeholders.
One of the ways we ensure that all the stakeholders are included in requirements building is to start with a high-level communications document. In the regulated world we call this the User Requirements Specification - and its purpose is to bridge the users, design, QA, implementation, and validation/testing teams to make sure everyone is aligned on the basic intentions of the system before jumping into functional or detailed design considerations.
How high level should this document be? At this stage, you should have a specification that covers all the user, regulatory and day-to-day QA needs while being broad enough to be satisfied by multiple possible implementations.
As with any written requirements these should be specific, unambiguous, measurable, and verifiable. That way they can be easily translated into test cases and verification methods that can be run by each of those stakeholders at their interface.
For example, the finalized system gets tested at the highest level by the end-user pressing buttons or inputting data; IT querying logs; QA searching through audit trails; management reviewing summary reports; and so on.
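To make the idea of "requirement translates into test case" concrete, here is a minimal sketch (the requirement ID, statement, and check below are invented for illustration, not taken from any real system) of how a traceable user requirement might carry its own verification step that the responsible stakeholder can run:

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class Requirement:
    req_id: str                  # traceable identifier, e.g. "UR-003"
    statement: str               # specific, unambiguous requirement text
    stakeholder: str             # who verifies it at their interface
    verify: Callable[[], bool]   # the test case it translates into


def audit_trail_records_signer() -> bool:
    # Placeholder: in practice QA would search the audit trail
    # for a known test entry and confirm signer and timestamp.
    return True


requirements = [
    Requirement(
        req_id="UR-003",
        statement=("Every signed record must show the signer's identity "
                   "and a timestamp in the audit trail."),
        stakeholder="QA",
        verify=audit_trail_records_signer,
    ),
]

# Each stakeholder runs the checks assigned to them.
for r in requirements:
    status = "PASS" if r.verify() else "FAIL"
    print(f"{r.req_id} [{r.stakeholder}]: {status}")
```

The point is not the code itself but the traceability: every high-level requirement has an owner and an executable check, so nothing stays implicit.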
Start with the stakeholders, then end with the stakeholders.
– Brendan
When do you put together user requirements?
Pop quiz. Let’s say you’re going to order a new HPLC for the lab. When do you put together the requirements document?
1. Once the instrument has been ordered?
2. After the instrument has arrived but before it’s been provisioned?
3. When the “HPLC validation project” has been approved and funded?
4. Once you’ve identified the model and vendor that you want and are filling out procurement’s forms?
How did you choose the model? How do you even know that you need a new HPLC?
Obviously you’re identifying and using requirements very early in the process. Are you writing these down? Are you communicating them with other stakeholders at that time?
Ideally, we should be gathering, writing, and communicating user requirements before we even propose the solution. Why?
Because whether you’re developing regulated spreadsheets, buying new software or installing and validating a new software-controlled instrument, problems become much more expensive the further you get in the process. For example:
• Finding out during validation that the instrument’s software doesn’t meet a regulatory requirement will cost you time and money to either buy and configure the required module or else develop some kind of mitigation.
• Finding out in production or during a study that the instrument you bought doesn’t meet your analysis requirements might mean down time, lost data or product, and perhaps even mothballing that instrument altogether.
• Finding out during an inspection that there was a critical problem with data from your instrument could mean study invalidation, recalls, licensing problems and hits to your brand’s reputation.
In contrast, discovering that a spectrophotometer will serve your particular needs better than an HPLC before you buy anything will only cost you the time it took to really think about and discuss your requirements.
If we can settle on a firm set of actual requirements that aren't tied to a particular solution, then we can use those requirements to test multiple proposed solutions before we even start the design or procurement process. Not only will this reduce the risk of committing to the wrong solution, it will also open you up to discovering and exploring other possibilities that might be better, cheaper or easier.
Until next time, thanks for reading.
– Brendan
What makes a good user requirement?
This week we’ve been talking about writing and sharing user requirements as early as possible, without assuming a solution, and without going into design or implementation details.
But what makes a good user requirement?
Let’s look at an example. I need a new cellphone. What are my requirements?
“My new phone must be fast, have a great camera, and better battery life than my old one.”
Fast at what? My “smartphone” from 10 years ago certainly felt fast when I first bought it, and the camera was certainly better than the flip-phone it replaced. Would that satisfy me now? At least we’ve got something to measure the battery life against - but it’s not very practical unless I actually measured my old phone’s battery life. And am I comparing idle time, or time watching videos?
So while “My new phone must be fast, have a great camera, and better battery life than my old one” feels like it means a lot to me, how will I actually communicate what this means to someone else, like my wife or a salesperson? How will I know if the phone I choose will meet the requirement?
Ok, so we need to be a bit more specific with our requirements. Let’s go the complete opposite direction, and say after I’ve done a little window shopping I come back with something like this:
“My new phone must have an A17 pro-class 6-core GPU with a 48MP main camera and a 3,279 mAh battery.”
Now we’re talking. We’ve got numbers. These are right from the product specs, and if we communicate these to the salesperson they’ll be able to point to an exact model! We’ll be able to positively confirm whether the phone we bought meets the spec. Great!
Were these actually my requirements though? Were they relevant? If you haven’t guessed already, I’ve just specified one of the most expensive phones on the market. Was that my intention writing these requirements? Would something less expensive have met my needs?
We’ve now got a bit of a feel for what makes a good requirement: They need to be clear, specific and unambiguous so that we can communicate them to others. At the User Requirements stage, you should be using non-technical language where possible, with the understanding that you will be communicating with stakeholders from various disciplines and backgrounds.
They also need to be measurable, verifiable and relevant - you need to be able to provide meaningful bounds to your requirement, and then test it against the options. On the other hand, there’s no point in throwing numbers on the page that don’t have a direct bearing on your actual needs.
Remember your goal is to specify your use case for a new phone, not design a new iPhone. You don't need to create a complete and technical checklist of all possible specifications.
With that in mind, let’s try these user requirements again:
1. The new phone must be able to run the following apps without error: [app1, app2, app3…]
2. The new phone must be able to play Ultra HD quality YouTube videos without stuttering.
3. The new phone must be able to take clear photos of at least 16 megapixels.
4. The new phone must take videos that are not blurry when I’m trying to record my toddler running around the park.
5. The new phone must provide a 16-hour day of light use without running out of battery.
6. etc...
We've split the requirements into simple, individually testable chunks. We've kept them high level and relevant.
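A checklist like this can even be scored mechanically against candidate products. Here is a rough sketch of that idea - the candidate phones and their specs below are entirely made up for illustration:

```python
# Known facts about each candidate, gathered from specs and reviews.
candidates = {
    "Phone A": {"runs_apps": True, "uhd_video": True,
                "camera_mp": 48, "battery_hours": 18},
    "Phone B": {"runs_apps": True, "uhd_video": False,
                "camera_mp": 12, "battery_hours": 20},
}

# Each user requirement becomes a yes/no check against those facts.
checks = {
    "Runs required apps":   lambda p: p["runs_apps"],
    "Plays Ultra HD video": lambda p: p["uhd_video"],
    "Camera >= 16 MP":      lambda p: p["camera_mp"] >= 16,
    "16 h of light use":    lambda p: p["battery_hours"] >= 16,
}

for name, specs in candidates.items():
    failed = [req for req, check in checks.items() if not check(specs)]
    if failed:
        print(f"{name} fails: {', '.join(failed)}")
    else:
        print(f"{name} meets all requirements")
```

Whether or not you ever automate it, writing the checks down this way forces each requirement to be individually testable - exactly the property we're after.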
What about Requirement 1? Shouldn't I break it out into sub-requirements with an expectation for how each of my most important apps will behave on the phone? I would only do this if it lends value - for example, if I've got some reason to believe some phones have problems running a critical app. Otherwise, I would add it to a list of assumptions later on, something like "If the app store says it's compatible with this phone, then it will run without error."
Finally, this is an iterative process - don’t be afraid to add more detail and assumptions as you gather information. In our example, Requirement 4 seems a little subjective, and might be a little difficult to measure while comparing phones in a store. Perhaps with a bit of research I could add a more objective technical specification for the camera and processor. Or maybe when coming up with a test later on, I could just read reviews for other people's impressions of the video quality.
Until next time, thanks for reading!
– Brendan