Business user acceptance of robotic process automation (RPA) can be tricky to achieve.
But as Roboyo automation consulting expert Christopher Gibbons explains, users want a sense of control over the processes they’re responsible for.
People want control over their workload and workflows. It may seem like an obvious statement – but it’s one that’s often forgotten when it comes to implementing automation and software robotics, and that leads to issues with business user acceptance.

Overlooking the human desire for control can have unfortunate consequences, not so much for the automation project’s performance, but for your team’s ability and willingness to make use of automation in the first place.
Take the following (painfully real) example: a finance process expert teams up with HR to automate payroll journal processing for a large company. We’re talking 50–60,000 employees, which means payroll postings amounting to several billion dollars a year.
The experts get together and devise a creative solution built around a pool of robots.
Brilliant. The solution comes together nicely and is tested by the experts, who attest to the proper execution of the intended scenarios. A scaled go-live plan is set in motion, with a dozen or so entities included in the first monthly closing run.
“Turn them off, please.”
Those are, in essence, the words used by the regional teams in charge of payroll after just a few hours of robot operations.
“Why? What are they doing incorrectly?” ask the experts.
“We don’t know.”
It’s not that the automation was performing particularly poorly: the issue was that the regional teams did not have enough insight into the outcomes produced by the robots to understand if things were in fact going right or wrong.
In other words, not enough time and effort had been spent on creating the business-appropriate reporting for the teams to feel comfortable, or to feel in control of a highly sensitive and narrowly time-bound activity.
The regional teams preferred to do it manually because they knew where they stood – in contrast to the automated process which, while in principle well designed, was seen as an inscrutable black box.
When asked about how they design and implement operational reporting for their robots, many automation teams say they rely on the out-of-the-box reporting capabilities of their automation platforms to provide insight into process performance.
The problem with this approach is that it is roughly equivalent, in human terms, to asking HR if you’re doing a good job on a specific task.
Now, HR have records of your attendance, so they know if you show up for work – and they know if people are reasonably happy with your performance in general terms. But would they know if you’re nailing your sourcing negotiations or if you’re producing high-quality market intelligence reports? It’s probably safe to say they wouldn’t. The person who would know best how you are performing on different tasks is your manager, because they know the specifics.
Out-of-the-box reporting from your RPA orchestrator or control room does, of course, provide crucial information about the performance of your automations. For example, are the robots up and running? Any service failures? Any robots failing to show up for work, if we use the HR analogy from above?
This reporting is useful for the automation teams themselves, as they are in charge of monitoring the overarching automation infrastructure and service delivery – but it is hardly suited for business users who need specific insight into process outcomes.
What they need to know is: was this batch of account reconciliations created, and if so, are there any accounts that I should be worried about?
When defining the criteria for the reports that will have to be designed and built (more on that later) for your business users, the following guidelines may come in handy:
Your users want to know if the 20 account reconciliation files they asked for have been created or not. The fact that the underlying robots ran or not, while related, does not directly give them the information they need to feel in control.
Similarly, while it may be easy to structure your reporting along the lines of your solution design (one report for the dispatcher, one for each of the performers), this will likely confuse your end users, for whom those notions hold little meaning.
If you do this, you are condemning them to open multiple reports in order to piece back together the full picture.
Every transaction going into the process should be reported on, regardless of its outcome.
Most automation teams we interact with use notification emails for business or system exceptions, which is absolutely a best practice. But these notifications should complement your reporting, not replace it.
First, it is inconvenient to have to rifle through multiple emails (potentially received by multiple people) to understand how many transactions may have gone wrong (and by deduction, how many we assume to have gone correctly).
Second, not reporting on successful transactions from the robot perspective is a missed opportunity to provide extra insight into the outcome – which may still require action from the business.
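The complementary pattern can be sketched in a few lines of Python. This is an illustrative sketch, not tied to any particular RPA platform, and the names (`record_outcome`, `notify`) are assumptions: every transaction is appended to the report, while only exceptional outcomes additionally trigger a notification.

```python
def record_outcome(report_rows, notify, txn_id, status, detail=""):
    """Append EVERY transaction outcome to the report; notify only on exceptions.

    `notify` is any callable that delivers a message (e.g. an email sender).
    """
    report_rows.append({"txn": txn_id, "status": status, "detail": detail})
    if status != "SUCCESS":
        notify(f"Transaction {txn_id} ended with {status}: {detail}")
```

The report stays the single, complete record of the run; the notifications merely draw attention to the rows that need it.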
Taking the account reconciliation example above, the robot’s mission may end when the subledger and general ledger extracts have been downloaded and their total values compared (transactional success) – but what if the values don’t match?
The business user will need to investigate, of course, but if that variance is in the report produced by the robot, there is no need to open each and every reconciliation file to see which items require follow-up action.
Finally, from an audit perspective, reports with a complete set of input/output information will save you many a discussion with internal control teams or auditors who want to ensure that no key control points have transactions that go unaccounted for.
In terms of reporting, you need to find a sweet spot where you are presenting neither too much information nor too little.
When discussing the need for specific reporting, a common argument against the whole idea is that log files are available with all the information about a script’s execution.

The trouble with logs is that they violate the principle of relevance: useful information is lost in a sea of data points that may only be relevant in the case of a specific exception or error – and even then, if the logs have not been customized, they may not be comprehensible to anyone unfamiliar with the inner workings of the orchestrator or the script.
On the flip side, offer too little information and your end users generally will not have enough to work with in case an issue arises.
Taking our example of account reconciliations, a useful set of information to extract for each transaction (account, entity, reconciliation period) would be a flag indicating if the reference subledger download was successful, a similar one for the general ledger download, and the value of the variance between the two balances.
Additional data points could be added in case one of the extraction flags indicates a failure, to help identify the probable root cause.
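As an illustration of that sweet spot, here is a hypothetical Python sketch of the per-transaction report just described: one row per reconciliation, with the two extraction flags, the variance between the balances, and an error detail column populated only on failure. The field names are assumptions, not a prescribed schema.

```python
from dataclasses import dataclass, asdict, fields
import csv

@dataclass
class ReconRow:
    """One report line per transaction, success or failure alike."""
    account: str
    entity: str
    period: str
    subledger_ok: bool      # reference subledger extract downloaded?
    gl_ok: bool             # general ledger extract downloaded?
    variance: float         # difference between the two balances (0.0 = matched)
    error_detail: str = ""  # populated only when an extraction flag is False

def write_report(rows, path):
    """Dump every row to a CSV file the business user can open directly."""
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=[fld.name for fld in fields(ReconRow)])
        writer.writeheader()
        for row in rows:
            writer.writerow(asdict(row))
```

A business user scanning this report can immediately filter for non-zero variances or failed extractions without touching the individual reconciliation files.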
This should go without saying, but don’t forget to provide the right access for your end users.
This touches on the broader question of where you want to hold these reports in the first place. Will they simply be Excel reports created for each run, deposited in a shared folder?
While simple to implement, successive Excel reports typically don’t allow you to analyze the full history of a use case – to get a sense of its success rate over time – and they also limit your ability to create self-service reporting.
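One lightweight way to preserve that history, sketched below under the assumption that a simple SQL store is acceptable in your environment: have each run append its per-transaction outcomes to a cumulative SQLite table, which turns "what is my success rate over time?" into a one-line query. The schema and function names are illustrative.

```python
import sqlite3

def log_run(db_path, run_id, outcomes):
    """Append one run's per-transaction outcomes to a cumulative history table."""
    con = sqlite3.connect(db_path)
    con.execute("""CREATE TABLE IF NOT EXISTS run_history (
                       run_id  TEXT,
                       account TEXT,
                       success INTEGER)""")
    con.executemany("INSERT INTO run_history VALUES (?, ?, ?)",
                    [(run_id, account, int(ok)) for account, ok in outcomes])
    con.commit()
    con.close()

def success_rate(db_path):
    """Overall share of successful transactions across all recorded runs."""
    con = sqlite3.connect(db_path)
    (rate,) = con.execute("SELECT AVG(success) FROM run_history").fetchone()
    con.close()
    return rate
```

The same table can then feed a self-service dashboard, instead of leaving the history scattered across dozens of one-off files.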
Should you choose a more advanced solution – e.g., Power BI, Kibana, UiPath Insights, Automation Anywhere Bot Insight analytics, a standard SQL database, or the Cockpit module of our own Roboyo Converge platform – be sure to anticipate licensing and access requirements as well as a minimum of training for end users. Otherwise, your meticulously designed report will remain woefully underused.
To sum up: there are no secrets to designing the reports that will give your end users a sense of control. Show some empathy, try to see things from their perspective, and you should land on your feet.
If reporting is vital, how do you plan to make sure it has its proper place in the development lifecycle? It’s worth remembering the following advice:
Reporting is a key deliverable of the design phase
As we have seen, the reporting attached to a specific use case is as fundamental to its success as any of the actions performed by the automation itself.
The right time to capture reporting requirements and design the output format is the design phase – and it should be part of the package your process owner signs off on before you move to the build phase.
If you fail to design your reporting specifications early, you will not only compromise quality but also force the development team to redo work – since the logical tests performed during process execution to produce flags or KPIs are inherently part of the script.
Your technical design also needs to specify how the information is output to its final destination (via queues, logs, or direct action by the robot, e.g. writing to Excel), so think about all of this up front.
Reporting is the cornerstone of RPA user acceptance testing (UAT)
User acceptance tests for robots are often performed with no reporting in place. This is a bad idea, because reporting is one of the only ways for users to understand what the automation is doing – aside from the transactional artifacts directly created by your scripts.
This isn’t the place to cover what makes a good RPA UAT, and what evidence is acceptable to back up the outcomes of the tests. But UAT is a breeze if you have built your reporting correctly. Run your scripts on the sample transactions you have selected, check that the artifacts or error messages you expect are in place and that the report reflects those outcomes, and you are done.
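That comparison step can itself be automated. The sketch below uses hypothetical names and assumes the report exposes one status per transaction: it diffs the report against the outcomes the testers expected for the UAT sample and returns any mismatches.

```python
def check_report_against_expectations(report_rows, expected):
    """Compare the robot's report with the expected outcome of each UAT sample.

    `report_rows` is the report as a list of dicts (one per transaction);
    `expected` maps a transaction key to the status the testers anticipated.
    Returns a list of (key, expected, actual) mismatches; empty means pass.
    """
    actual_by_key = {row["account"]: row["status"] for row in report_rows}
    mismatches = []
    for key, expected_status in expected.items():
        actual = actual_by_key.get(key, "MISSING FROM REPORT")
        if actual != expected_status:
            mismatches.append((key, expected_status, actual))
    return mismatches
```

A transaction that never made it into the report surfaces as a mismatch too – which is exactly the kind of gap UAT should catch.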
Reporting is as much an element to be tested as it is an enabler of testing overall. Yes, it’s work. But it’s well worth it.
Designing and implementing the right kind of operational reporting for your automations takes effort. This stuff does not come out of the box. Having said that, once you’ve built a few, refined your methodology, and decided on what your go-to reporting format/system should be, the associated effort will decrease – and barely make a dent in your use case delivery lead time.
If anything, the newfound ease of performing UAT should accelerate your rate of delivery.
In any case, even if it is work, the benefits in terms of change management alone should make this a no-brainer. Your end users’ level of satisfaction and acceptance of automation should increase significantly, which at the end of the day is what we all want.
Give them a window into the black box. Give them the means to reassert their control. Give the people what they want.
Take automation to the Next Level
At Roboyo, we’re hyperautomation experts – and can help you to enable business user acceptance of RPA. By doing this, we help you build confidence in automation – so you can start seeing immediate benefits for your enterprise.
Book a meeting with one of our experts and take your business to the Next Level. Now.