Has the recent trend of no-code or low-code automation solutions given the community the wrong impression that automation is easy?
Having been dazzled by vendor demos of clicking a few boxes, drawing a flow chart and, within seconds, connecting to Microsoft Outlook, manipulating an Excel spreadsheet and performing a currency conversion, surely real-world automation is simple. Right?
Running a Centre of Excellence across a large number of healthcare organisations allows us to see automations which, on the outside, appear to be working fine; look under the covers, however, and there is a Frankenstein’s monster waiting to unleash havoc on your enterprise automation plans.
This ‘anyone can do it’ culture is misleading and, in the longer term, will create technical debt and waste both bot and human time.
Let me explain…
Creating a local attended automation for personal use is no big deal. It is your own creation that you will need to manage, and scalability is limited.
But if your ambition is to build and scale enterprise-level automations, then you need to define a standard approach to all aspects of the automation lifecycle. Many vendors have published best-practice approaches or models, which are helpful, but take this guidance and make it relevant to your own strategic aspirations and automation plans.
Whether you are a sole developer or part of a bigger team, creating processes in a consistent way will not only lessen the support burden but also allow processes to be maintained by multiple developers. More importantly, designing code in modular form allows it to be reused across multiple processes and means less effort when handling application updates and system changes.
Consider structuring processes using pages, passing data items as inputs and outputs between pages and in and out of objects, using consistent naming conventions, applying appropriate colour coding, and annotating the code to ease understanding, especially where complex logic is used.
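RPA tools express pages visually, but the underlying discipline can be sketched in plain Python: each “page” is a small unit with explicit inputs and outputs, wired together by a main page. All function names and data below are illustrative, not taken from any vendor’s API.

```python
from datetime import date

def extract_invoices(source_path: str) -> list[dict]:
    """Page 1: read raw records from a source file (stubbed with sample data)."""
    return [{"id": "INV-001", "amount": 120.0}, {"id": "INV-002", "amount": 75.5}]

def enrich_invoices(invoices: list[dict], run_date: date) -> list[dict]:
    """Page 2: add derived fields; data flows in and out, no hidden state."""
    return [{**inv, "processed_on": run_date.isoformat()} for inv in invoices]

def summarise(invoices: list[dict]) -> dict:
    """Page 3: produce a summary for reporting."""
    return {"count": len(invoices), "total": sum(i["amount"] for i in invoices)}

def run_process(source_path: str) -> dict:
    """Main page: wires the modules together via inputs/outputs only."""
    raw = extract_invoices(source_path)
    enriched = enrich_invoices(raw, date.today())
    return summarise(enriched)
```

Because each page only depends on its declared inputs, any one of them can be reused in another process or swapped out when an application changes, without touching the rest.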
Develop for Multibots
A common error we see is automations written to run on a specific digital worker. The ability to run a process on multiple bots is critical, not only to add capacity at peak times but also for business continuity, allowing the target bot to be switched in the event of a system error.
Too many times we see essential items hard-coded into the process, such as file paths and names, or usernames and passwords (a big no-no!), and the applications required for the process to run not installed on all bots.
Use environment variables and the credentials vault to make processes safe and secure.
The best processes will not fail at the first sign of trouble but will attempt to recover in the most efficient way.
Think carefully about how your exception handling will work. How can you categorise exceptions so they are triaged by the most appropriate person?
The most frustrating part of trying to diagnose a fault is when the audit log contains only a non-descriptive error code. A great automation will point a developer to the exact place the fault occurred and explain why.
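One common way to support that triage is to split failures into business exceptions (bad data, handled by the operations team) and system exceptions (technical faults, handled by IT), each raised with a message that names the step and the reason. The class and function names below are illustrative, not a vendor API:

```python
class BusinessException(Exception):
    """Data or rule problem, e.g. an invalid reference number."""

class SystemException(Exception):
    """Technical fault, e.g. an application that failed to respond."""

def validate_reference(ref: str) -> None:
    """Raise a descriptive business exception instead of a bare error code."""
    if not ref.startswith("INV-"):
        raise BusinessException(
            f"validate_reference: '{ref}' is not a valid invoice reference "
            "(expected prefix 'INV-')"
        )

def triage(exc: Exception) -> str:
    """Decide who should look at the failure, based on its category."""
    if isinstance(exc, BusinessException):
        return "route to operations team"
    return "route to IT support"
```

An audit log entry built from that message tells the reader which step failed and why, with no decoding required.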
So many times we see automations running many thousands of transactions without using bot queues. Not only does this mean multiple bots cannot be used, but the moment the process fails we have absolutely no idea which transactions have been processed and which remain.
Using queues allows us to manage individual transactions, to see the status of each and to retry any transactions that may have failed.
Producing management and exception reports is much easier with a queue.
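Work queues are a built-in feature of most RPA platforms, but a minimal in-memory sketch shows why they matter: every transaction becomes a queue item with its own status and retry count, so a mid-run failure never loses track of what remains. The names and statuses below are illustrative, not a specific vendor’s schema.

```python
from dataclasses import dataclass, field

@dataclass
class QueueItem:
    data: dict
    status: str = "pending"      # pending -> completed | exception
    attempts: int = 0

@dataclass
class WorkQueue:
    items: list = field(default_factory=list)
    max_retries: int = 3

    def add(self, data: dict) -> None:
        self.items.append(QueueItem(data))

    def next_pending(self):
        """Any bot can pick up the next unworked item."""
        return next((i for i in self.items if i.status == "pending"), None)

    def complete(self, item: QueueItem) -> None:
        item.status = "completed"

    def fail(self, item: QueueItem) -> None:
        """Retry a failed transaction until the retry limit is reached."""
        item.attempts += 1
        item.status = "pending" if item.attempts < self.max_retries else "exception"

    def report(self) -> dict:
        """Counts per status: the basis of a management/exception report."""
        counts: dict = {}
        for i in self.items:
            counts[i.status] = counts.get(i.status, 0) + 1
        return counts
```

Because the queue, not the bot, owns the transaction state, several bots can work the same queue in parallel, and a `report()` call shows at a glance what completed, what is retrying and what needs human attention.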
In this blog I have only touched on a few of the areas that should be considered when building manageable and scalable automations. I hope you can see that automation is easy to do badly, but doing it in a way that lessens the support burden and delivers scalable, robust processes takes a little more thought.
If anyone reading this needs any help or guidance, please feel free to reach out.