Leveraging Artificial Intelligence for Increasing Business Efficiency — II

Nucleus Software
5 min read · Apr 30, 2021

by Gaurav Marwaha and Ritika Dusad

About this series
In this series, we try to stay away from the hype and focus on how to apply Artificial Intelligence (AI) to specific business challenges and derive meaningful business benefits from this technology. As Andrew Ng, co-founder and former head of Google Brain and a pioneer of online education through companies like Coursera and deeplearning.ai, recently mentioned in an interview, focusing on being 'AI first' may not be the best approach: "in terms of how I execute the business, I tend to be customer-led or mission-led, almost never technology-led." We concur with his point, and through this series of blogs, we point out different methods to utilize the power of AI while staying focused on solving business needs.

Photo by Firmbee.com on Unsplash

In the first part of this series, we talked about how corporate leaders may extract business value from Artificial Intelligence (AI). While there are multiple AI tools, techniques, and algorithms to choose from, we discussed which factors to consider when selecting from this myriad of tools to harness the true potential of AI. In this blog, we deliberate on identifying a business problem that can be solved using Artificial Intelligence.

We have previously introduced the idea of using AI for workflow optimization in businesses. Any workflow describes a series of stages executed, in today's digitized world, through a combination of man and machine. A critical task performed in this workflow is data entry. Many businesses set up centralized data entry centers where the quantum of work performed depends on the pace of data entry. Optimizing workflow for such centers requires an understanding of the typical efficiency of different users and programs, along with domain knowledge.

Before we solve the problem of optimization, we would like to emphasize the significance of the ‘data’ that is fed into regression analysis tools or AI algorithms to generate optimal models. It is crucial to understand the nature of this data to know which AI tool to use.

This issue is discussed very well in McKinsey's 'An Executive's Guide to AI'. A simple example: for a restaurant manager to predict optimal staffing for their restaurant, they would take data on the number of customers over a period of time and feed it into an AI algorithm such as 'Random Forests'. Deciding upon the correct match of data and AI tool is critical in determining the accuracy of the predictive model generated. In the following few paragraphs, we discuss how to understand the attributes of data to make the correct choices.
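To make this concrete, here is a minimal sketch of the restaurant example using scikit-learn's Random Forest regressor. The features (day of week, holiday flag) and the numbers are illustrative assumptions, not data from the McKinsey guide:

```python
# A minimal sketch: predict customer footfall from historical records so
# staffing can be planned. Features and counts below are made up.
from sklearn.ensemble import RandomForestRegressor

# Features: [day_of_week (0=Mon), is_holiday]; target: customers served.
X = [[0, 0], [1, 0], [4, 0], [5, 0], [5, 1], [6, 0]]
y = [120, 115, 260, 340, 410, 300]

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X, y)

# Predict footfall for an upcoming holiday Saturday and staff accordingly.
print(model.predict([[5, 1]]))
```

Random Forests suit tabular historical data like this; the choice would differ for images or text, which is exactly why matching data to tool matters.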

For this article, we consider the use case of optimizing a loan application processing workflow with AI. While a human user enters loan applicant details into the system, the data we are concerned with, say the pace of applicant detail entry, is gathered by machines. Since it is captured automatically rather than keyed in by hand, this data is clean and does not contain anomalies.

Photo by Luke Chesser on Unsplash

Let us take a situation where we purposely exclude human performance from the data, i.e., we assume all humans work at a pre-defined capacity. For example, during the loan application approval process, a third-party scoring service that is supposed to aggregate external credit scores into an internal scorecard model to generate a final scorecard abruptly slows down. Applicant cases start to pile up at one stage of the loan application workflow.

A stage that initially took three seconds now takes seven. The threads tied to processing this stage stay occupied for longer, which ultimately leads to a cascading performance degradation of the system at that stage.
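A quick back-of-the-envelope calculation shows why this cascades. Assuming, purely for illustration, a fixed pool of 100 worker threads serving this stage:

```python
# Maximum throughput of a stage served by a fixed thread pool.
# The pool size of 100 is an illustrative assumption.
THREADS = 100

def throughput_per_second(service_time_s: float) -> float:
    """Cases per second the pool can sustain at a given stage latency."""
    return THREADS / service_time_s

print(throughput_per_second(3.0))  # ~33.3 cases/s at the normal 3 seconds
print(throughput_per_second(7.0))  # ~14.3 cases/s at the degraded 7 seconds
```

More than half of the stage's capacity disappears, so cases pile up even though nothing else in the system changed.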

In this case, the goal for AI would be to minimize the impact of this third-party aggregation stage's sluggishness while a few hundred users are still working with the system.

From the situation mentioned above, we realize that to identify problems AI can solve, it is essential to first decide which entity is to be optimized. Once done, the data attributes to be monitored need to be selected, optimized, and finally measured. For clarity, we outline a sequence of steps below, with an example under each step; a code sketch follows the list.

  1. Define the optimization goal with measurable outcomes
    Minimize the impact of sluggishness at the third-party service interface stage
  2. Select data attributes which contribute to the goal
    Timestamp of initiation of third-party service request, timestamp of completion of third-party service request
  3. Implement procedures to measure and baseline performance of these attributes
    Basic programs that measure the throughput of third-party service requests periodically and store it in a database
  4. Develop AI programs to monitor their current or dynamic values
  5. Develop AI programs to optimize for better efficiency intelligently
    Programs that decide a possible remedy when performance degrades below a certain level
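As a minimal sketch of steps 3 to 5, the snippet below measures the third-party stage, compares recent latencies against a baseline, and flags the stage for remediation when it degrades. The thresholds, the in-memory store, and the simple rule standing in for an 'AI program' are all illustrative assumptions:

```python
import statistics
import time

BASELINE_S = 3.0           # baselined latency of the scoring stage (step 3)
DEGRADATION_FACTOR = 2.0   # flag the stage when latency doubles
measurements = []          # stand-in for the database in step 3

def timed_call(service_fn, *args):
    """Step 3: measure one third-party request and record its latency."""
    start = time.monotonic()
    result = service_fn(*args)
    measurements.append(time.monotonic() - start)
    return result

def check_and_remediate():
    """Steps 4-5: monitor recent latencies and decide on a remedy."""
    if len(measurements) < 10:
        return
    recent_median = statistics.median(measurements[-10:])
    if recent_median > BASELINE_S * DEGRADATION_FACTOR:
        # Illustrative remedy: divert new cases to a queue or a fallback
        # scorer so user-facing threads are not tied up.
        print(f"Stage degraded: median {recent_median:.1f}s "
              f"vs {BASELINE_S:.1f}s baseline; diverting new cases")
```

In practice, the measurements would be persisted to a database (step 3), and the fixed threshold could be replaced with a learned anomaly detector, which is where the AI programs of steps 4 and 5 come in.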

Now that we have understood how to use AI to optimize a workflow stage that does not involve humans, let us finally bring man back into the equation. Interestingly, the greater the number of humans involved in an operation we're trying to optimize, the greater the complexity of measuring the efficiency of the task being carried out.

Why is that, you may ask. For one, the data attributes that need to be considered for generating the optimal model are far more numerous when a human is involved. Secondly, we all know how unpredictable humans are. No, we are not making a case for robots to take over the world yet. One day, maybe! 😆

Coming back to optimizing a workflow stage involving humans: there is a task assigned to a user or a group of users, and we want to minimize the time difference between assignment and completion of a job. The data attributes that need to be considered for such optimization are listed below:

  1. Name of the assignee — human user
  2. Task creation timestamp
  3. Task completion status
  4. Task completion timestamp
  5. Priority of task
  6. Task due-by timestamp
  7. Difference between the time it took to complete the task in this cycle and the baseline
  8. Type of task; for this blog, let us limit it to data entry, document collection, and credit approval

Of the sequence of steps outlined earlier for identifying problems that AI can solve, we have now worked out the first two. We encourage the reader to work through the remaining steps and let us know in the comments section.

In machine learning terminology, the process of using domain knowledge to extract features from raw data, as shown above, is called 'Feature Engineering'. We encourage readers to come up with data attributes to optimize grocery ordering by a restaurant over the year.
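As a minimal sketch of feature engineering on the task attributes listed above, the snippet below turns raw timestamps into model-ready features such as turnaround time, lateness, and each user's deviation from their own baseline. The column names, the sample values, and the pandas-based approach are illustrative assumptions:

```python
import pandas as pd

# Toy records matching the eight attributes listed above (values made up).
tasks = pd.DataFrame({
    "assignee": ["user_a", "user_b", "user_a"],
    "task_type": ["data_entry", "document_collection", "data_entry"],
    "priority": [1, 2, 1],
    "created_at": pd.to_datetime(
        ["2021-04-01 09:00", "2021-04-01 09:05", "2021-04-01 10:00"]),
    "completed_at": pd.to_datetime(
        ["2021-04-01 09:12", "2021-04-01 09:47", "2021-04-01 10:09"]),
    "due_by": pd.to_datetime(
        ["2021-04-01 09:30", "2021-04-01 10:00", "2021-04-01 10:15"]),
})

# Raw timestamps become features: turnaround time, lateness, and each
# user's deviation from their own historical baseline per task type.
tasks["turnaround_min"] = (
    tasks["completed_at"] - tasks["created_at"]).dt.total_seconds() / 60
tasks["late"] = tasks["completed_at"] > tasks["due_by"]
baseline = tasks.groupby(["assignee", "task_type"])["turnaround_min"] \
                .transform("mean")
tasks["delta_from_baseline_min"] = tasks["turnaround_min"] - baseline

print(tasks[["assignee", "task_type", "turnaround_min",
             "late", "delta_from_baseline_min"]])
```

A feature like delta_from_baseline_min lets a model separate a genuinely slow task from a user who is simply slower on that task type.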

As societal interest in AI grows, the spotlight rests on image, voice, and language recognition, which require a lot of training data. In the next part of the blog series, we will explore the road less often taken: AI-based models that may not necessarily require training data. Stay tuned!


Nucleus Software

Nucleus Software is a digital banking solutions provider to the global financial services industry.