Enhancing Document Flow in Finance with Information Extraction

The increasing adoption of technology in the financial world is radically changing consumer demands and digital customer expectations of modern banking. Consumers want to interact with financial institutions on their terms, at a time and in an environment that suits them.
Streamlining processes built upon document flows poses a challenge for the finance industry. However, solutions like automated information extraction can help to achieve this.
Before information extraction (IE) emerged as a widely available technology, retrieving data from documents had a long way to go. Document management in banks and financial institutions has evolved dramatically over the past few decades. Starting with electronic document management in the 1970s, the real boom came in the 1980s when scanned documents became mainstream.
These efforts paid tangible dividends like reducing storage costs and backing up documents more easily as companies used them to tame large volumes of information. They have also enabled much more efficient document retrieval, making it easier for employees to do their jobs, thanks to more advanced algorithms and data science.
However, these approaches improved the efficiency of companies only to a certain extent. Organizations were typically digitizing documents at the end of their lifecycle to extract relevant information for reference purposes. While this saved employees time spent reviewing paper archives, they still had to organize documents while performing ongoing tasks.
Working with a huge amount of text data is always a complex, time-consuming task and comes with high cost. Therefore, many modern finance companies rely on information extraction techniques to automate manual work as much as possible using intelligent algorithms. Extracting information can reduce the need for a big human workforce, reduce expenses, and make the process less error-prone and more efficient.
Below, we examine how these processes can be automated with information extraction algorithms applied to unstructured data using OCR, deep learning, and NLP techniques. Use cases and their challenges will also be discussed.
What is document information extraction?
Unfortunately, many financial institutions like commercial or investment banks, broker agencies, or insurance companies still base their processes on traditional paper forms. Those processes are labor intensive and prone to errors especially in the era of remote work.
Due to the increased number of users and their operations, banks have seen a significant rise in fraud in recent years. Outdated operational models based on obsolete forms of document processing, like manual form editing, can't keep up with the growing volume of documents or ensure a high level of security.
As a result, financial corporations are forced to face digital transformation and gain a competitive advantage by adopting intelligent document processing (IDP) in all its forms.
One of the biggest challenges is transforming large amounts of unstructured data into structured information while maintaining high reliability and accuracy. Document extraction is one way to achieve this result. It can be integrated into existing banking processes in various forms, as data is typically available in unstructured formats such as text, images, audio, and video - requiring a large amount of work to process. So what hides behind this technical term?
Information extraction from documents is a linguistic-analysis process that pulls structured information - such as entities, relationships, and category labels - out of unstructured sources like free text.
Semantically well-defined data obtained with enhanced information extraction links these entities with their semantic descriptions and relationships from the knowledge graph. This technology solves many problems related to enterprise content management and knowledge discovery.
Intelligent document extraction can scan, read, and understand both digital and physical documents, allowing employees to focus on more strategic tasks and respond faster to customer needs. Using the latest advances in deep learning, it is possible for a neural network to simultaneously learn layout information, visual features, and text semantics.
By transforming extracted information into structured data for further analysis or action, intelligent document processing can play a key role in gaining a competitive advantage.
Information extraction vs information retrieval - what are the differences?
Information retrieval involves returning information that is relevant to a specific domain or query. The most important units of recognition for information retrieval are the initial set of documents or information and the query that specifies what we are looking for.
In contrast, information extraction is more concerned with extracting general knowledge or hidden relationships from a set of documents or information. Note that the content of an entire document can be considered as a source from which knowledge can be extracted.
It is possible to specify in some way what we want to extract, but this is more about properties and relationships than specific topics. Properties tend to be domain-specific, while relationships cover more general scenarios.
To wrap it up, information retrieval returns a set of relevant documents, while information extraction returns facts from documents in a structured manner. The goal of the first is to find documents relevant to the user's information need (e.g. in a financial investigation or a review of legal documents), whereas the second aims to extract pre-specified facts from the text of documents and present them in a structured form.
Types of data extraction
As a very broad area of natural language processing, data extraction offers several techniques focused on obtaining specific information types.
Named entity recognition
Named entity recognition (NER) is the task of identifying and categorizing key information entities in a text. An entity can be any word or series of words that refer to the same thing. Detected text entities are classified into one of the predetermined categories. Examples of categories include company names, the names of people, place names, and dates.
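The idea can be illustrated with a toy rule-based sketch. Production NER relies on statistical or neural models; the regex patterns and category names below are purely illustrative assumptions, not a real system.

```python
import re

# Toy rule-based NER: pattern -> category. The patterns below are
# illustrative assumptions; real systems use trained models instead.
PATTERNS = [
    ("DATE", re.compile(r"\b\d{1,2} (January|February|March|April|May|June|"
                        r"July|August|September|October|November|December) \d{4}\b")),
    ("MONEY", re.compile(r"\$\d[\d,]*(\.\d+)?\b")),
    ("ORG", re.compile(r"\b[A-Z][a-zA-Z]+ (Bank|Inc|Ltd|Corp)\b")),
]

def extract_entities(text):
    """Return (entity_text, category) pairs found in the text."""
    entities = []
    for label, pattern in PATTERNS:
        for match in pattern.finditer(text):
            entities.append((match.group(0), label))
    return entities

sample = "On 12 March 2021, Acme Bank approved a $250,000 loan."
print(extract_entities(sample))
```

Running this prints the three detected entities with their categories: the date, the amount, and the (hypothetical) bank name.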
Terminology extraction
Another type of data extraction from text is terminology extraction, which is an automated method of analyzing text to identify phrases (relevant terms) that meet predefined criteria. Terminology extraction has its applications in text analysis where it is used for topic modeling, data mining and information retrieval from unstructured documents.
In each language, a term may have a different structure. In most cases, a term is a noun phrase. In English, a term can consist of nouns, adjectives, and prepositions.
One linguistic tool for term extraction is part-of-speech tagging. The text must first be part-of-speech tagged so that each phrase in the text can be matched with the allowed term structures. Word sequences containing unwanted parts of speech can be easily excluded to obtain a clean list of appropriate terms.
Another important preliminary step is lemmatization. This will ensure that the frequency of a phrase is calculated correctly, even if the phrase is used in a different form. This is important especially for languages where the noun and other parts of speech may have different endings and variations of word forms.
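The pattern-matching step described above can be sketched in a few lines. The POS tags here are hand-assigned for illustration; in practice a tagger (and a lemmatizer) would supply normalized, tagged tokens, and the allowed term structures would be tuned per language.

```python
# Sketch of POS-pattern term extraction over already-tagged tokens.
# Allowed structures (assumptions for illustration): adjective+noun,
# noun+noun, and single nouns.
ALLOWED = [("ADJ", "NOUN"), ("NOUN", "NOUN"), ("NOUN",)]

def extract_terms(tagged_tokens):
    """Return phrases whose POS sequence matches an allowed pattern."""
    terms = []
    n = len(tagged_tokens)
    for pattern in ALLOWED:
        size = len(pattern)
        for i in range(n - size + 1):
            window = tagged_tokens[i:i + size]
            if tuple(tag for _, tag in window) == pattern:
                terms.append(" ".join(word for word, _ in window))
    return terms

tagged = [("the", "DET"), ("interest", "NOUN"), ("rate", "NOUN"),
          ("rose", "VERB"), ("sharply", "ADV")]
print(extract_terms(tagged))
```

Sequences containing unwanted parts of speech ("rose", "sharply") are excluded automatically, leaving candidate terms such as "interest rate".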
Sentiment analysis
Sentiment analysis is another method of information extraction from unstructured text. We can define it as the computational study of opinions, thoughts, judgments, ratings, interests, views, emotions, subjectivity, and other states expressed in text.
Using machine learning and natural language processing methods, IE systems can extract information from a document and try to classify it according to its sentiment, i.e. as positive, neutral or negative. This makes sentiment analysis instrumental in determining the overall opinion of a particular target, such as the banking product being sold.
There are several types of sentiment analysis:
- Aspect-based sentiment analysis
- Grading sentiment analysis (positive, negative, neutral)
- Multilingual sentiment analysis
- Emotion detection
Nowadays, sentiment analysis is applicable in many situations, to name a few examples:
- Is the email from this customer satisfactory or unsatisfactory?
- How do customers respond to specific messages?
- What do consumers think about products and services they research before making a purchase?
Main use of document information extraction
The past few years have shown major advances in computing, including GPUs and cloud infrastructure, that have made the long-anticipated capabilities of artificial intelligence increasingly common in everyday use. Basic functions such as optical character recognition (OCR) have been used for some time, but deep machine learning has taken these capabilities to a whole new level.
It has increased operational capabilities for document processing and management. It has also massively reduced human effort by automating the analysis of transactional documents.
This allows companies to process huge numbers of documents and create datasets based on that.
Artificial intelligence algorithms can then sift through these datasets in search of patterns that have a real impact on improving business processes.
As AI-based analytics uncover actionable insights, companies can use them to adjust their workflows and optimize models that govern everything from pricing to risk.
Applying AI to document processing unlocks the value of data scattered across a company's many internal systems, allowing new information to be mined in aggregate form. In addition, it gives companies the opportunity to streamline existing services and potentially create entirely new service elements.
New document management tools are being used in a number of innovative ways, including retrospective analytics, in which customers mine data from historical documents and use it to identify patterns, training appropriate machine learning models to do so.
Examples of information extraction
One example of information extraction is concept and advanced entity search. Modern algorithms allow documents to be searched using more contextual natural language phrases, as opposed to searching for specific keywords.
For example, an employee might type "customer had trouble completing a loan application" into a search application, and the software might present a list of logs about customers who were in the situation described by the above phrase. Such functionality is useful for finding more information related to terms that may appear in various documents.
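The ranking mechanics behind such a search can be sketched with bag-of-words cosine similarity. Real concept search uses semantic embeddings rather than raw word counts, and the log entries below are hypothetical; this sketch only shows how candidate documents get scored against a query.

```python
import math
from collections import Counter

# Bag-of-words cosine similarity as a stand-in for semantic search.
# Real systems embed queries and documents with a neural model.
def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a if t in b)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def search(query, documents):
    """Rank documents by similarity to the query, best first."""
    q = Counter(query.lower().split())
    scored = [(cosine(q, Counter(d.lower().split())), d) for d in documents]
    return [d for score, d in sorted(scored, reverse=True) if score > 0]

logs = [
    "customer had trouble completing a loan application",
    "card payment processed successfully",
]
print(search("loan application trouble", logs))
```

With embeddings in place of word counts, the same ranking loop would also surface logs that describe the situation in entirely different words.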
Another example would be searching for documents related to LIBOR. In banking, LIBOR has been phased out since the end of 2021, so compliance departments may be interested in finding contracts that reference this index so they can be updated. With AI-based information extraction tools, locating LIBOR-related documents and updating them can be much simpler.
Traditional keyword-based searches may return many documents that merely mention LIBOR - internal discussions, for example - rather than the contracts actually sought. Employees then have to read and analyze many more documents to find the right ones.
In banks, regulatory documentation is very important but can be long and cumbersome, and searching for information can be time-consuming. This requires analysts and legal teams to spend hours finding individual pieces of information that could be quickly checked by an AI system.
Missing critical information in regulatory documents can have significant consequences. Such documentation includes materials such as accounting records, contracts, compliance manuals, legal doctrines, government documents, and more. It is possible to create unique models for each of these regulatory frameworks, which can then be used to find critical information.
Document enrichment and classification is yet another example of information extraction.
To make it easier for employees to find the right type of documents using filters, documents must be tagged with metadata that describes the data contained in those documents.
Traditionally, these documents are tagged with metadata manually. Employees sometimes forget to tag documents or tag them incorrectly, making them difficult to find when needed.
Artificial intelligence can streamline this process, but the process itself is supervised and requires defining what type of metadata to tag documents with. For example, in customer service, call center logs can be tagged with metadata about the type of problem the customer is having and also the emotional state of the caller.
Additionally, an application based on artificial intelligence algorithms could review older documents and automatically add metadata to them.
In classification tasks, a machine learning algorithm could group metadata into broader categories, allowing documents to be organized automatically and searched more easily using keywords.
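A rule-based version of such tagging can be sketched as below. The keyword-to-tag mapping is a hypothetical assumption; a real deployment would train a supervised classifier on labelled call-center logs rather than rely on fixed keywords.

```python
# Hypothetical keyword-to-metadata mapping, for illustration only.
# A production system would use a trained supervised classifier.
TAG_RULES = {
    "loan": "lending",
    "fraud": "security",
    "complaint": "customer-issue",
}

def tag_document(text):
    """Attach every metadata tag whose trigger keyword occurs in the text."""
    words = set(text.lower().split())
    return sorted(tag for keyword, tag in TAG_RULES.items() if keyword in words)

print(tag_document("Customer filed a complaint about a loan fee"))
```

Documents tagged this way can then be grouped by category and retrieved through simple metadata filters.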
Don’t get left behind by modern banks
Banks reap numerous benefits from implementing intelligent automation for their document processing needs - scalability, reliability, cost effectiveness, standardization and speed. The list of benefits is ever growing, as there are also countless other benefits specific to individual departments within banks.
For example, the right deep learning-based automation solution that can process complex documents is resilient to events like the Covid-19 pandemic, during which banks struggled to process documents as a result of office closures.
For many reasons, the most game-changing benefit of document processing with machine learning is ultimately the reduction of risk on many levels. Automation based on deep learning can reduce fraud risk, compliance risk stemming from errors found in documents, and document management risk. And information extraction itself speeds up obtaining the necessary data in a structured format that is easy to process further.
It is important to choose the right solution that will seamlessly integrate into the bank's workflow and provide benefits in all relevant areas. A team of machine learning experts helps to achieve this goal.