Financial Systems Integration: Real-time vs. Batch

When an FI talks with a fintech or software vendor, the first question is often: Do you work with my core or my other systems? That is the start of the conversation, not the end. Keep asking questions. Here is an important one: Is the integration real-time or batch?

Batch Integrations: These are point-in-time extracts of data moved from one system to another, often in large quantities. The data available in the software or fintech provider’s application is static, current only as of the moment the extract was taken.

Advantage(s): Large quantities of data can be moved in one file or a series of files. Processing can be scheduled for times when systems are less busy, reducing the impact on other vital transactions and operations.

Disadvantage(s): These are often one-way data movements. The data can become outdated quickly; you have historical information but no access to current balances or member/consumer data.

Example(s): Data warehouses, positive-balance files for debit/ATM used in offline situations, MCIF or marketing systems.
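To make the batch pattern concrete, here is a minimal sketch of a nightly positive-balance extract. The core_accounts table, its columns, and the file layout are hypothetical stand-ins, not any particular core’s schema, and sqlite3 stands in for whatever database the extract actually runs against.

```python
# A minimal sketch of a nightly batch extract: a point-in-time
# positive-balance file for offline debit/ATM use. The core_accounts
# table and its columns are hypothetical, not a real core's schema.
import csv
import sqlite3
from datetime import datetime, timezone

def export_positive_balance_file(db_path: str, out_dir: str) -> str:
    """Write a point-in-time positive-balance extract and return its path."""
    stamp = datetime.now(timezone.utc).strftime("%Y%m%d%H%M%S")
    out_path = f"{out_dir}/positive_balances_{stamp}.csv"

    conn = sqlite3.connect(db_path)
    try:
        rows = conn.execute(
            "SELECT account_number, available_balance "
            "FROM core_accounts WHERE available_balance > 0"
        )
        with open(out_path, "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow(["account_number", "available_balance", "as_of_utc"])
            for account_number, balance in rows:
                writer.writerow([account_number, balance, stamp])
    finally:
        conn.close()
    return out_path
```

Note that everything downstream of this file sees the world as it looked at the timestamp in the file name, which is exactly the batch trade-off described above.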

Real-time Integrations: These are active integrations, often through an API (Application Programming Interface), that read and write data in real time. Data is current and consistent between systems.

Advantage(s): Consumer/member data is live and available on demand. Data changes can be written between systems to update data fields, process transactions, disburse loans, etc.

Disadvantage(s): Large data movements can stress systems. Historically, real-time integrations have been difficult and costly to build, though this is changing with newer approaches such as Janusea’s platform and other integration frameworks and toolsets.

Example(s): Digital banking platforms, ATMs and advanced kiosks, teller and workflow solutions, call center applications that can perform teller functions, etc.
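As an illustration of the real-time pattern, here is a small sketch of one read and one write over a REST API using the requests library. The base URL, endpoint paths, and field names are invented for the example; every core and middleware vendor defines its own contract.

```python
# A sketch of real-time read and write calls over a REST API.
# The base URL, endpoints, and field names below are hypothetical.
import requests

BASE_URL = "https://api.example-middleware.com/v1"  # hypothetical

def get_available_balance(session: requests.Session, member_id: str) -> float:
    """Read a live balance; the data is current at the moment of the call."""
    resp = session.get(f"{BASE_URL}/members/{member_id}/accounts", timeout=10)
    resp.raise_for_status()
    return resp.json()["accounts"][0]["available_balance"]

def update_address(session: requests.Session, member_id: str, address: dict) -> None:
    """Write a demographic change back to the core in real time."""
    resp = session.put(
        f"{BASE_URL}/members/{member_id}/address", json=address, timeout=10
    )
    resp.raise_for_status()
```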

Both: Some applications and business cases need both batch updates and real-time calls.

Example: CRMs for banks and credit unions often do nightly batch pulls plus real-time calls for consumer/member screens and for a workflow’s final writes of addresses and demographic changes, transfers, loan payments, etc.
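A rough sketch of that combined pattern follows, assuming a hypothetical CRM API and the extract file format from the batch sketch above; the endpoints are illustrative only.

```python
# Batch side plus real-time side in one small module.
# The CRM_API base URL and endpoints are hypothetical.
import csv

import requests

CRM_API = "https://api.example-crm.com/v1"  # hypothetical

def nightly_crm_load(session: requests.Session, extract_path: str) -> None:
    """Batch side: push last night's point-in-time extract into the CRM."""
    with open(extract_path, newline="") as f:
        for row in csv.DictReader(f):
            session.post(f"{CRM_API}/profiles", json=row, timeout=30)

def refresh_member_screen(session: requests.Session, member_id: str) -> dict:
    """Real-time side: fetch live data when a user opens the member record."""
    resp = session.get(f"{CRM_API}/profiles/{member_id}/live", timeout=10)
    resp.raise_for_status()
    return resp.json()
```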

Hybrid: We define hybrid as real-time calls that kick off batch processing or data movement outside the API stream. This is particularly useful when a system’s API options are limited or need to be extended.

Example: Loan application and processing systems are a good example. They may use real-time APIs for automating applications with pre-filled data and underwriting systems, for adding a member/consumer to the core, and for booking and disbursing a loan. They might then use an API call to trigger the SFTP movement of all loan document PDFs, outside the API stream, to a folder for e-document system processing. They may also use an API call to trigger a nightly reporting or loan-queue process that creates point-in-time information and reports for loan officers and executives.
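Here is a hedged sketch of that hybrid pattern: a lightweight real-time call that kicks off a bulk SFTP move outside the API stream. The trigger endpoint is hypothetical, and the SFTP step uses the paramiko library as one common option rather than any system’s prescribed tooling.

```python
# Hybrid sketch: a real-time API call triggers batch document movement.
# The trigger endpoint is hypothetical; paramiko handles the SFTP step.
from pathlib import Path

import paramiko
import requests

def trigger_document_export(session: requests.Session, loan_id: str) -> None:
    """Real-time side: a small API call that kicks off the batch job."""
    resp = session.post(
        f"https://api.example-los.com/v1/loans/{loan_id}/export-documents",
        timeout=10,
    )
    resp.raise_for_status()

def push_loan_documents(local_dir: str, host: str, user: str, password: str) -> None:
    """Batch side: move all loan PDFs to the e-document system's folder."""
    transport = paramiko.Transport((host, 22))
    transport.connect(username=user, password=password)
    sftp = paramiko.SFTPClient.from_transport(transport)
    try:
        for pdf in Path(local_dir).glob("*.pdf"):
            sftp.put(str(pdf), f"/edocs/incoming/{pdf.name}")
    finally:
        sftp.close()
        transport.close()
```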

As consumer/member expectations have increased and fintech options to meet those needs have grown, real-time APIs have become more critical. For your credit union or bank, the best approach is to match the capabilities of the integration to your business case and needs.

Bad Data is Kryptonite to your Integrations

You probably already know about bad data if you’ve done any data projects lately. A simple example is demographic information, such as addresses, phone numbers, or email addresses, that is formatted incorrectly or contains odd characters. When testing an integration, we often get questions about why data isn’t returned and an error comes back instead. With bad data, that is often the best case and the desired outcome: standards and checks are ensuring the data conforms to the expected type. When third-party integrations encounter bad data, they can show members or consumers data that was never intended, break underwriting or other automated processes, fail to send alerts or mailings, or completely block access to critical services.

So, what do you do? If things are working, you probably don’t know about your data quality issues, and you may even have systems or processes quietly expanding them. You can proactively use data tools to validate that all of your data conforms to the expected types, though many FIs don’t prioritize that kind of project until they are deep into a significant data initiative.

Short of that larger project, you can build a data quality check into your processes for new solutions and integrations. Understand your data model and requirements, or work with someone who can help. Make sure new solutions and processes don’t introduce new data issues. Use the data fields and use cases for your project to validate the key data in your systems as part of planning and preparation for the new solution. Data cleanup isn’t fun, but it is necessary, and it is much better to address before issues and errors arise.
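As one starting point, here is a minimal sketch of the kind of data quality check described above, run against demographic fields before an integration consumes them. The patterns are deliberately simple illustrations, not production-grade validators.

```python
# A minimal data quality check for demographic fields. The regex
# patterns are simple illustrations, not production-grade validators.
import re

EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")
PHONE_RE = re.compile(r"^\+?[0-9]{10,15}$")
PRINTABLE_RE = re.compile(r"^[\x20-\x7E]+$")  # flags odd control characters

def validate_record(record: dict) -> list[str]:
    """Return a list of problems found in one member/consumer record."""
    problems = []
    if not EMAIL_RE.match(record.get("email", "")):
        problems.append("malformed email")
    if not PHONE_RE.match(re.sub(r"[^\d+]", "", record.get("phone", ""))):
        problems.append("malformed phone")
    if not PRINTABLE_RE.match(record.get("address", "")):
        problems.append("address has odd characters")
    return problems

# Example: flag a row a downstream integration would choke on.
bad = {"email": "jo@@example", "phone": "555-01", "address": "123 Main St\x07"}
print(validate_record(bad))
# ['malformed email', 'malformed phone', 'address has odd characters']
```

Running a check like this over an extract before go-live turns silent data problems into a punch list you can clean up on your own schedule, instead of errors your members or consumers find for you.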