Bad Data Is Kryptonite to Your Integrations

You probably already know about bad data if you’ve done any data projects lately. A simple example is demographic information, such as addresses, phone numbers, or email addresses, that is formatted incorrectly or contains odd characters. When testing an integration, we often get asked why no data is returned and an error comes back instead. With bad data, that is actually the best case and the desired outcome for integration testing: standards and validation checks caught data that did not conform to the expected type. When a third-party integration encounters bad data that isn’t caught, it can expose unintended data to members or consumers, break underwriting or other automated processes, fail to send alerts or mailings, or completely block access to critical services.
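To make that concrete, here is a minimal sketch of the kind of format checks that catch this class of bad data before it reaches a third party. The field names, regular expressions, and sample record are illustrative assumptions, not any particular vendor's rules.

```python
import re

# Illustrative patterns only; real integrations will have their own format rules.
EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")
PHONE_RE = re.compile(r"^\+?\d{10,15}$")

def check_member(record: dict) -> list[str]:
    """Return a list of problems found in a demographic record."""
    problems = []
    if not EMAIL_RE.match(record.get("email", "")):
        problems.append("email is not a valid address")
    # Strip common punctuation before checking that the phone number is usable.
    digits_only = re.sub(r"[\s().-]", "", record.get("phone", ""))
    if not PHONE_RE.match(digits_only):
        problems.append("phone is not a usable number")
    if not record.get("street_address", "").strip():
        problems.append("street address is blank")
    return problems

# A record with odd characters fails cleanly instead of reaching the third party.
print(check_member({"email": "jane.doe@exa mple", "phone": "555-12!4", "street_address": ""}))
# -> ['email is not a valid address', 'phone is not a usable number', 'street address is blank']
```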

So, what do you do? If things appear to be working, you probably don’t know about your data quality issues, and you may even have systems or processes that keep compounding them. You can proactively use data tools to validate that all of your data conforms to the expected types and formats, but many FIs don’t prioritize that kind of project until they are deep into a significant data initiative.
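If you do take the proactive route, the audit itself can be simple. The sketch below is a hypothetical example: the field checks are placeholders, and the small in-memory sample stands in for an export from your core system. The point is that a quick scan tells you how widespread each problem is before a new integration depends on that data.

```python
from collections import Counter

# Placeholder checks; swap in the rules that match your own data model.
CHECKS = {
    "email": lambda v: isinstance(v, str) and "@" in v and "." in v.split("@")[-1],
    "phone": lambda v: isinstance(v, str) and v.replace("-", "").replace(" ", "").isdigit(),
    "zip_code": lambda v: isinstance(v, str) and len(v) in (5, 10),
}

def audit(records):
    """Count how often each field fails its expected format."""
    failures = Counter()
    for rec in records:
        for field, is_valid in CHECKS.items():
            if not is_valid(rec.get(field)):
                failures[field] += 1
    return failures

# Example run against a tiny sample standing in for a core-system export.
sample = [
    {"email": "a@b.com", "phone": "5551234567", "zip_code": "12345"},
    {"email": "not-an-email", "phone": "555-ABCD", "zip_code": "1234"},
]
print(audit(sample))  # Counter({'email': 1, 'phone': 1, 'zip_code': 1})
```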

Short of that larger project, you can build a data quality check into your processes for new solutions and integrations. Understand your data model and requirements, or work with someone who can. Make sure new solutions and processes don’t introduce new data issues. As part of your planning and preparation for the new solution, use the data fields and use cases for your project to validate the key data in your systems. The cleanup is worth it: data cleaning isn’t fun, but it is necessary, and it is much better to address before issues and errors arise.
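One way to build that check in is a quality gate at the integration boundary: records that fail the project's field checks get quarantined for cleanup rather than sent on. The sketch below is hypothetical; the required fields and the send_to_vendor stub are assumptions standing in for your actual integration.

```python
# Hypothetical quality gate at the integration boundary.
def required_fields_present(record, fields=("member_id", "email", "dob")):
    # Assumed required fields; use the ones your project actually depends on.
    return all(str(record.get(f, "")).strip() for f in fields)

def run_integration(records, send_to_vendor):
    sent, quarantined = [], []
    for rec in records:
        if required_fields_present(rec):
            send_to_vendor(rec)
            sent.append(rec)
        else:
            # Keep the bad record and the reason so someone can fix the source data.
            quarantined.append({"record": rec, "reason": "missing required field"})
    return sent, quarantined

# Usage: a stub vendor call shows the flow without a real third-party API.
sent, bad = run_integration(
    [{"member_id": "1001", "email": "a@b.com", "dob": "1990-01-01"},
     {"member_id": "1002", "email": "", "dob": "1985-06-15"}],
    send_to_vendor=lambda rec: None,
)
print(len(sent), len(bad))  # 1 1
```

The design choice worth copying is the quarantine list: bad records are not silently dropped, and they never reach the third party, so the cleanup happens on your side before an error or an unintended disclosure happens on theirs.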