Guide to Data Assurance
How to Get Started
As a result, data quality assurance is becoming incredibly important. Companies need to figure out what they can do to ensure that they continue to collect accurate information to fuel a variety of critical business decisions.
In this section, we’ll look at five things that you can do to put yourself on the path to data assurance:
Embrace Data Governance
Data governance is a way to reduce risk. At its most basic, it’s a series of policies and procedures that dictates how organizations collect data and what they do with it to meet the growing demands of a variety of regulatory requirements. It’s also a critical function for ensuring that budgets are allocated correctly and that organizations are well placed to do more with their data and thrive as a result.
Ideally, responsibility for data governance should lie with your chief data officer. The reality, however, is that many organizations don’t yet have someone serving in that capacity. Instead, responsibility for data either resides with IT or, increasingly, with marketing.
Whatever the case may be in your organization, it’s important to get executive-level buy-in for data governance so that it becomes part of your company’s DNA. That means determining who in the organization should take responsibility for your data — a decision that typically hinges on how people use it. Part of your pitch for getting the right person to take responsibility for data governance should focus on helping them understand its importance not only for risk management, but also as a means of creating business value.
Develop and Document a Data Collection Strategy
Effective data collection requires strategic planning. The idea is to focus on collecting the data that matters most to your business and that aligns with your company’s strategic priorities. In other words, you don’t want to collect data for its own sake; you want to home in on data that’s relevant, useful, and that has a purpose. That’s important not only to help you achieve focus, but also to spare you the responsibility and cost of collecting, protecting, and preparing data that you don’t actually need.
Similarly, you want a plan so that you can ensure a smooth implementation when you launch your analytics tool and so that you have a clear picture of how all of the data you’re collecting fits together. A plan is also important for very practical considerations such as how your data is going to be named, formatted, and validated, which are critical when sharing data across different parties. In other words, it’s not just about what you’re collecting, but also how you collect it. Ultimately, a good data collection strategy should help you minimize the cost and effort required to ready your data for use.
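To make this concrete, naming and format conventions like these can be captured as a machine-checkable schema. The field names and patterns below are hypothetical, a minimal sketch of validating one collected record against agreed conventions:

```python
import re

# Hypothetical conventions: snake_case event names, ISO-8601 dates,
# revenue recorded as integer cents. All field names are illustrative.
SCHEMA = {
    "event_name": re.compile(r"^[a-z][a-z0-9_]*$"),
    "event_date": re.compile(r"^\d{4}-\d{2}-\d{2}$"),
    "revenue_cents": re.compile(r"^\d+$"),
}

def validate_record(record: dict) -> list:
    """Return a list of validation errors for one collected record."""
    errors = []
    for field, pattern in SCHEMA.items():
        value = record.get(field)
        if value is None:
            errors.append(f"missing field: {field}")
        elif not pattern.match(str(value)):
            errors.append(f"bad format for {field}: {value!r}")
    return errors

# A record with a badly formatted name and a missing field:
print(validate_record({"event_name": "Add To Cart", "event_date": "2023-01-05"}))
```

Running checks like this at collection time, rather than at report time, is what keeps formatting disagreements from spreading across the parties who share the data.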
Unfortunately, most organizations don’t take the time to do this. They may start with a plan, but it’s often siloed within one part of the business and not tied to the company’s overarching business strategy. As a result, such plans quickly go out of date, and the companies that rely on them find themselves in trouble. Alternatively, they allow their strategy to become a static document, which simply doesn’t work. A good data collection strategy has to be a living document that multiple people can access, with effective change management so that it’s easy to track which changes are being made, and when.
Foster a Culture of Quality Assurance
Most organizations realize the value of data and typically feel compelled to collect as much of it as they can so that they can do more things with it. The problem is that there’s often too much blind faith placed in data — people tend to adopt a “set it and forget it” mentality — when in reality that data may very well be flawed. Unfortunately, many professionals make the mistake of making large budgetary decisions without actually giving their data the scrutiny that it requires. As a result, they wind up making bad decisions and misallocating their budget dollars.
For these and other reasons, it’s important to create a culture that recognizes the importance of data integrity and that favors quality assurance over blind faith. For many people this represents a significant shift in mindset. That’s not because they lack interest in the quality of their data, but because they think testing is too hard or that the cost of doing it manually is too high.
To create a culture of quality assurance, companies need to start addressing issues with their data pre-emptively rather than after the fact. While that can be difficult to do manually, with the right tools and support they can proactively validate their data and avoid a host of problems in the process.
Assess Your Risk Versus Return
Data collection is constantly at risk as a result of ongoing changes and human error. It’s important to be fully aware of what those risks are and what their implications might be for your business. For example, you need to know how much money you’re spending on your data-driven marketing efforts. After all, if your data is incomplete or inaccurate, that’s money that’s potentially going out the window. You should also factor in considerations around your brand and competitive position, both of which could be irreparably damaged if you discovered that you were accidentally sharing data that you shouldn’t be. Unfortunately, that last issue comes up all too often thanks to piggyback tags.
Calculate the cost
When calculating the cost of a data error, weigh the time it takes to detect the error against factors such as:
- Misallocated marketing spend
- Poor user experience
- Lack of data
- Damage to brand reputation
- Extra time required for data cleansing
- Data loss / incomplete data collection
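The factors above can be combined into a rough back-of-the-envelope estimate. The formula and every figure below are illustrative assumptions, not a standard model; the point is simply that detection time multiplies the damage:

```python
# Rough cost-of-error estimate. All parameters are illustrative assumptions.
def cost_of_data_error(daily_spend: float,
                       days_to_detect: int,
                       wasted_fraction: float,
                       cleanup_hours: float,
                       hourly_rate: float) -> float:
    """Estimate the cost of an undetected data error.

    daily_spend     -- marketing spend driven by the flawed data, per day
    days_to_detect  -- how long the error goes unnoticed
    wasted_fraction -- share of that spend misallocated by the error
    cleanup_hours   -- extra analyst time needed for data cleansing
    hourly_rate     -- loaded cost of that time
    """
    misallocated = daily_spend * days_to_detect * wasted_fraction
    cleansing = cleanup_hours * hourly_rate
    return misallocated + cleansing

# A tracking bug that skews 20% of a $5,000/day budget for 30 days,
# plus 40 hours of cleanup at $75/hour:
print(cost_of_data_error(5000, 30, 0.20, 40, 75))  # 33000.0
```

Notice how the detection window dominates: catching the same bug in three days instead of thirty cuts the misallocated spend by a factor of ten.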
Assess Your Existing Approach
The final step is to evaluate your approach against our data assurance maturity model. This will help you understand how you stack up and identify whatever improvements may be necessary to bring your approach in line with industry best practice.
Consider the depth, frequency, and scope of your data assurance testing to identify whether your organization is an optimist, a reactionary, a proactive, or a leader in data assurance.
While optimists often have a positive outlook, in reality it’s not well founded. That’s because their testing is done manually, usually in association with a specific project, when their tags or the data they’re collecting is being updated. Unfortunately, not only is it a time-consuming process, errors can often be overlooked for long periods of time. Plus, taking this approach means that data gaps or flawed data can flow into their reporting, marketing, and optimization tools.
Although some detail-oriented optimists may crawl their site from time to time to check tag coverage, most do not. Instead, they simply test once when they make changes and hope for the best, with little if any regard for the many other factors that could impact their data. As a result, they’re the least mature when it comes to quality assurance.
Reactionaries are very similar to optimists. They’re caught up in manual testing that’s time consuming and only ensures that everything is working correctly at a single point in time. Unlike optimists, however, they’re prepared to do more work when something goes wrong: rather than testing proactively, they debug the faults that appear when updating site content or rolling out a new feature breaks the existing tracking.
Proactives invest in automated tools to constantly monitor the presence of their tags and the data they collect, and they combine this with manual testing before each deployment. This proactive approach ensures that most errors are caught before reaching production and that the time they have to spend finding any errors in their live data is mercifully short.
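A tag-presence monitor of the kind proactives rely on can be sketched in a few lines. The tag names and snippets below are assumptions for illustration; a real tool would fetch and crawl every page on a schedule rather than inspect a single HTML string:

```python
# Minimal sketch of an automated tag-presence check.
# The expected snippets and the container ID are illustrative assumptions.
EXPECTED_TAGS = {
    "analytics": "googletagmanager.com/gtag/js",  # e.g. an analytics loader
    "tag_manager": "GTM-XXXXXXX",                 # e.g. a container ID
}

def check_tags(page_html: str) -> dict:
    """Report which expected tag snippets are present in a page's source."""
    return {name: snippet in page_html
            for name, snippet in EXPECTED_TAGS.items()}

html = '<script src="https://www.googletagmanager.com/gtag/js"></script>'
print(check_tags(html))  # {'analytics': True, 'tag_manager': False}
```

Run on a schedule against every page, a check like this surfaces a missing tag in hours rather than leaving it to be discovered in next quarter’s reporting.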
By far the most advanced, leaders represent best practice by going one step further. They maintain a library of detailed regression tests that check tags, data values, data availability, and critical user journeys before every release of content, code, or tagging that may impact the website. Leaders want to ensure that only error-free data reaches their production tools and that everyone has the highest confidence in the data their businesses rely on.
What is Personally Identifiable Information?
Personally identifiable information (PII) is information that can be used on its own or with other information to identify, contact, or locate a single person, or to identify an individual in context.
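In practice, data assurance tools scan collected payloads for PII before it reaches downstream systems. A minimal sketch follows, assuming just two deliberately narrow patterns; real scanners cover far more identifier types:

```python
import re

# Illustrative scan for PII leaking into analytics payloads.
# These two patterns are assumptions kept deliberately narrow.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "us_phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def find_pii(payload: str) -> list:
    """Return (kind, match) pairs for any PII found in a payload string."""
    hits = []
    for kind, pattern in PII_PATTERNS.items():
        hits.extend((kind, m) for m in pattern.findall(payload))
    return hits

# An email address accidentally captured in a tracked page URL:
print(find_pii("page=/thanks?email=jane@example.com&order=123"))
# [('email', 'jane@example.com')]
```

Capturing PII in page URLs is a common way it slips into analytics tools unnoticed, which is why checks like this belong in routine data assurance testing.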
What is Piggyback Tagging?
Piggyback tagging is when the data you’re collecting about your customers gets shared with other businesses without your knowledge or consent. It often happens behind the scenes when third parties put a tag on your site as part of a campaign that you’re paying for, and then go on to share the data they collect with other third parties in their network. That in turn can lead to an array of issues, including a loss of public trust and the breaking of privacy laws.
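One simple defense is to compare the domains your pages actually call against an approved vendor list. A minimal sketch, with hypothetical allowlist entries and example request URLs:

```python
from urllib.parse import urlparse

# Sketch: flag requests to domains outside an approved vendor list.
# The allowlist entries below are hypothetical examples.
APPROVED_DOMAINS = {"www.googletagmanager.com", "cdn.example-vendor.com"}

def flag_piggyback(request_urls: list) -> list:
    """Return request URLs whose host is not on the approved list."""
    return [url for url in request_urls
            if urlparse(url).netloc not in APPROVED_DOMAINS]

requests_seen = [
    "https://www.googletagmanager.com/gtm.js",
    "https://tracker.unknown-adtech.example/pixel.gif",
]
print(flag_piggyback(requests_seen))
# ['https://tracker.unknown-adtech.example/pixel.gif']
```

In a real deployment the request list would come from monitoring the network calls your pages make; any flagged domain is a candidate piggyback tag to investigate with the vendor who placed it.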
Guide to Data Assurance
Why it’s Important to Assure the Quality of Your Data
- Why Quality Assurance Matters
- Why Data Collection Is Harder Than It Looks
- Mounting Challenges Call for New Approaches
How to Get Started
- Embrace Data Governance
- Develop and Document a Data Collection Strategy
- Foster a Culture of Quality Assurance
- Assess Your Risk Versus Return
- Assess Your Existing Approach
Complying with the General Data Protection Regulation
- What is the GDPR and why does it matter?
- You’ll need much tighter control of your data