When Data Warehouse Projects Fail

Getting a data warehouse over the finish line is hard. Data warehouses are complex organisms, requiring intense collaboration and technical expertise across every level of the organization. When it all comes together, it’s a beautiful thing. However, many data warehouse initiatives never make it to user acceptance.

On my technical blog, I have cataloged some of the reasons I’ve found that data warehouses fail. Avoiding these pitfalls can reduce the possibility of a data warehouse project going off the tracks. But things can still go wrong even with the best planning and adherence to best practices.

The odds are good that something will go wrong during every data warehouse implementation: the due date for a deliverable gets pushed out, a dimension table has to be refactored, the granularity of a fact table changes. If things slide to the point that the project is no longer moving forward, it is critical to respond properly and, if possible, get the project moving again.

Triage

First and foremost, focus on determining status and next steps. Is the project truly ended, or has it just stalled? That distinction will drive most of the remaining decisions. If there is a design or development impasse, the road forward will look very different than if the project has been shelved due to budget cutbacks or other factors. Assess whether there is room to salvage the project. If there is, use that time to isolate and minimize the speed bumps that slowed the project down the first time.

When triaging, don’t be afraid to issue an “all-stop” directive while you reassess next steps. However, don’t let the project founder in that state for long. Figure out what went wrong, fix it, and move forward.

Take inventory

Regardless of whether the project is salvageable, take stock of the individual deliverables and the status of each. If the project has simply stalled but the plug has not been pulled, a clearly identified status for the technical and nontechnical assets will make restarting the project far easier. If the project is not salvageable, there is almost certainly some business value in the work already completed. Properly classifying and archiving those assets can provide a jump start for related initiatives in the future.

Communicate, communicate, communicate

Whether it’s a stalled project or one that has been stopped entirely, timely communication is essential to managing expectations. Clearly communicate the status of the project, what to expect next, and any timelines. Make sure that everyone involved – business analysts, executives, technical staff, and other stakeholders – is clear on the status and timeline. Don’t cast blame here; keep the updates fact-based and simple.

Renew the focus as a business project

For stalled data warehouse projects, it is important to refresh the project's focus. Data warehouses should always be driven by business teams, not technical teams. Because of the technical complexity of these projects, it is common to lose focus and steer them as purely technical initiatives. Although technical architecture is critical, the business stakeholders should be the ones driving the design and deliverables.

Scale back on deliverables

Of the reasons I’ve found that data warehouse projects fail, trying to do too much in one iteration is a common factor. Big-bang data warehouse projects don’t leave much flexibility for design changes or refactoring after user acceptance. If a stalled or failed data warehouse has many concurrent development initiatives, consider cutting back on the initial set of deliverables and deploying in phases. This can add overall time to the schedule, but you get a better product.

Bring in a hired gun

Insourcing your data warehouse project is often the right solution: you aren’t spending money on external help, you don’t lose that project-specific knowledge when the project is done, and your team gains the experience of building out the technical and nontechnical assets of the project. However, if a data warehouse project has stalled, bringing in the right partner to get back on track can help to save the project, and save time and money in the long run.

Conclusion

Like any technical project, data warehouse initiatives can stall or even fail. If this happens, it is important to properly set the project back on track, or wind it down as gracefully as possible if the project has been abandoned.

Keeping Data Quality Simple

A universal truth in business is that bad data costs money, and can even be dangerous. Data quality processes are often an afterthought rather than a central component of the architecture, due in part to fear of the complexity of checking and cleansing data. In many cases, that fear is warranted; the longer data quality is delayed (in time as well as in data lineage), the more time and money it costs to make it right.

Some data quality needs require a formal project and specialized tools to address. Other issues can be mitigated or avoided entirely with small changes to the data model or load processes. We like to look for these “low-hanging fruit” opportunities when designing a data or ETL infrastructure, because some of these simple modifications can save dozens of hours (or more) and significantly improve data quality very quickly.

Among the passive changes that can help prevent or fix data quality issues:

Use of proper data types. Many of the data quality issues we fix involve numerical data stored as text, which lets non-numerical data slip in inadvertently. Even harder to detect is numerical data stored with an incorrect precision. The same goes for date values, date + time values, geolocation data, and other specialized types. Storing data in the type that most closely represents its real use avoids a lot of downstream problems that are often hard to diagnose and expensive to fix.
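A quick sketch (not from the original post) of the classic symptom of numbers stored as text: string comparison sorts "10" before "9", and bad values only surface far downstream.

```python
# Numbers stored as text sort lexicographically, not numerically.
as_text = ["9", "10", "2"]
print(sorted(as_text))                     # ['10', '2', '9'] -- text sort
print(sorted(int(v) for v in as_text))     # [2, 9, 10]       -- numeric sort

# A non-numeric value loaded into a text column fails only when someone
# finally tries to use it as a number; here it is silently dropped.
dirty = ["9", "10", "N/A"]
clean = [int(v) for v in dirty if v.lstrip("-").isdigit()]
print(clean)  # [9, 10] -- better to reject "N/A" at load time with a real numeric type
```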

Non-nullable fields. Every major RDBMS platform supports non-nullable fields, which require a value in said field before an insert or update operation will complete. If a particular field must have a value before that record can be considered valid, marking the column as non-nullable can avoid data consistency issues where that value is missing.
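As a minimal illustration, here is a Python sketch using SQLite (table and column names are invented for the example; the equivalent NOT NULL syntax exists in every major RDBMS):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT NOT NULL)")

# A row with the required value inserts normally.
conn.execute("INSERT INTO customer (name) VALUES ('Acme')")

# A missing required value is rejected at write time, rather than
# surfacing later as a data consistency problem.
try:
    conn.execute("INSERT INTO customer (name) VALUES (NULL)")
except sqlite3.IntegrityError as e:
    print("rejected:", e)
```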

Foreign key validation. The use of foreign keys for data validation is a best practice in most any relational database architecture, and that is doubly true when improving data quality is a main objective. Using foreign keys to limit values to only those entries explicitly allowed in the table referenced by the foreign key prevents stray values from sneaking into a constrained field.
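A small sketch of this idea, again using SQLite through Python (the tables are hypothetical; note that SQLite enforces foreign keys only when the pragma is enabled, whereas most server RDBMS platforms enforce them by default):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite-specific: FK enforcement is opt-in
conn.execute("CREATE TABLE status (code TEXT PRIMARY KEY)")
conn.executemany("INSERT INTO status VALUES (?)", [("OPEN",), ("CLOSED",)])
conn.execute("""CREATE TABLE ticket (
    id INTEGER PRIMARY KEY,
    status_code TEXT REFERENCES status(code))""")

conn.execute("INSERT INTO ticket (status_code) VALUES ('OPEN')")  # allowed
try:
    # A stray value not present in the lookup table is rejected.
    conn.execute("INSERT INTO ticket (status_code) VALUES ('OPNE')")
except sqlite3.IntegrityError:
    print("stray value rejected by the foreign key")
```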

Check constraints. Check constraints, available in every major database platform, prevent the insertion of values outside a defined range. Like foreign keys, they limit the values that can be entered for a column, but a check constraint does not use a separate lookup table, and it offers the flexibility to set a range of allowable entries rather than a discrete list of values. An example would be using a check constraint to enforce that all dates of birth are on or after a certain date.
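The date-of-birth example above can be sketched like this in SQLite via Python (the cutoff date is arbitrary for the illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE person (
    id INTEGER PRIMARY KEY,
    date_of_birth TEXT CHECK (date_of_birth >= '1900-01-01'))""")

conn.execute("INSERT INTO person (date_of_birth) VALUES ('1985-06-15')")  # in range
try:
    # A date before the allowed cutoff violates the check constraint.
    conn.execute("INSERT INTO person (date_of_birth) VALUES ('1803-06-15')")
except sqlite3.IntegrityError:
    print("out-of-range date of birth rejected")
```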

ETL cleanup. Most ETL tools have built-in functionality for lightweight data cleansing. Assuming the data in question is being processed through a structured ETL tool, adding logic to correct minor data quality issues is relatively easy and low risk. Emphasize a light touch here – you won’t want to attempt address standardization or name deduplication without a formal, codified process.
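To make the “light touch” concrete, here is a hypothetical cleanup helper of the kind an ETL step might apply: it trims whitespace and collapses obvious blank markers, and deliberately stops short of anything like address standardization.

```python
def light_clean(value: str) -> str:
    """Lightweight, low-risk cleansing: trim whitespace and normalize
    common blank markers to an empty string. Intentionally does NOT
    attempt address standardization or deduplication."""
    v = value.strip()
    return "" if v.upper() in {"N/A", "NULL", "-"} else v

rows = ["  Acme Corp ", "N/A", "Widgets\t"]
print([light_clean(r) for r in rows])  # ['Acme Corp', '', 'Widgets']
```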

No Substitute for Formal Data Quality Processes

Even when taking these precautions to prevent or correct some issues, you’ll still run into cases where a more rigid and comprehensive data quality initiative will be needed. The above suggestions will not eliminate the need for proper data quality tooling, but can help reduce the pain from or delay the need for in-depth data quality remediation.

Conclusion

Data quality requires a multifaceted strategy. Taking care of some of the simple problems with easy-to-use tools already at your fingertips can have a significant and immediate impact on the quality of your data.

Data Warehousing: It’s About The Business

Data warehouses are complex creations. The ETL and data cleansing processes that sanitize and reshape the data, the relational database in which the data warehouse resides, the auditing routines verifying that the data is correct, and the reporting and analytics tools that sit atop the entire structure all come together to make an intricate but immensely valuable business asset. In fact, most of the effort and time on the project schedule is focused on the technical components of a data warehouse solution.

As with any development project, careful attention must be paid to using best practices in putting together the bits for the solution. However, with data warehouse projects, too often the focus becomes the design and behavior of the technology. Focusing just on the technical aspects of a data warehouse solution is effective for heads-down coding of specific pieces, but does not work for managing the project as a whole.

When building a data warehouse, there is one critical point that must always be kept front of mind:

A data warehouse is a business initiative, not a technical one.

I’ve seen data warehouse initiatives go off track when the focus shifted away from the business needs. Through every aspect of the project – initial brainstorming, technical design, testing, validation, delivery, and support – the needs of the business must drive every decision made. While the data warehouse will be built using memory, disks, code, tables, and ETL processes, the primary goal of the project must remain clear: The data warehouse exists to answer business questions about the data. Anything contradictory to that is a barrier and must be removed.

When architecting a data warehouse solution, build it using the best technical design possible. But in all design decisions, remember the ultimate goal and audience of the final product. Data warehousing is about the business, not the technology.

Data Warehouse: On-Premises or Cloud?

I’ve been fielding this question a lot these days: “We’re building a data warehouse – should we build it here or in the cloud?” It’s a fair question, but it’s not the question that should be asked. The more appropriate question is this: “What part of our data warehouse solution should be in the cloud, and how does it work together with our on-premises data?”

I shared a few of my thoughts on this topic a few weeks ago in a podcast interview with Carlos Chacon, when we discussed whether the on-premises data warehouse was dead. Without spoiling all of the details of that conversation, my short answer is that the on-premises data warehouse is alive and well but is no longer the only DW option.

As recently as three years ago, the cloud was still relatively new and not yet widely in use in most organizations. At the same time, companies selling cloud services were in the midst of a massive marketing effort to direct customers to the cloud. Microsoft famously declared themselves to be all-in on cloud well before the market was ready to follow. Many IT leaders and technologists bristled at the thought of being forced into the cloud at the expense of tried-and-true on-premises solutions.

However, in the past couple of years the message from cloud providers has softened. No more is it “cloud or bust”. Rather, cloud services companies – and Microsoft in particular – have reshaped the message to one in which the cloud is just one piece of a heterogeneous architecture that may include on-prem, PaaS, IaaS, and SaaS solutions. At the same time, consumers are realizing the value of cloud-based solutions for some of their architecture. Although I rarely have a client that wants to build an all-cloud infrastructure, most everyone I work with is at least exploring if not actively using cloud services for a portion of their data systems.

Cloud services are here to stay. No, the cloud absolutely will not take over on-premises data storage and processing. Rather, cloud offerings will be one more option for managing data and the code around it. So the question is not whether you should be in the cloud – the answer is yes (or it soon will be). The more practical question is how to best leverage cloud services as part of a hybrid strategy to minimize development time and total cost of ownership.

This post originally appeared in the Data Geek Newsletter.

Introducing the Pinch Hit Service

I am happy to announce the launch of a new service designed to help with very short term consultation needs. Although most consulting engagements are weeks or months in duration, we’ve discovered that some client needs are simple and do not require a traditional consulting approach. In response to this need, Tyleris has created the Pinch Hit service as a simple, no-commitment, 2-hour remote consultation.

The Pinch Hit was created to assist clients who are handling their own data warehousing, ETL, and reporting infrastructure. They may be looking for a second set of eyes on a design, assistance troubleshooting a specific problem, or a focused training session. Much like the use of a pinch hitter in baseball, Tyleris brings a specialized skillset to help in a clutch situation.

Not every business or technical need is suitable for this service, but in cases where the problem domain is narrow, the Pinch Hit can deliver outstanding value in a short time. If you find yourself in need of a Pinch Hit engagement from Tyleris, just let us know how we can help.

Request a Pinch Hit