I wrote in my previous post how companies should try to cultivate a LinkedIn-style culture of collaboration around analytics, so business users throughout the organization can stay engaged, energized and accountable when it comes to data.

I included the caveat that this do-it-yourself urge among employees to innovate and experiment should be harnessed, but only if you can ensure your underlying data remains accessible. I think it’s worth expanding on this point, since the relationship between these two worthwhile goals – user engagement and data accessibility – is one that can easily veer into conflict.


Think about it: We hire people to use their intelligence to take action for the benefit of the company. We pay recruiters top dollar to get folks who think creatively and innovate through solving challenges. These are the qualities that drive success; they’re also the qualities that drive the go-getters, in the face of slow-moving IT and analytics solutions, to set up their own isolated data marts to solve particular problems for their business units. Unfortunately, doing so opens a Pandora’s box of bigger problems, chiefly the spread of “data anarchy” – the “Wild Wild West” of big data silos and data marts. This inevitably leads to ballooning IT costs to handle all the redundancies, as multiple departments copy and alter data, and ultimately to data drift – essentially a creeping loss of data accuracy.

Nails in the Data Mart Coffin

While the road to data anarchy may be paved with good intentions, it still leads us to the point where more than 75 percent of people’s time can be spent sifting through data, rather than making data-driven decisions. Consider the daily or weekly fire drills that take place in the CEO’s office when numbers from two departments – say marketing and finance – don’t match up and the circular arguments begin over which data are right, and which data are wrong. Users and technology executives alike can spin their wheels if the organization doesn’t have systems and policies that are inclusive and effective around data.


It’s no wonder that users get frustrated. In “Drive,” his bestselling book on workplace motivation, author Daniel Pink explains how scientists have developed a “new operating system” for business success that revolves around three elements: autonomy, or the urge to direct our own lives; mastery, the desire to get better and better at something; and purpose, the yearning to do something that matters.

How many of these qualities can you expect to find in the earnest employee who just went the extra mile for the company, building a data mart silo only to later realize that the effort set the company back? The extra mile was traveled in reverse. When it comes to the culture around data analytics and business intelligence, it’s not hard to think of these latest insights on workforce motivation as additional nails in the data mart coffin.

The key to solving this problem is building a data platform that can remain the source for all future analytics and applications, and that can change and evolve over time. Here’s where the Virtual Data Mart comes in. A Virtual Data Mart is a staging area with real-time, self-serve characteristics that looks and feels like a traditional data mart to the end user, but is designed to allow experimentation to happen while protecting the underlying data. It also allows new data or new intermediaries to be added in an agile manner without the need to copy core data at all.
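To make the idea concrete, here is a minimal sketch of the “virtual mart” principle using database views – the table, column and view names are hypothetical, and a production Virtual Data Mart would of course live on an enterprise platform rather than SQLite. The point is that the business unit gets a mart-like object of its own while the core data is never copied:

```python
# Sketch of the "virtual data mart" idea: a view gives a business unit
# its own mart-shaped window onto core data without duplicating rows.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE core_sales (region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO core_sales VALUES (?, ?)",
    [("EMEA", 120.0), ("EMEA", 80.0), ("APAC", 50.0)],
)

# The "virtual mart": no rows are copied, so core_sales remains the
# single source of truth and the view can evolve independently.
conn.execute("""
    CREATE VIEW emea_mart AS
    SELECT region, SUM(amount) AS total
    FROM core_sales
    WHERE region = 'EMEA'
    GROUP BY region
""")

result = conn.execute("SELECT region, total FROM emea_mart").fetchall()
print(result)  # → [('EMEA', 200.0)]
```

Because the view is just a stored query, dropping or redefining it affects only the experimenter’s sandbox, never the underlying data.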

Because multiple users in the organization can create Virtual Data Marts simultaneously in real time, you get centralized access for decentralized use cases across the organization. As a result, the data remains accurate, clean and flexible. Anyone in the enterprise can request and analyze that data, anytime and anywhere. This is the kind of framework that enables teams across an organization to ask complex questions that drive insights and innovation at scale.

Without veering into a technical deep-dive, I think it’s important to stress that businesses need to back up the vision with production-grade architectures that can execute the approach at the enterprise level. My own approach – part of the Sentient Enterprise methodology I advocate – is what I call a Layered Data Architecture. In a nutshell, it’s a framework that makes the organization’s data assets safely available via multiple layers of access and complexity to accommodate everyone from the die-hard data scientist to the casual business user.
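As an illustration only – the layer names, roles and data here are hypothetical, not the Sentient Enterprise specification – the layering principle can be sketched as the same core data exposed at different levels of complexity depending on who is asking:

```python
# Illustrative sketch: one core dataset, multiple access layers.
# A data scientist sees full detail; a business user sees a curated summary.
from statistics import mean

core_records = [
    {"dept": "marketing", "spend": 100},
    {"dept": "marketing", "spend": 300},
    {"dept": "finance", "spend": 200},
]

def raw_layer():
    """Full-detail layer for the die-hard data scientist."""
    return core_records

def summary_layer():
    """Pre-aggregated layer for the casual business user."""
    by_dept = {}
    for r in core_records:
        by_dept.setdefault(r["dept"], []).append(r["spend"])
    return {dept: mean(vals) for dept, vals in by_dept.items()}

# Each role is routed to an appropriate layer over the same data.
ACCESS = {"data_scientist": raw_layer, "business_user": summary_layer}

summary = ACCESS["business_user"]()
print(summary)  # → {'marketing': 200, 'finance': 200}
```

Both layers read the same records, so the departments in the opening fire-drill scenario can no longer produce conflicting numbers: the summary is derived from, not copied away from, the core.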

A Layered Data Architecture is one way to provide frictionless, self-service analytics for everyone while still controlling access to data and the rules that govern that data. Whatever your specific approach may be, your benchmark for success is whether you were able to put a stop to the data anarchy and related pitfalls that can cripple your organization.

Click here for more on the Layered Data Architecture and a primer on when to put what data where.

Oliver Ratzesberger

Oliver Ratzesberger is an accomplished practitioner, thought leader and popular speaker on using data and analytics to create competitive advantage – a vision he calls the Sentient Enterprise. He is chief operating officer for Teradata, reporting to Vic Lund.

Oliver leads Teradata’s world-class research and development organization and provides strategic direction for all research and development related to Teradata Database, integrated data warehousing, big data analytics, and associated solutions.
