
This is the second of a four-part series on self-service analytics. For the other parts, please check out my Substack.
Recap & Introduction
In Part 1, Self-Service Analytics Grounded in Reality - The Good, The Bad, and The Ugly, I described a “sweet spot” achievable when self-service analytics is pursued with a realistic perspective that puts people first. The emphasis on people throughout the post was no accident. While the data behind self-service analytics is exciting, success ultimately depends on people. This makes it the perfect challenge for applying a “People, Process, & Technology (PPT)” mindset!
Self-service analytics represents a paradigm shift from traditional working methods and requires buy-in from all parties involved. Business units, departments, and team members need to be invested both financially and emotionally, excited about the new possibilities! In the most successful implementations, data teams must reimagine their approach, viewing data as a product rather than just fuel for existing products. An approach of “build it, and they will come” will surely fail.
Let’s explore the essential questions you should ask when applying a PPT mindset to self-service analytics. We’ll discuss potential answers and their impact on your implementation.
The Questions Before the Questions
Before diving into specifics, consider your enterprise’s perspective. A self-service analytics architecture is typically just one piece of your overall technical landscape. Your answers here should align with your organization’s C-suite vision.
What is the value added by self-service analytics?
Sometimes we skip over the basic question: “Do we really need this?” Take a step back and consider whether self-service analytics is necessary for your enterprise right now. It might be worth adding to your roadmap without being an immediate priority, depending on your data’s maturity.
What is your enterprise’s data paradigm?
Most enterprises have established guidance or a vision for data in their roadmap. Your organization might prefer domain data ownership by individual teams (a data mesh approach) or prefer a central data team that maintains control. Neither approach is inherently right or wrong, but strong opinions exist. Consider these preferences when planning your design.
What are your priorities for self-service analytics?
Understanding why you want self-service analytics is crucial before deciding to pursue it. Are you aiming for faster decision-making, reducing IT bottlenecks, or empowering business users? After examining your enterprise’s people and processes, you might discover your original goals require more investment than anticipated or aren’t achievable in your current context.
People
I’ve already beaten the dead horse on self-service analytics being a people-first problem, so let’s jump right into the questions you should consider.
What is the level of organizational buy-in?
This is perhaps the most crucial question. There is a reason there are countless sayings about people and product adoption: “you can lead a horse to water, but you can’t make it drink,” “the best product is one that gets used,” “tools are only as good as the hands that use them,” and “you can’t teach someone who doesn’t want to learn.” These all apply to self-service analytics.
This becomes especially relevant when the product depends on users’ own initiative - it’s right there in the name: self-service! The workforce using the tool needs to see genuine commitment from the entire organization, from top to bottom.
We’re talking about real commitment. Almost anyone would say yes to the sales pitch—“oh, I want THAT.” Things change when it’s time to pay, in work and cost, for those benefits. I, for example, want a six-pack… but not at the expense of pizza and cookies. The other questions we discuss will help you gauge how much buy-in you truly have across the organization.
As a final note on this question, I want to share my “Semantic Layer - Hierarchy of Needs,” which I discuss in Part 3 of this series: The Self-Service Analytics Tech Stack - Finding Your Sweet Spot. I believe one driving force determines whether you should pursue more advanced capabilities in your self-service technical stack: the investment and alignment of your organization.
Who are your users?
Next, consider who will actually use your system. Here are some common personas:
- Business User - These individuals focus entirely on business operations and typically have limited technical experience. Some might want to run simple queries, while others may have zero interest in technical work—that’s why they chose their current career path. This group includes executives, account managers, sales teams, and other business units.
- Business Analyst Users - While some business users don’t directly work with data, they receive support from these analysts. A business analyst understands the business’s inner workings but likely isn’t running queries themselves. Instead, they refresh and interpret existing reports, working through IT for new questions.
- Business Super User - Though not usually a formal role, every large enterprise has these individuals. They’ve spent decades with the company and understand why everything works the way it does through their hard-earned experience. Their extensive domain knowledge sets them apart from typical business analysts.
- Data Scientist Users - These specialists have unique needs and skills compared to business analysts. Machine learning experimentation requires specific tools and different data formats than typical BI reports. Data scientists also focus on advanced statistical metrics that differ from standard business needs.
Even within each role, needs vary between departments. Consider not just the role itself, but the actual people in these positions to fully understand their capabilities, limitations, and requirements. The following table provides a starting point for role-based analysis.
| Persona | Self-service Analytics Capabilities | Output Tooling |
|---|---|---|
| Business User | - Simple and intuitive dashboards for key metrics - Pre-defined, canned reports - Guided ad hoc analysis (e.g., drag-and-drop interface) - Natural language query (NLQ) for asking questions in plain English - Alerts for specific thresholds or anomalies - Export to basic formats (e.g., Excel, PDF) | - BI tools like Power BI, Tableau (viewer mode) - Email/Slack alerts - Spreadsheet tools (Excel, Google Sheets) - Mobile-friendly dashboards or apps |
| Business Analyst User | - Advanced ad hoc querying and analysis - Creation of custom dashboards - Data visualization tools for trend and pattern identification - Ability to combine data from multiple sources - Scheduling and sharing of reports - Tools for lightweight data transformation and cleansing | - BI tools like Power BI, Tableau, Looker - Spreadsheet integrations - Export to CSV, Excel, or PDF - Collaboration tools like SharePoint or Confluence |
| Business Super User | - Advanced visualization and reporting capabilities - Data exploration with query flexibility - Access to metadata and semantic layers - Ability to create and publish certified datasets - Capability to set up data governance rules for self-service users - Lightweight predictive analytics tools | - BI tools with developer/admin capabilities (e.g., Tableau, Looker, Power BI Pro) - Data catalogs (e.g., Collibra) - SQL query tools (e.g., Dremio, Databricks SQL) - Dashboard publishing and distribution tools |
| Data Scientist User | - Advanced querying with SQL and other programming interfaces - Access to raw and pre-processed datasets - Tools for statistical analysis and machine learning - Support for notebooks (e.g., Jupyter, Databricks Notebooks) - Integration with AI/ML platforms - Ability to deploy models to the BI layer for operationalization | - Jupyter Notebooks, Databricks - Python/R development environments - Data exploration tools (e.g., Dremio, Starburst) - ML platforms (e.g., TensorFlow, PyTorch, Vertex AI) - Version control tools (e.g., GitHub, DVC) - BI integrations for embedding ML results (e.g., Looker, Tableau) |
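One way to make the persona table above actionable is to encode it as data your platform can enforce. The following sketch shows the idea as a simple capability lookup; the persona and capability names are my own hypothetical labels, not features of any specific BI tool.

```python
# Illustrative sketch: the persona/capability matrix encoded as data,
# so a platform team can gate features per persona.
# All persona and capability names here are hypothetical.

PERSONA_CAPABILITIES = {
    "business_user": {"view_dashboards", "canned_reports", "nlq",
                      "alerts", "export_basic"},
    "business_analyst": {"view_dashboards", "canned_reports", "ad_hoc_query",
                         "custom_dashboards", "combine_sources",
                         "schedule_reports"},
    "super_user": {"view_dashboards", "ad_hoc_query", "custom_dashboards",
                   "publish_certified_datasets", "manage_governance_rules"},
    "data_scientist": {"ad_hoc_query", "raw_data_access", "notebooks",
                       "deploy_models"},
}

def can(persona: str, capability: str) -> bool:
    """Return True if the persona's tier includes the capability."""
    return capability in PERSONA_CAPABILITIES.get(persona, set())
```

Keeping the matrix as data (rather than scattering checks through code) also makes it a living artifact you can review with each department as their needs evolve.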
What is your commitment to training?
You understand your “users of today,” but who are your “users of tomorrow”? It’s unlikely your team members can immediately jump into self-service analytics without help. If your enterprise isn’t willing to invest in training, it shouldn’t plan to invest much in the effort overall.
To maximize self-service analytics benefits, you’ll need business users who understand query languages and data models well enough to join data and calculate metrics. If your self-service system ends up handling all these joins and calculations, you risk leaving your sweet spot and entering the Shaw’s Principle zone discussed in Part 1.
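To make that training bar concrete, here is the kind of query a trained business user should be able to write on their own: a join plus an aggregated metric. The table and column names are hypothetical, and sqlite3 simply stands in for whatever SQL engine your self-service platform exposes.

```python
# A minimal example of self-service querying: joining two tables and
# computing a metric without asking the data team for a pre-built view.
# Schema and data are illustrative only; sqlite3 stands in for the
# platform's real SQL engine.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (order_id INTEGER, region_id INTEGER, amount REAL);
    CREATE TABLE regions (region_id INTEGER, name TEXT);
    INSERT INTO orders VALUES (1, 10, 250.0), (2, 10, 150.0), (3, 20, 400.0);
    INSERT INTO regions VALUES (10, 'East'), (20, 'West');
""")

# Revenue per region: the join and the aggregation are the user's own work.
rows = conn.execute("""
    SELECT r.name, SUM(o.amount) AS revenue
    FROM orders o
    JOIN regions r ON o.region_id = r.region_id
    GROUP BY r.name
    ORDER BY r.name
""").fetchall()
print(rows)  # [('East', 400.0), ('West', 400.0)]
```

If most of your users can’t get to something like this after training, plan for the platform (or analysts) to absorb that work instead, and budget accordingly.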
A word of caution about training: don’t be overambitious. As technologists, we might assume everyone wants to learn data querying. However, people make career choices intentionally. Forcing technical work on unwilling participants might drive them to seek opportunities elsewhere.
Who is your data team?
This answer connects back to your enterprise’s data paradigm. Your self-service analytics system must align with your data team’s structure. A large centralized team will have different capabilities than a small team supporting multiple cross-functional business units.
Consider what new roles you’ll need to make everything work (and ensure you have resources for training and hiring them)!
| Role | Description | Skills Needed |
|---|---|---|
| Data Steward | Ensures data quality, accuracy, and accessibility for self-service users, usually for a specific domain. Requires very strong knowledge of the business domain. | Data quality management, data profiling/cleansing tools, governance principles, SQL, interpersonal skills. |
| Semantic Data Modeler | Designs and maintains the semantic layer, ensuring accurate and intuitive data representations for analytics and reporting. | Semantic modeling expertise, SQL and database design, knowledge of BI tools (Power BI, Tableau, Looker), metadata management, data visualization principles, collaboration with business teams. |
| Data Product Owner | Manages the lifecycle of data products, aligning them with business needs and ensuring they deliver value to stakeholders. | Product management experience, understanding of data pipelines and architecture, stakeholder management, ability to translate business needs into data requirements, BI tool familiarity, Agile methodologies. |
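To ground the Semantic Data Modeler role from the table above, here is a deliberately tiny sketch of what a semantic layer does: metrics are defined once, centrally, and compiled into SQL so every persona gets the same number for “revenue.” The metric names, structure, and compiler are illustrative assumptions, not any particular semantic-layer product.

```python
# Hypothetical sketch of a semantic layer: governed metric definitions,
# declared once, compiled to SQL on demand. Names are illustrative only.

METRICS = {
    "revenue": {"expr": "SUM(amount)", "table": "orders"},
    "order_count": {"expr": "COUNT(*)", "table": "orders"},
}

def compile_metric(name, group_by=None):
    """Build a SQL query from a governed metric definition."""
    m = METRICS[name]
    select = f"{m['expr']} AS {name}"
    if group_by:
        return f"SELECT {group_by}, {select} FROM {m['table']} GROUP BY {group_by}"
    return f"SELECT {select} FROM {m['table']}"

print(compile_metric("revenue", group_by="region_id"))
# SELECT region_id, SUM(amount) AS revenue FROM orders GROUP BY region_id
```

The design point is the single source of truth: when the definition of “revenue” changes, it changes in one place, and every dashboard, report, and ad hoc query picks it up.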
Process
Organizational buy-in determines willingness to adjust existing processes or adopt new ones. Successfully implementing self-service analytics requires both.
Who pays for what?
As they say, “money is the root of all self-service analytics.” Well, not exactly, but you get the point. An enterprise must determine financial responsibility in this new operating model.
When business units start driving data requirements and running their own queries, how does this affect budgets? Should upstream data teams or end users pay for new calculated metrics? How are compute costs managed and controlled?
If business departments won’t invest financially, temper expectations about what self-service analytics can achieve.
Who do you prioritize in onboarding, and how do you roll out new features?
Hopefully you find yourself in a situation where business departments are fighting to be the first to have their data onboarded into the new platform. However, attempting everything at once through a big bang approach would likely be overwhelming. You’ll need a strategic onboarding plan.
This becomes especially critical if your enterprise is skeptical about self-service analytics. A quick win can transform answers to our previous questions and accelerate progress. Conversely, in risk-averse organizations, you’ll need to be pragmatic and avoid potential pitfalls.
| Roll-out Approach | Description | Pros | Cons |
|---|---|---|---|
| Role-Based Onboarding | Users are onboarded in groups based on their roles (e.g., Business Users, Analysts, Data Stewards, etc.), with training and access tailored to their responsibilities. | - Customized training enhances user satisfaction and adoption. - Clear focus on specific needs and use cases for each role. - Easier to implement governance and access control. | - Longer overall rollout time as groups are onboarded sequentially. - Interdependencies between roles may delay productivity for certain users. - Requires in-depth role analysis and preparation for each group. |
| Departmental/Team-Based Onboarding | Users are onboarded team by team or department by department, starting with teams that are most data-driven. | - Allows targeted focus on high-value departments to demonstrate ROI quickly. - Easier to manage and address team-specific challenges during onboarding. - Promotes collaborative learning within teams. | - Other departments may feel deprioritized, delaying broader adoption. - Cross-departmental reports or workflows may face challenges if one team lacks access. - Scaling success to less data-driven teams might require extra effort. |
| Use-Case-Driven Onboarding | Users are onboarded based on high-priority use cases or projects (e.g., specific dashboards or reports), addressing specific analytical needs first. | - Demonstrates immediate value by solving critical business problems. - Encourages user engagement through tangible outcomes. - Builds momentum and excitement for future phases. | - May overlook broader needs, leaving some users underserved initially. - Focused use cases may not cover foundational training needs for all users. - Success depends heavily on selecting the right initial use cases. |
| Pilot Group Onboarding | A small group of users from diverse roles/departments is onboarded first to test the platform, provide feedback, and serve as champions for broader rollout. | - Helps identify potential issues and refine the onboarding process. - Pilot users can act as internal advocates and trainers for others. - Reduces risk of a failed rollout by addressing feedback early. | - Slower rollout timeline as feedback loops and adjustments are needed. - Requires careful selection of pilot users to ensure diverse representation. - Pilot group may not reflect the needs of the broader user base. |
| Tiered Feature Release | Users are onboarded with access to basic features first, with advanced functionality rolled out over time. | - Reduces user overwhelm by focusing on foundational skills initially. - Allows the team to monitor adoption and usage of specific features. - Enables smoother troubleshooting and issue resolution in early stages. | - Users may get frustrated if they can’t access advanced features they need. - Advanced features may require retraining later, increasing overall time and effort. - Slower time-to-value for power users or analysts who rely on advanced capabilities. |
How to manage centralized data governance and access?
Self-service analytics inevitably involves balancing data governance and flexibility. Self-service aims to provide agility and distributed responsibility, which naturally leads to some data inconsistencies. Meanwhile, data governance strives to maintain a controlled, central version of data and metadata, which can slow down your self-service analytics goals.

Organizations often swing between extremes, going all-in on one direction until the drawbacks become apparent, then overcorrecting in the opposite direction. This cycle continues back and forth.
This complex challenge rarely has a perfect solution. Instead, recognize the limitations of any approach and choose the one that best serves your needs.
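One middle-ground pattern between those extremes is to certify datasets: anything certified is open for self-service, while queries touching uncertified data are flagged for data-steward review rather than blocked outright. The sketch below is a hedged illustration of that policy; the dataset names and the policy itself are assumptions, not a prescription.

```python
# Illustrative governance middle ground: certified datasets are freely
# queryable; uncertified ones trigger steward review instead of a hard
# block. Dataset names and policy are hypothetical.

CERTIFIED_DATASETS = {"sales_certified", "customers_certified"}

def review_query(datasets_used):
    """Classify a query: 'approved' if fully certified, else 'needs_review'."""
    uncertified = set(datasets_used) - CERTIFIED_DATASETS
    return {
        "status": "approved" if not uncertified else "needs_review",
        "flagged": sorted(uncertified),
    }
```

A policy like this preserves agility (nothing is blocked) while giving governance a visible queue of the inconsistencies it needs to chase down.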
What is the operating model?
Without detailed knowledge of your organization, I can only explore this through additional questions. The answers combine insights from all previous considerations:
- What do data teams produce?
- Will they create views to hand off to business units, or work collaboratively with business teams to develop data products?
- What does the new reporting infrastructure look like?
- How are reports validated?
- Who manages the promotion of reports from individual to enterprise-wide use?
- How are terms and metrics defined across the enterprise?
- How do we continuously improve reports and handle change management?
- How do we gather and incorporate user feedback to enhance tools?
- How will we support users in creating queries and interpreting results?
The end result should be a team ready to tackle your self-service analytics journey. Now we can finally turn to the third piece of the puzzle…
Technology
Your people and process dictate your technology investment. The first step is to determine the key features needed by identifying the gaps in your own technical stack.
What are the features you need based on people, process, and current technical landscape?
Take another look at your personas and the training you’re willing to invest in them. From there, make a list of the capabilities you need. Prioritize them based on organizational needs, and identify your gaps.
Use your final list of capabilities to drive forward your target state.
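The gap analysis itself can be as simple as a set difference between the capabilities your personas require and what your current stack provides. A minimal sketch, with hypothetical capability names:

```python
# Minimal capability gap analysis: required capabilities (from the persona
# review) minus what the current stack already provides. Names are
# hypothetical placeholders.

required = {"nlq", "ad_hoc_query", "certified_datasets", "row_level_security"}
current_stack = {"ad_hoc_query", "certified_datasets"}

gaps = sorted(required - current_stack)
print(gaps)  # ['nlq', 'row_level_security']
```

The resulting gap list, prioritized by organizational need, becomes the shopping list for your target state.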
What is your new technology stack?
Not so fast! Let’s take a beat to fully digest our people and process questions before diving into this topic. As a quick recap:
- Self-service analytics success is fundamentally a people-first challenge—not a technology one—requiring genuine organizational buy-in at all levels and realistic assessment of different user personas’ needs (from business users to data scientists) and their willingness to engage with data.
- Implementation requires careful balance between governance and flexibility, with clear answers to critical questions about funding (who pays for what), rollout strategy (how to phase in users and features), and operating model (how teams will work together and support users).
Now, you have an idea of what self-service analytics looks like to your organization. Your sweet spot. Next, we’ll discuss finding the right tech stack for your needs: The Self-Service Analytics Tech Stack - Finding your Sweet Spot.
#blog-post #technical-deep-dive