Last Updated | May 12, 2026
Epic runs on three stacked data layers: Chronicles, Clarity, and Caboodle. Most healthcare organizations underuse at least one of them. Epic Clarity is the SQL-based reporting database that sits between Epic’s live clinical system and the enterprise analytics layer; it is where the majority of operational, clinical, and financial reporting actually happens. Over 415 organizations actively use Epic Clarity as a core analytics tool, and the ability to query and govern Clarity data is one of the most in-demand technical skills in healthcare IT. If you are evaluating Epic’s analytics stack, building out a reporting team, or deciding whether Epic Clarity is the right tool for a specific problem, this guide covers what you need to know.

What Is Epic Clarity?
Epic Clarity is a relational reporting database built on Microsoft SQL Server (or Oracle) that extracts data from Epic’s operational database (Chronicles).
Clarity restructures that data into a format designed for complex querying, analytical reporting, and integration with external business intelligence tools. It runs on its own dedicated server, separate from the live Epic environment, specifically so that heavy reporting queries do not affect clinical performance.
Epic Clarity is not a reporting interface. Analysts write SQL against it, connect it to tools like Tableau, Power BI, or Crystal Reports, and build the reports and dashboards that end up in front of clinical and operational leaders. Epic Clarity is designed for questions that require depth: large date ranges, complex calculations, multi-encounter cohort analysis, cross-system data integration, and audit-level granularity.
If your reporting need is real-time operational data, Clarity is the wrong tool. If your reporting need is a defensible quality measure, a longitudinal population analysis, or a financial model that traces cost to clinical events, Clarity is exactly the right one.
How the Epic Clarity Database Works
Understanding the Epic Clarity database starts with understanding how data gets into it. Epic’s live clinical system runs on Chronicles, a high-performance, hierarchical, non-relational database built on InterSystems’ Caché technology.
Epic Chronicles is optimized for speed, not for complex analytical queries. Running heavy SQL against Chronicles would slow the patient-facing workflows that clinicians depend on. Epic solves this by running a scheduled Extract, Transform, Load (ETL) process that moves data from Chronicles into Epic Clarity.
The ETL works in three steps:
- Extract: Data is pulled from Chronicles, where it is stored in Epic’s proprietary hierarchical format.
- Transform: The extracted data is restructured to fit a normalized relational schema, converting Epic’s internal data structures into standard SQL tables and columns.
- Load: The transformed data is loaded into Clarity’s relational tables, organized by SQL scripts and indexed for query performance.
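The three steps above can be sketched end to end. This is a hypothetical, heavily simplified illustration using Python and SQLite; real Clarity ETL is run by Epic-supplied jobs, and the Chronicles item numbers and table layout here are invented stand-ins.

```python
import sqlite3

# Extract: a record as it might look in Chronicles' hierarchical format
# (item numbers below are invented for illustration).
extracted = {"record_id": "Z1001", "items": {".2": "DOE,JANE", "110": "F"}}

# Transform: map Chronicles item numbers to relational columns.
ITEM_MAP = {".2": "PAT_NAME", "110": "SEX_C"}  # hypothetical mapping
row = {"PAT_ID": extracted["record_id"]}
for item, value in extracted["items"].items():
    row[ITEM_MAP[item]] = value

# Load: insert the flattened row into an indexed relational table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE PATIENT (PAT_ID TEXT PRIMARY KEY, PAT_NAME TEXT, SEX_C TEXT)")
conn.execute("INSERT INTO PATIENT VALUES (:PAT_ID, :PAT_NAME, :SEX_C)", row)

print(conn.execute("SELECT * FROM PATIENT").fetchall())
# [('Z1001', 'DOE,JANE', 'F')]
```

The point of the sketch is the shape of the pipeline: hierarchical record in, flat relational row out, ready for standard SQL.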
At most health systems, this ETL runs nightly. At Penn Medicine, the ETL starts at midnight and completes by 7:30 AM, Monday through Saturday.
The practical consequence is a 24-hour data lag. Clarity always reflects yesterday’s state, not the live clinical environment. For quality reporting, finance, and population health analysis, this lag is an acceptable tradeoff. For operational command centers or real-time monitoring, it is a genuine constraint.
The Epic Clarity Data Model
The Epic Clarity data model is a highly normalized relational schema containing thousands of tables, each representing a specific domain of clinical or operational data.
The schema is structured so that data from Chronicles’ hierarchical records maps to relational rows and columns, a translation process that preserves data fidelity but requires analysts to understand both Epic’s data conventions and standard relational database concepts.
Epic Clarity Tables
A few tables appear in nearly every Clarity-based report, making them foundational knowledge for any analyst:
- PATIENT: Master patient demographics, including PAT_ID (primary key), PAT_NAME, birth date, and SEX_C. Most reports join back to this table.
- PAT_ENC: The patient encounter table. PAT_ENC_CSN_ID is the primary key, linking to visit records. This table is central to clinical reporting.
- ORDER_PROC: Procedure orders. Central to surgical, diagnostic, and lab reporting.
- ORDER_MED: Medication orders.
- ORDER_RESULTS: Results tied to orders, including lab values.
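A minimal sketch of the canonical join pattern, encounters back to patient demographics, using Python’s sqlite3. The schemas are drastically simplified (production Clarity tables have hundreds of columns); only the table and key names called out above are taken from the article, and the sample data is invented.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE PATIENT (PAT_ID TEXT PRIMARY KEY, PAT_NAME TEXT);
CREATE TABLE PAT_ENC (PAT_ENC_CSN_ID INTEGER PRIMARY KEY, PAT_ID TEXT, CONTACT_DATE TEXT);
INSERT INTO PATIENT VALUES ('Z1', 'DOE,JANE'), ('Z2', 'ROE,RICHARD');
INSERT INTO PAT_ENC VALUES (1001, 'Z1', '2026-01-05'), (1002, 'Z1', '2026-02-10'),
                           (1003, 'Z2', '2026-02-11');
""")

# Join encounters to the master patient table and count visits per patient.
sql = """
SELECT p.PAT_NAME, COUNT(e.PAT_ENC_CSN_ID) AS encounter_count
FROM PATIENT p
JOIN PAT_ENC e ON e.PAT_ID = p.PAT_ID
GROUP BY p.PAT_ID, p.PAT_NAME
ORDER BY p.PAT_NAME
"""
for name, n in conn.execute(sql):
    print(name, n)
# DOE,JANE 2
# ROE,RICHARD 1
```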
Tables in Clarity follow consistent naming conventions inherited from Chronicles. Category columns end in _C, yes/no flags end in _YN, and dates are stored in Epic’s internal format (days since December 31, 1840), a convention inherited from MUMPS, the programming language on which Chronicles is based.
These conventions are not intuitive to analysts coming from standard SQL environments, which is one reason Clarity has a real learning curve.
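The date convention is straightforward to handle once stated explicitly. A small helper pair, assuming day 0 is December 31, 1840 as described above:

```python
from datetime import date, timedelta

# Day zero under the MUMPS convention described above.
EPIC_EPOCH = date(1840, 12, 31)

def epic_to_date(days: int) -> date:
    """Convert an Epic internal date (days since 1840-12-31) to a calendar date."""
    return EPIC_EPOCH + timedelta(days=days)

def date_to_epic(d: date) -> int:
    """Convert a calendar date back to Epic's internal day count."""
    return (d - EPIC_EPOCH).days

print(epic_to_date(1))                   # 1841-01-01
print(date_to_epic(date(2026, 5, 12)))   # internal day count for 2026-05-12
```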
Tables ending in _2, _3, and so on are continuations of the same logical record, not separate entities, a common source of confusion for new Clarity analysts. Multi-valued fields in Chronicles (fields that could have multiple values per record) are represented in Clarity as separate line-item tables where each value occupies its own row, identified by a LINE column.
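The line-item pattern looks like this in practice. A hypothetical sketch with sqlite3, using a simplified diagnosis line table where LINE = 1 is treated as the primary entry; the schema is illustrative, not Epic’s actual layout.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE PAT_ENC (PAT_ENC_CSN_ID INTEGER PRIMARY KEY, PAT_ID TEXT);
-- One row per value; (parent key, LINE) identifies each line item.
CREATE TABLE PAT_ENC_DX (PAT_ENC_CSN_ID INTEGER, LINE INTEGER, DX_ID TEXT,
                         PRIMARY KEY (PAT_ENC_CSN_ID, LINE));
INSERT INTO PAT_ENC VALUES (1001, 'Z1');
INSERT INTO PAT_ENC_DX VALUES (1001, 1, 'E11.9'), (1001, 2, 'I10');
""")

# Pull only the first line item (here, treated as the primary diagnosis).
primary = conn.execute("""
SELECT d.DX_ID
FROM PAT_ENC e
JOIN PAT_ENC_DX d ON d.PAT_ENC_CSN_ID = e.PAT_ENC_CSN_ID AND d.LINE = 1
""").fetchone()
print(primary)  # ('E11.9',)
```

Forgetting the LINE filter (or the LINE column entirely) is the classic new-analyst mistake: the join fans out and every encounter is counted once per line item.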
The Clarity Data Dictionary and Compass
Epic provides two primary tools for navigating the Clarity data model, plus a third visual aid:
- The Clarity Data Dictionary documents every table and column in the database, its data type, the Chronicles item it maps to, and its foreign key relationships.
- The Clarity Compass (accessible via the Epic UserWeb) shows the full catalog of available tables across the entire Clarity schema, though not every table shown in Compass will be present in a given organization’s database; whether a table is extracted depends on which Epic modules are licensed and configured.
- Entity Relationship (ER) diagrams are visual representations of the relationships between tables within a specific reporting area, including join cardinality. They do not document every possible join, but for a new analyst trying to understand how to connect encounters to orders to results, they save hours of trial and error.
Views and Derived Tables
Beyond base tables, Epic ships a set of prebuilt views and derived tables: SQL objects that combine data from multiple base tables into a single, easier-to-query structure. These are considered data marts within Clarity, subsets of data organized for a specific reporting purpose.
Well-run analytics teams build additional internal views on top of these, creating a governed semantic layer that protects report logic from schema changes during Epic upgrades.
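As a concrete sketch of that pattern, the snippet below defines a “certified view” over a simplified encounter table using Python’s sqlite3. The table, columns, and length-of-stay logic are illustrative assumptions, not Epic’s actual schema; the point is that the derivation lives in one governed place.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE PAT_ENC (PAT_ENC_CSN_ID INTEGER PRIMARY KEY, PAT_ID TEXT,
                      HOSP_ADMSN_TIME TEXT, HOSP_DISCH_TIME TEXT);
INSERT INTO PAT_ENC VALUES
  (1, 'Z1', '2026-01-01', '2026-01-04'),
  (2, 'Z2', '2026-01-02', NULL);  -- still admitted, excluded below

-- The governed view: completed stays with a derived length of stay.
CREATE VIEW V_COMPLETED_STAYS AS
SELECT PAT_ENC_CSN_ID, PAT_ID,
       julianday(HOSP_DISCH_TIME) - julianday(HOSP_ADMSN_TIME) AS los_days
FROM PAT_ENC
WHERE HOSP_DISCH_TIME IS NOT NULL;
""")

# Reports query the view, never the raw table.
print(conn.execute("SELECT PAT_ID, los_days FROM V_COMPLETED_STAYS").fetchall())
# [('Z1', 3.0)]
```

If an upgrade renames or restructures the underlying columns, only the view definition changes; every report built on it keeps working.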
Epic Clarity Reporting | What It Can and Cannot Do
What Can Epic Clarity Do?
Epic Clarity reporting excels at questions that require record-level granularity, longitudinal analysis, and auditability. Specific use cases where Clarity is the right tool:
1. Quality and Compliance Reporting
Measures tied to CMS, HEDIS, and internal quality programs require precise inclusion and exclusion logic, the kind that needs to trace back to individual encounters, orders, and timestamps. When quality scores affect reimbursement and public reporting, Clarity’s audit trail is what makes results defensible.
2. Population Health Cohort Building
Clarity supports building cohorts based on diagnoses, utilization patterns, medications, and outcomes across long time horizons. The data depth is what makes those cohorts clinically meaningful rather than demographically generic.
3. Financial and Operational Analysis
Finance teams use Clarity to connect clinical events to charges, denials, and cost drivers. Operational leaders use it to understand throughput, length of stay, and capacity utilization. The power is in combining clinical and financial domains in a single query.
4. Research Datasets
Clarity is frequently used as the source layer for IRB-approved research datasets. At Penn Medicine, research-purpose data requests require accompanying IRB documentation before access is granted.
5. BI Tool Integration
Clarity connects directly to Tableau, Power BI, Crystal Reports, and SAP Business Objects. Organizations layer their visualization and distribution platforms on top of Clarity’s data model, a separation Epic deliberately designed to give health systems flexibility in their BI stack.
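To make the cohort-building use case above concrete, here is a toy inclusion/exclusion query in Python with sqlite3. The schema and diagnosis codes are hypothetical stand-ins for real Clarity tables; the shape of the logic (an inclusion join plus a NOT EXISTS exclusion) is what real cohort queries look like.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE PATIENT (PAT_ID TEXT PRIMARY KEY);
CREATE TABLE PAT_ENC_DX (PAT_ID TEXT, DX_ID TEXT);
INSERT INTO PATIENT VALUES ('Z1'), ('Z2'), ('Z3');
INSERT INTO PAT_ENC_DX VALUES
  ('Z1', 'E11.9'),                    -- diabetes: include
  ('Z2', 'E11.9'), ('Z2', 'N18.6'),  -- diabetes plus ESRD: exclude
  ('Z3', 'I10');                      -- hypertension only: not in cohort
""")

cohort = conn.execute("""
SELECT DISTINCT p.PAT_ID
FROM PATIENT p
JOIN PAT_ENC_DX dx ON dx.PAT_ID = p.PAT_ID
                  AND dx.DX_ID LIKE 'E11%'        -- inclusion criterion
WHERE NOT EXISTS (
    SELECT 1 FROM PAT_ENC_DX x
    WHERE x.PAT_ID = p.PAT_ID AND x.DX_ID = 'N18.6'  -- exclusion criterion
)
""").fetchall()
print(cohort)  # [('Z1',)]
```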
What Can Epic Clarity Not Do?
Clarity is not real-time. Reports run against it will always reflect data that is at least one day old.
For use cases like live ED tracking, real-time bed management, or intraday staffing, Clarity is not the tool; Reporting Workbench (which queries Chronicles directly) is better suited for near-real-time operational visibility.
Clarity is also not self-service. Its normalized schema and thousands of tables make it inappropriate as an open environment where clinical staff write ad hoc queries. Unrestricted access increases the risk of performance degradation and inconsistent metric definitions across the organization.
The organizations that get the most out of Clarity treat it as a governed source layer: analysts build and maintain certified views, and end users access those views through validated reports and dashboards.
Epic Clarity vs. Caboodle: The Actual Difference
Epic Clarity and Epic Caboodle are not competing products. They are complementary layers designed for different purposes. The organizations that do best with Epic analytics use both, with clear governance about which tool answers which type of question.
What Is Epic Caboodle?
Epic Caboodle (formerly the Cogito Data Warehouse) is Epic’s enterprise data warehouse (EDW). It sits on top of Clarity in the data stack: its ETL pulls from Clarity (and can also accept non-Epic data sources), organizes that data into a dimensional model optimized for performance, and serves as the foundation for SlicerDicer, Epic’s self-service analytics tool, as well as executive dashboards and standardized enterprise KPIs.
Caboodle ships with a standardized healthcare data model, hundreds of pre-built tables and thousands of standardized data fields, and a built-in data dictionary. Epic maintains and updates this model through upgrades.
Organizations can extend Caboodle’s model with local tables and fields, and those extensions persist across upgrades, reducing technical debt compared to maintaining custom Clarity logic through every release cycle.
Epic Chronicles vs. Clarity and Caboodle
To understand where each tool fits, it helps to see the full data flow:
| Layer | System | Purpose | Data Freshness |
|---|---|---|---|
| Operational | Chronicles | Live clinical transactions, patient care | Real-time |
| Reporting | Clarity | Complex SQL reporting, clinical audit, research | Daily (ETL lag) |
| Enterprise Analytics | Caboodle | Dimensional dashboards, KPIs, and self-service | Daily (ETL lag) |
- Chronicles is the source of truth for everything that happens in the live Epic system.
- Clarity exposes that data in a relational format for deep analysis.
- Caboodle further transforms Clarity data into a structure optimized for speed and usability at the enterprise level.
Caboodle also accepts external (non-Epic) data, such as claims, registry feeds, and third-party quality feeds, while Clarity is Epic-focused.
Epic Clarity vs. Caboodle: When to Use Each
Use Clarity when:
- You need encounter-level, record-level, or timestamp-level clinical detail
- The question requires audit-trail defensibility (quality measures, compliance reports, research datasets)
- You are building a cohort with complex inclusion/exclusion logic
- You need to combine clinical, operational, and financial data in a single analysis
- You are feeding a downstream data pipeline or AI model that needs raw clinical granularity
Use Caboodle when:
- You are building executive dashboards or standardized enterprise KPIs
- You need fast, self-service analytics (SlicerDicer runs on Caboodle)
- You are integrating non-Epic data sources with Epic clinical data
- You need a consistent, upgrade-safe data model that does not require constant maintenance
- The audience is non-technical: department managers, clinical leaders, and executives
What Is Epic Clarity Certification?
Epic Clarity certification is a professional credential awarded by Epic Systems that validates an individual’s competency in Clarity’s data model, SQL query development, and healthcare data management within the Epic environment. There are two primary Clarity credentials:
- Epic Clarity Data Model certification covers the Clarity schema, how Chronicles data maps to relational tables, the data dictionary, ER diagrams, and how to write and interpret SQL queries against Clarity.
- Epic Clarity Report Writer certification focuses on building validated, production-grade reports using Clarity as the data source, typically with Crystal Reports or other approved tools.
These are separate certifications with separate training tracks and exams. Job postings commonly require both, along with Caboodle certifications, for senior analytics roles.
How to Get Epic Clarity Certification
Epic certification is not publicly available. The process requires employer sponsorship; you must be employed by an Epic customer organization, an Epic partner, or Epic itself to access the certification program.
The typical path:
- Your employer nominates you and registers you through Epic’s training portal
- You complete the required online prework before the in-person or instructor-led class
- Training takes place at Epic’s headquarters in Verona, Wisconsin, or through Epic-approved remote programs (remote options have expanded since 2020)
- You complete hands-on build work and project experience validated by a project leader
- You sit a proctored exam with multiple-choice and scenario-based questions covering the Clarity data model, SQL conventions, and Epic-specific database concepts
- On passing, you receive the credential from Epic
The training-to-exam process typically runs two to four weeks, though the timeline varies by role and prior experience.
Certifications require maintenance: as Epic releases upgrades (historically twice yearly), certified professionals participate in upgrade-readiness activities to keep credentials current.
Epic Clarity Certification Cost
The cost to Epic customers for training is built into Epic’s licensing and support structure rather than charged as a separate per-person fee.
For independent contractors or consultants pursuing certification through third-party training programs, the cost ranges from approximately $2,000 to $5,000, depending on the provider, program format, and included services like career coaching or exam prep. The broader Epic certification cost range across all modules is $500 to $10,000, depending on the credential and institution.
Epic Clarity certification is not available through self-study or independent online programs unless the individual has access to an Epic training environment through an employer relationship.
Epic Clarity and the Analytics Stack
Epic gives organizations SQL access to Clarity and expects them to layer their own BI tools on top. Common BI integrations for Clarity:
- Tableau is used for visual dashboards built on top of Clarity data, with published workbooks serving clinical and operational leaders
- Microsoft Power BI is increasingly common in Epic environments, particularly at organizations already standardized on the Microsoft stack
- Crystal Reports is Epic’s traditional reporting tool for Clarity, still widely deployed; the Epic Clarity Report Writer certification is built around it
- SAP Business Objects is used at academic medical centers like Penn Medicine for scheduled report distribution; Clarity reports can be published to Business Objects servers for secure distribution via email or file share
Common Epic Clarity Implementation Failures
Based on what consistently goes wrong in production Clarity environments:
Fragile one-off queries
- Teams that treat Clarity as a self-service sandbox build hundreds of individual queries with hard-coded logic and no shared standards.
- When Epic releases an upgrade that touches the underlying tables, those queries break one by one, with no central fix. Organizations that build reusable certified views and a governed semantic layer before writing production reports avoid this problem entirely.
Performance degradation from poor query design
- Epic Clarity’s highly normalized schema means that unfiltered queries against large tables can run for hours, especially in organizations with years of historical data.
- Well-written Clarity queries apply filters early, use indexes, and avoid unnecessary full-table scans. Poor query governance, letting anyone run anything against production Clarity, is one of the fastest ways to degrade environment performance for everyone.
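The filter-early point can be demonstrated with any SQL engine’s query planner. The sketch below uses SQLite (table and index names are illustrative, not Epic’s) to show an indexed date filter being answered by an index search rather than a full-table scan.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE ORDER_PROC (ORDER_PROC_ID INTEGER PRIMARY KEY,
                PAT_ID TEXT, ORDERING_DATE TEXT)""")
# Synthetic data: 10,000 orders spread across patients and January dates.
conn.executemany("INSERT INTO ORDER_PROC VALUES (?, ?, ?)",
                 [(i, f"Z{i % 100}", f"2026-01-{(i % 28) + 1:02d}")
                  for i in range(10_000)])
conn.execute("CREATE INDEX ix_order_proc_date ON ORDER_PROC (ORDERING_DATE)")

# Ask the planner how it will execute a date-filtered query.
plan = conn.execute("""EXPLAIN QUERY PLAN
    SELECT COUNT(*) FROM ORDER_PROC WHERE ORDERING_DATE = '2026-01-15'""").fetchall()
print(plan[0][3])  # e.g. "SEARCH ... USING COVERING INDEX ix_order_proc_date ..."
```

The same discipline applies in Clarity: filter on indexed columns (dates, IDs) as early as possible, and check the execution plan before promoting a query to production.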
Conflicting metric definitions
- When multiple teams build their own Epic Clarity reports for the same metric (readmission rate, for example) without shared logic, they get different numbers.
- Leadership sees conflicting dashboards, stops trusting analytics, and the analytics program loses credibility. The fix is governance: standardized metric definitions documented in certified views, not re-derived in individual report queries.
How Folio3 Digital Health Supports Epic Integration
Folio3 Digital Health works with healthcare organizations across Epic’s clinical and analytics modules, including Clarity-based reporting builds, HL7 and FHIR integration between Epic and third-party platforms, and data pipeline design for organizations building AI and analytics capabilities on top of Epic data.
For organizations evaluating whether they are getting full value from their Epic investment, the questions we see most often are around Clarity governance, Caboodle adoption, and how to structure a reporting team that can actually scale.
If you are looking at ROI or assessing which hospitals use Epic and how they approach reporting, the analytics layer is where the most variability exists between organizations that are getting a return on their Epic investment and those that are not.
Our complete guide to Epic modules is a useful starting point for organizations still mapping out the full system landscape before addressing analytics.
Closing Note
Epic Clarity is a well-built reporting database that rewards organizations that treat it like infrastructure, with governance, certified views, shared standards, and a clear division of labor between Clarity and Caboodle. It is genuinely complex, not because SQL is hard, but because the combination of Epic’s data conventions, thousands of normalized tables, and clinical context takes time to internalize.
Organizations that invest in certified analysts, documented query standards, and a governed semantic layer get durable, defensible analytics that can scale. Organizations that treat Clarity as a free-for-all analytics sandbox tend to accumulate fragile query debt that becomes expensive to maintain through every Epic upgrade cycle. The technical investment in Clarity pays off. The governance investment pays off more.
Frequently Asked Questions
What is Epic Clarity reporting used for?
Epic Clarity reporting is used for quality/compliance measures, population health cohort analysis, longitudinal outcome tracking, financial analysis, and research datasets requiring record-level detail and audit-trail defensibility.
What is an Epic Clarity report writer?
An Epic Clarity report writer is a certified healthcare IT professional who builds production-grade reports using Clarity with tools like Crystal Reports, requiring strong SQL skills and Clarity data model knowledge.
How many tables are in the Epic Clarity database?
Penn Medicine’s Clarity implementation contains over 18,000 tables; the actual number varies by organization based on licensed Epic modules, and only extracted tables are queryable in a given environment.
Can Epic Clarity integrate with Tableau or Power BI?
Yes. Epic supports SQL access to Clarity, allowing direct connection to Tableau, Power BI, Crystal Reports, and Business Objects for visualization and distribution.
What skills does an Epic Clarity analyst need?
Epic Clarity analysts need strong SQL skills, familiarity with Epic’s data conventions, experience with Clarity Data Dictionary and Compass, Clarity Data Model certification, and healthcare domain knowledge.
About the Author

Shalin Amir Ali
I am a Software Engineer specializing in digital health technologies, developing secure, cloud-based applications for telemedicine, health tracking, referral management, DICOM viewer applications for medical imaging, and HL7/FHIR integration. Passionate about AI-driven diagnostics and health informatics, I build solutions that enhance patient care and optimize clinical workflows. With expertise in Python, .NET (C#), React.js, Next.js, TypeScript, and JavaScript, I create scalable healthcare applications that seamlessly integrate with modern ecosystems.



