
In this fifth chapter of our journey through the ISO 25000 family, we continue the evolving story of Harvey, the developer whose humble webmail service has grown beyond all expectations.

Harvey's story began in Part 1, when he set out to build a simple webmail service — only to discover that working code wasn't enough. His journey into software quality started with a sobering realization: users need more than just functionality — they need trust, usability, and resilience.

Over the following three chapters, Harvey explored the ISO 25010 quality model, gradually transforming his product from functional to formidable:

  • Part 2 focused on Functional Suitability and Performance Efficiency.
  • Part 3 explored Compatibility, Usability, and Reliability.
  • Part 4 addressed Security, Maintainability, and Portability.

Together, they built a product that was not only fast and feature-rich but also trusted, adaptable, and scalable. However, Harvey now faced a different kind of challenge.

The product worked beautifully — but the data? That was another story. Support tickets began to surface as analytics dashboards displayed conflicting figures. Some user profiles had missing fields. Duplicates crept into contact lists. One user summed it up: "Your system works well, but the numbers don't always add up." Harvey realized something crucial: a system is only as good as the data it delivers.

Enter ISO 25012 — the standard that shifts the spotlight from software quality to data quality.

AI Generated: Data weighs heavy — Harvey realizes that without trustworthy information, even the best systems fail.

What is ISO 25012?

While ISO 25010 defines the quality of software systems, ISO 25012 specifies the quality of data within those systems. According to the standard, data quality is the degree to which data satisfies the requirements defined by its intended use. In practice, this means that data must not only be technically correct but also practical, trustworthy, and secure.

ISO 25012 divides data quality into two broad categories:

  • Inherent Data Quality: properties intrinsic to the data itself
  • System-Dependent Data Quality: properties shaped by how the system manages, stores, or presents the data

Let's explore each in detail through the lens of Harvey's evolving platform.

🔹 Inherent Data Quality

Inherent Data Quality refers to the characteristics of the data that are independent of how it's processed, stored, or displayed. It focuses on the intrinsic value and reliability of the data, regardless of the system in which it is used.

For Harvey, this meant looking past the software features and asking:

"Is the data itself accurate, complete, and trustworthy?"

Imagine an email address that's misspelled, a timestamp from the wrong timezone, or a message marked as "read" that was never opened. These issues aren't caused by system performance or the user interface — they stem from the quality of the data itself.

ISO 25012 defines five key attributes under Inherent Data Quality:

  • Accuracy: Is the data correct and precise?
  • Completeness: Is all necessary data present?
  • Consistency: Is the data free from internal contradictions?
  • Credibility: Can the data be trusted as correct, accurate, and objective?
  • Currentness: Is the data up to date?

Each of these plays a vital role in whether users can rely on the information they see. In Harvey's case, even the most well-built system would lose credibility if it delivered flawed or outdated data.

Inherent Data Quality and Sub-categories

🔹 Inherent Data Quality → Accuracy: Getting the Facts Right

Accuracy means that data correctly reflects real-world values without error. For Harvey's webmail system, now embedded in enterprise workflows, that definition carried weight. Misdirected invoices, misaddressed emails, or outdated customer records weren't just annoying — they risked trust and business continuity.

After several incidents — bounced messages, duplicate names, and mismatched customer records — Harvey realized that accuracy couldn't rely on user input alone.

A simple typo, such as "John Smith" vs. "Jon Smith," might not trigger a system error, but it could derail a contract, confuse the recipient, or cause CRM workflows to link the wrong profile.

So, Harvey's team introduced layered safeguards tailored to the CRM context:

  • Business rule validation: Emails sent via CRM workflows triggered pre-send checks. If a customer's name didn't match the record on file, or a required field (such as country or contact role) was missing, the system flagged it.
  • Cross-field consistency: A client marked as "enterprise" couldn't have missing company details. A contact with a personal Gmail address couldn't be the main billing contact without additional confirmation.
  • Customer confirmation loops: When critical profile data was updated, recipients were asked to review and approve the changes before workflows resumed.
  • Bounce recovery workflows: If emails bounced, the associated record was flagged as "accuracy risk" and routed for cleanup before it could be used again.
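The layered safeguards above can be sketched in a few lines of Python. This is a minimal illustration, not Harvey's actual implementation: the field names and the "accuracy_risk" status are assumptions.

```python
# Illustrative pre-send business-rule check for a dict-based CRM record.
REQUIRED_FIELDS = ("name", "country", "contact_role")

def presend_issues(record: dict) -> list[str]:
    """Return accuracy flags for a record; an empty list means clear to send."""
    issues = [f"missing:{field}" for field in REQUIRED_FIELDS if not record.get(field)]
    if record.get("status") == "accuracy_risk":
        # Record was flagged after a bounce and must be cleaned up first.
        issues.append("flagged:accuracy_risk")
    return issues
```

A record with an empty result list is clear to send; anything else is routed for cleanup before the workflow resumes.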

For Harvey, accuracy wasn't just about being precise — it was about being responsible. It meant building workflows that asked the right questions, caught inconsistencies early, and ensured that data accurately reflected the business reality it was intended to represent.

🔹 Inherent Data Quality → Completeness: No Holes in the Record

Completeness ensures that all required data is present for a given use case. While accuracy is about correctness, completeness is about presence. You can't validate or act on what isn't there.

In Harvey's CRM-integrated webmail system, incomplete data silently broke workflows. For example:

  • A billing approval failed because the "Department" field was blank.
  • An escalation rule didn't trigger because a user lacked a defined role.
  • A quarterly report skipped clients marked as "enterprise" — because many of them had no country or region specified.

These weren't bugs in code — they were gaps in data. Those gaps blocked automation, confused users, and delayed critical decisions.

To tackle the problem, Harvey's team extended the safeguards they had begun using for accuracy:

  • Context-aware forms: When creating a new contact or account, the UI dynamically required fields based on profile type. For example, selecting "Enterprise Client" made "Department," "Country," and "Contract Owner" mandatory.
  • Backend enforcement: Even if users bypassed the UI, the API layer validated the presence of essential fields and rejected incomplete submissions with clear error messages.
  • Cross-field logic: If a user marked a contact as "Primary Billing Contact," the system checked for address, phone, and department fields and flagged any omissions.
  • Admin dashboards for cleanup: Incomplete records were visually flagged and queued for follow-up, allowing support staff to correct issues before they created downstream problems.
  • Customer confirmation: If the system detected a missing field during a workflow (such as sending a contract), it paused the process and prompted either internal staff or the customer to complete the data.
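As a rough sketch of the context-aware requirement rules, the required-field set can be keyed by profile type. The profile names and fields below are invented for illustration:

```python
# Illustrative completeness rules: which fields are mandatory depends on
# the profile type being created.
REQUIRED_BY_TYPE = {
    "enterprise": {"name", "department", "country", "contract_owner"},
    "individual": {"name", "email"},
}

def missing_fields(profile_type: str, record: dict) -> set[str]:
    """Return the fields still needed before the record can be accepted."""
    required = REQUIRED_BY_TYPE.get(profile_type, {"name"})
    return {field for field in required if not record.get(field)}
```

The same rule table can back both the dynamic UI form and the API-layer validation, so the two can't drift apart.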

Completeness doesn't just reduce friction — it protects the business logic that depends on complete inputs. Harvey learned that even the best automation is only as good as the data it has to work with.

🔹 Inherent Data Quality → Consistency: One Source of Truth

Consistency ensures that data remains uniform and synchronized across all representations, whether in storage, in transit, or user-facing views. It means that, regardless of where or how a user accesses the system, they see the same data, reflecting the same reality.

Harvey began noticing troubling inconsistencies:

  • A user's subscription plan was displayed as "Premium" in the CRM but as "Basic" in the billing portal.
  • A support rep updated a customer record, but the change wasn't reflected in analytics until the next day.
  • Worst of all, a user read an email on their desktop, only to open their phone and find the message unread — or missing altogether.

These disconnects didn't just confuse users — they eroded trust.

To restore confidence, Harvey's team made consistency a priority across all layers:

  • Centralized business logic: Updates to critical fields like account status, plan type, or billing details were routed through a single source of validation, reducing divergence across modules.
  • Unified APIs and service layers: Both internal components and external integrations (such as the CRM and mobile app) used the same interfaces for reading and writing data, ensuring that updates propagated reliably.
  • Cross-device synchronization: They implemented real-time sync and event-driven updates, ensuring that changes on one device were immediately reflected on others, regardless of whether they were made on a mobile device, tablet, or desktop.
  • Background reconciliation jobs: Scheduled processes regularly scanned for anomalies — mismatched states, delayed updates, or desynchronized records — and either auto-corrected them or flagged them for review.
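A background reconciliation pass like the one described can be as simple as diffing the same field across two stores. A hedged sketch, assuming each module exposes its records as an id-to-value mapping:

```python
# Illustrative reconciliation: compare one field (e.g., plan type) across
# two modules, such as the CRM and the billing portal.
def find_mismatches(crm: dict[str, str], billing: dict[str, str]) -> dict[str, tuple]:
    """Map record id -> (crm_value, billing_value) wherever the two disagree."""
    return {
        uid: (crm[uid], billing.get(uid))
        for uid in crm
        if crm[uid] != billing.get(uid)
    }
```

A scheduled job can then auto-correct mismatches from the authoritative source, or queue them for human review.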

For Harvey, consistency wasn't just about data storage — it was about user experience, confidence, and predictability. In a world of multi-device, multi-user systems, consistency is what turns data from a guess into a guarantee.

🔹 Inherent Data Quality → Credibility: Can You Trust It?

Credibility speaks to how believable, transparent, and reputable the data appears, not just whether it's correct, but whether users trust it enough to act on it. This trust is especially fragile in systems like Harvey's, where data flows in from multiple sources and ultimately influences decisions, dashboards, and audits.

Harvey first saw credibility crack when:

  • A misconfigured CRM sync inflated usage stats in client dashboards, showing double the actual activity.
  • A client's regional manager spotted a chart that contradicted an internal report — and immediately questioned the entire system.
  • Admins began asking: "Where did this number come from? Can we rely on it?"

Harvey realized that even accurate data can lose trust if users don't understand its origin, reliability, or freshness.

So, his team went to work strengthening credibility across the board:

  • Provenance tracking: Every dashboard value was annotated with its source (e.g., "Synced from CRM X at 03:17 UTC"), clearly indicating the origin of the data.
  • Timestamping and freshness indicators: Key metrics included "Last updated" tags so users knew how current the data was.
  • Anomaly alerts: If data feeds failed, returned unexpected values, or showed suspicious spikes, admins were notified automatically. This transparency stopped faulty data from quietly eroding trust.
  • Sync logs and verification hooks: For enterprise clients, the system exposed integration logs and even allowed periodic verification runs to confirm alignment with the source systems.
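Provenance tracking and freshness tags can be sketched as a small wrapper that annotates each dashboard value with its source and sync time. The structure below is illustrative, not the platform's real schema:

```python
# Illustrative provenance annotation for a dashboard value.
from datetime import datetime, timezone

def annotate(value, source: str, synced_at: datetime) -> dict:
    """Wrap a metric with where it came from and when it was synced."""
    return {
        "value": value,
        "source": source,
        "synced_at": synced_at.isoformat(),  # e.g. shown as "Synced from CRM X at 03:17 UTC"
    }
```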

For Harvey, credibility wasn't just about accuracy — it was about confidence. When users can see where data comes from, how it was processed, and whether it's behaving as expected, they're far more likely to believe in what the system tells them.

🔹 Inherent Data Quality → Currentness: You're Looking at Now, Now

Currentness refers to whether data is up to date, reflecting the real-world state as it exists now, not as it was days or weeks ago. It's about timeliness, not just correctness. A data point might be technically accurate when it was entered, but if it no longer reflects reality, it becomes a liability.

Harvey learned this the hard way when:

  • A client noticed that an employee marked "active" had left the company three weeks earlier.
  • Marketing emails were sent to users who had been deactivated.
  • Reports included outdated sales leads, which skewed forecasts and damaged trust.

The issue wasn't accuracy; the data was simply out of date.

To address this, Harvey's team focused on making data current across all critical workflows:

  • Scheduled syncs were established between the webmail platform and external systems, such as the HR database and CRM, ensuring that updates were regularly pulled in.
  • Data freshness indicators were added to the UI ("Last updated 12 minutes ago"), providing users with clarity on the recency of each piece of information.
  • Stale record detection flagged entries that hadn't been updated within expected timeframes, especially for user roles, permissions, or billing contacts.
  • Review loops and auto-expiry were introduced: if a record hadn't been touched in 90 days, the system marked it for review or temporary deactivation.
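The stale-record rule can be expressed as a one-line check. The 90-day window mirrors the policy above, but the exact threshold is an assumption and would be configurable:

```python
# Illustrative stale-record detection: anything untouched for longer than
# the review window is marked for review or temporary deactivation.
from datetime import datetime, timedelta

def needs_review(last_updated: datetime, now: datetime, max_age_days: int = 90) -> bool:
    """True when a record has aged past the review window."""
    return (now - last_updated) > timedelta(days=max_age_days)
```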

Harvey realized that currentness is a moving target. The system had to stay in sync with the outside world or risk becoming a source of outdated and misleading information.

Because in business, yesterday's truth can be today's risk.

🔹 System-Dependent Data Quality

While Inherent Data Quality focuses on characteristics that are intrinsic to the data itself — such as accuracy, consistency, and completeness — System-Dependent Data Quality concerns how the system manages, delivers, and protects that data. It's not just about what the data is, but how it's used, stored, accessed, and presented.

You can think of inherent quality as the truth embedded in the data, and system-dependent quality as the reliability of the system that delivers that truth.

For example, a customer record might be perfectly accurate and complete, but if the system fails to make it available when needed or exposes it to the wrong person, its value—and quality—plummets.

System-dependent attributes, such as accessibility, confidentiality, efficiency, and recoverability, reflect the operational reality of how the data behaves in context. They ensure that trustworthy data stays trustworthy throughout its lifecycle — whether at rest, in motion, or during a crisis.

Let's explore these system-dependent qualities through Harvey's ever-expanding platform.

System-Dependent Data Quality and Sub-categories

🔹 System-Dependent Data Quality → Accessibility: Where Simplicity Meets Power

Accessibility refers to the degree to which authorized users can locate, retrieve, and utilize data when needed, without unnecessary delays or friction. It's not just about having the correct permissions; it's about discoverability, speed, and ease of use.

For Harvey's platform, accessibility meant ensuring that sales representatives could quickly view customer profiles, support agents could retrieve message logs without having to dig through raw data, and administrators could audit records effortlessly.

To support these goals, the team introduced a range of improvements:

  • Role-based access control (RBAC): Users only saw what was relevant to their roles — Admin, Support, Sales, or otherwise.
  • Intuitive search interfaces: Auto-complete, contextual filters, and grouped dropdowns made it easy to navigate large datasets.
  • Optimized indexing: Frequently queried fields, such as email address and client ID, were indexed to ensure fast lookups, even at scale.
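Role-based visibility can be sketched as a field filter per role. The role names follow the text; the field sets are assumptions made for illustration:

```python
# Illustrative RBAC field filtering: each role sees only its slice of a record.
ROLE_FIELDS = {
    "support": {"name", "email", "ticket_history"},
    "sales": {"name", "email", "plan", "company"},
}

def visible_record(role: str, record: dict) -> dict:
    """Return only the fields the given role is allowed to see."""
    allowed = ROLE_FIELDS.get(role, set())  # unknown roles see nothing
    return {key: value for key, value in record.items() if key in allowed}
```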

Harvey quickly realized that making data accessible wasn't just a backend concern — it was a critical part of the user experience. After all, data that can't be found might as well not exist.

And that's where user interface design came into play. A powerful search engine is meaningless if users can't figure out how to use it. Harvey's team discovered that crafting a simple, intuitive UI for data access was one of the most challenging tasks of all. What now feels obvious — grouping filters, adding tooltips, aligning layouts with workflows — took multiple iterations, usability tests, and design rewrites. Making access feel effortless is anything but.

Great UX hides complexity. It invites users in rather than intimidating them. In Harvey's world, improving accessibility meant aligning deep technical efficiency with thoughtful, human-centered design.

🔹 System-Dependent Data Quality → Compliance: Built to Respect the Rules

Compliance ensures that data is collected, stored, and processed in accordance with applicable laws, regulations, industry standards, and organizational policies. It's not just a legal necessity — it's a cornerstone of trust.

For Harvey's platform, compliance began with GDPR. European users expected (and deserved) transparency and control over their data. Harvey's team implemented:

  • Consent tracking with explicit opt-in and purpose declarations
  • Right-to-access, right-to-erasure, and data export workflows, accessible from user settings
  • Retention policies enforced at the database level, automatically archiving or deleting records based on user roles and data age
  • Audit logs for key actions involving personal data (e.g., profile updates, consent revocations)
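As an illustration of database-level retention, records can be selected for archival once they outlive their class's retention window. The data classes and windows below are invented examples, not a statement of what any regulation requires:

```python
# Illustrative retention rule: pick records that have outlived their window.
from datetime import datetime, timedelta

RETENTION_DAYS = {"audit_log": 365, "marketing_consent": 730}

def expired(records: list[dict], now: datetime) -> list[dict]:
    """Return records due for archival or deletion under the retention policy."""
    return [
        r for r in records
        if (now - r["created_at"]) > timedelta(days=RETENTION_DAYS.get(r["kind"], 90))
    ]
```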

But GDPR wasn't the only framework that mattered. As Harvey's product expanded globally and moved into new industries, the team needed to support additional standards:

  • CCPA (California Consumer Privacy Act) for US-based clients
  • HIPAA safeguards for healthcare clients ensure that patient data remains protected.
  • SOC 2 requirements for enterprise clients demanding third-party verified controls around security, availability, and privacy
  • PCI-DSS practices for any integrations involving payment data

UI design also played a critical role. To truly support compliance:

  • Data privacy choices had to be visible, understandable, and accessible — not buried in footnotes or dark patterns
  • System messages needed to clarify why data was collected and how it would be used
  • Admins needed control panels to define retention rules, export data, and manage consent lifecycles without writing SQL

Harvey came to understand that compliance wasn't just a legal checkbox. It was a daily design challenge — an ongoing collaboration among legal, engineering, and UX teams. The result? A platform that didn't just follow the rules — it respected users while doing so.

🔹 System-Dependent Data Quality → Confidentiality: Don't peek

Confidentiality ensures that sensitive data is protected from unauthorized access, whether accidental or malicious. It's a cornerstone of both data quality and user trust, because no one cares how accurate or complete their data is if it falls into the wrong hands.

For Harvey's platform, confidentiality became even more critical as the webmail system expanded into industries like legal, healthcare, and finance. These sectors required more than generic protections — they demanded strict guarantees that personal, financial, and proprietary data would remain confidential at all times.

While foundational safeguards were covered deeply in Security (Part 4), Harvey's team reinforced confidentiality at the data layer with practices including:

  • Data masking: Personally identifiable information (PII), such as emails, phone numbers, and names, was redacted in logs and admin views unless explicitly required.
  • Encryption everywhere: Not just for backups, but also for logs, database fields, and internal API traffic — using keys managed via a secure vault.
  • Role-based access control (RBAC): Every internal tool and dashboard was scoped according to the principle of least privilege, ensuring users saw only what they needed and nothing more.
  • Monitoring and alerting: Any access to sensitive records triggered audit logging, and anomalies (e.g., repeated access attempts or significant data exports) were flagged for review.
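Data masking for logs and admin views can be as simple as redacting most of an email's local part. A minimal sketch:

```python
# Illustrative PII masking: keep just enough of an email to be recognizable
# in logs while redacting the rest.
def mask_email(email: str) -> str:
    local, _, domain = email.partition("@")
    if not domain:
        return "***"  # not an email; redact entirely
    visible = local[0] if local else ""
    return f"{visible}***@{domain}"
```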

Confidentiality wasn't only enforced in code. Harvey's team developed training materials for internal staff, outlining security best practices, phishing detection, and safe handling of user data, recognizing that people, not just software, play a role in data protection.

As clients became more compliance-savvy, Harvey found that confidentiality wasn't just a security feature — it was a selling point. Demonstrating how data was protected earned trust, reduced audit risk, and opened doors to enterprise contracts.

🔹 System-Dependent Data Quality → Efficiency: Warp speed

Can the system process, store, and serve data without consuming excessive resources? Efficiency in data quality is about more than just performance benchmarks — it's about ensuring that every operation involving data (retrieval, processing, aggregation) is streamlined, scalable, and cost-effective.

For Harvey's team, data inefficiency surfaced in subtle but costly ways:

  • Analytics dashboards slowed to a crawl under large queries.
  • Background jobs to clean or export CRM records blocked other system processes.
  • API calls overloaded the server with redundant data fetching.

To tackle this, they optimized database queries, batched large operations, cached frequently used results, and moved computationally intensive tasks to asynchronous workers.
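Two of those optimizations, caching a frequently used result and batching a large operation, can be sketched in Python. The lookup below is a stand-in for an expensive database query:

```python
# Illustrative efficiency patterns: memoization and batching.
from functools import lru_cache

@lru_cache(maxsize=1024)
def plan_for(user_id: str) -> str:
    """Stand-in for an expensive lookup; repeated calls hit the cache."""
    return "premium" if user_id.endswith("p") else "basic"

def batched(items: list, size: int):
    """Yield fixed-size chunks so a large job runs in small, non-blocking steps."""
    for i in range(0, len(items), size):
        yield items[i:i + size]
```

In a real system, the cache would live in something like Redis and the batches would feed asynchronous workers, but the access pattern is the same.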

Many of these performance-related efforts were already discussed in Part 2, where Harvey tackled response time, resource usage, and capacity. But when viewed through the ISO 25012 lens, efficiency also reflects how data design and architecture influence those metrics.

By eliminating wasteful access patterns, normalizing data models, and scaling infrastructure intelligently, Harvey's team ensured that their system stayed fast, not just when it was small, but as it grew.

🔹 System-Dependent Data Quality → Precision: The Right Level of Detail

Precision ensures that data is captured and stored at a level of detail appropriate to its intended use. It's not about being overly exact — it's about being exact enough.

In Harvey's webmail system, this became particularly important in billing records, timestamped logs, and CRM data exports. To address recurring issues with rounding errors and inconsistent time formats, his team implemented:

  • Stricter typing for numeric fields, especially in financial calculations.
  • Decimal precision enforcement to avoid discrepancies in currency values.
  • Standardized timestamp formats with appropriate granularity for different modules — seconds for user activity, milliseconds for audit logs.
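Decimal-precision enforcement for currency can be illustrated with Python's decimal module, which avoids the drift of binary floating point:

```python
# Illustrative currency precision: quantize to two decimal places with
# explicit, deterministic rounding instead of trusting float arithmetic.
from decimal import Decimal, ROUND_HALF_UP

def to_cents(amount: str) -> Decimal:
    """Parse a monetary amount and round it to exactly two decimal places."""
    return Decimal(amount).quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)
```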

Every domain has its precision demands. A grocery system might weigh apples to the nearest tenth of a kilogram, while a pharmaceutical database might need precision down to micrograms. Overengineering precision can slow systems and confuse users; underengineering it can lead to costly mistakes.

Harvey learned that precision is about relevance — capturing just enough detail to support accuracy, clarity, and reliability in decision-making.

🔹 System-Dependent Data Quality → Traceability: Following the Data Breadcrumbs

Traceability refers to the ability to track the origin, movement, and transformation of data across the system. It's essential for audits, debugging, compliance, and trust, especially in complex, multi-stakeholder environments like Harvey's webmail and CRM integration platform.

As the platform evolved, data flowed in from multiple sources: user input, synced CRM records, third-party integrations, and automated workflows. Without clear traceability, questions like "Where did this value come from?", "Who changed it?" or "What was it before?" became nearly impossible to answer.

To address this, Harvey's team implemented several key mechanisms:

  • Audit trails: Every change to critical data (e.g., customer status, subscription plan, billing info) was recorded with timestamps, user IDs, and old/new values. This allowed the team to understand not just what happened, but when, how, and by whom.
  • Source tagging: All imported data included metadata identifying its origin, such as "UserInput", "CRM_Sync", or "API_PartnerX". This helped pinpoint problematic integrations or identify manual overrides.
  • Versioned records: For certain sensitive entities (e.g., contracts, profile configurations), the system stored historical snapshots. This allowed rollback to previous states and made change histories visible in the UI for transparency.
  • Change correlation: A trace ID was attached to all operations within a workflow, tying together logs, database updates, and API calls so that incidents could be reconstructed end-to-end.

Traceability is what turns data into a story. It doesn't just show what the data is — it shows how it got there. And in a system that powers business decisions, billing, and compliance, that story must always be available when needed.

🔹 System-Dependent Data Quality → Understandability: What?!

Even when users can access data, the next question is: Do they understand what it means? Understandability refers to the degree to which humans and systems can accurately interpret data. It's about clarity, naming conventions, and removing ambiguity from the datasets that drive decisions.

In Harvey's webmail platform, poor understandability was causing friction across teams. Support agents couldn't tell the difference between "status_code: 3" and "status_code: 4". Engineers were misinterpreting internal flags. Automated scripts sometimes failed due to unexpected values in fields.

To fix this, Harvey's team focused on improving semantic clarity:

  • Clear field names and descriptions: No more vague labels like val1 or flag. Every data element had a purpose, and that purpose was documented.
  • Human-readable enums: They replaced cryptic values like status = 2 with clear labels like "Pending Approval".
  • Shared data dictionaries: Teams across engineering, analytics, and support relied on a single source of truth that defined each field, its meaning, format, and allowed values.
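Replacing cryptic codes with human-readable enums can be sketched as follows; the status names other than "Pending Approval" are invented for illustration:

```python
# Illustrative enum mapping: one definition shared by engineering,
# analytics, and support, instead of bare magic numbers.
from enum import IntEnum

class Status(IntEnum):
    DRAFT = 1
    PENDING_APPROVAL = 2
    APPROVED = 3
    REJECTED = 4

def label(code: int) -> str:
    """Turn a raw status code into its human-readable label."""
    return Status(code).name.replace("_", " ").title()
```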

Understandability complements accessibility. One gets you to the data; the other helps you use it correctly. Without both, even accurate and complete data can lead to confusion, misinterpretation, or bad decisions.

🔹 System-Dependent Data Quality → Availability: Data When You Need It

Availability refers to whether data is accessible when needed, not just in ideal conditions, but also during failures, maintenance, or periods of high demand. It's one of the most visible aspects of data quality: if users can't reach the data, nothing else matters.

In Part 3, we explored Availability as a core sub-characteristic of Reliability under ISO 25010. However, in the context of ISO 25012, the focus shifts from system uptime to data uptime. Even if the application is running, degraded or missing data can disrupt workflows, reporting, and decision-making.

Harvey's platform had to ensure that critical data — such as customer messages, billing records, and audit logs — remained accessible and intact at all times. To achieve this, his team implemented several safeguards:

  • Redundant backups: Automatically scheduled and geo-distributed to protect against data loss.
  • Cross-region replication: Ensuring that if one data center went down, another had a live copy ready.
  • Graceful degradation: If parts of the system failed, fallback mechanisms preserved core functionality (e.g., showing cached inbox data when real-time sync was unavailable).
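Graceful degradation can be sketched as a fetch that falls back to a cached copy and tells the caller which one it got:

```python
# Illustrative fallback: try the live source; on failure, serve the cached
# copy and label it so the UI can say the data may be slightly stale.
def fetch_inbox(live_fetch, cache: dict, user_id: str):
    try:
        data = live_fetch(user_id)
        cache[user_id] = data       # refresh the cache on success
        return data, "live"
    except Exception:
        if user_id in cache:
            return cache[user_id], "cached"
        raise                       # nothing to fall back on
```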

Availability isn't just a checkbox. It's about designing for the unexpected — and ensuring that users never experience the panic of missing data. In Harvey's world, building trust meant ensuring the data was always there when needed, even under pressure.

🔹 System-Dependent Data Quality → Portability: Across Borders, Timezones, Units, and Formats

In a connected world, data rarely stays in one place. It's extracted, transformed, exported, integrated, analyzed, and visualized across an ever-growing constellation of tools, environments, and stakeholders. In ISO 25012, portability refers to the ease with which data can be moved, interpreted, and reused across systems without losing its meaning or structure.

Harvey's platform had to support this reality. Clients weren't just using the webmail service directly — they were syncing it with CRMs, embedding analytics in dashboards, and exporting reports for global teams. To meet these needs, the system was designed with portability in mind.

Harvey's team implemented:

  • Standardized export formats such as CSV, JSON, and XML to ensure broad compatibility.
  • Versioned and documented APIs, allowing third-party tools to integrate reliably over time.
  • Interoperability layers for seamless connections to analytics platforms, spreadsheets, and enterprise data pipelines.

But true portability also means accounting for global diversity. Harvey's system handled:

  • Time zone normalization, storing all dates in ISO 8601 UTC format while presenting them in each user's local time.
  • Language support through UTF-8 encoding and flexible interface localization, including support for right-to-left scripts.
  • Locale-aware formats, adjusting number separators, currency symbols, and date styles based on user or organization preferences.
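Time-zone normalization can be illustrated with Python's datetime: store everything as ISO 8601 UTC, then render per user. The fixed offset below is a simplification standing in for a full IANA time-zone lookup:

```python
# Illustrative time-zone handling: normalize to UTC for storage,
# convert to the user's offset for display.
from datetime import datetime, timezone, timedelta

def store_utc(dt: datetime) -> str:
    """Normalize an aware datetime to ISO 8601 UTC for storage."""
    return dt.astimezone(timezone.utc).isoformat()

def present_local(stored: str, offset_hours: int) -> str:
    """Render a stored UTC timestamp in a user's local offset."""
    tz = timezone(timedelta(hours=offset_hours))
    return datetime.fromisoformat(stored).astimezone(tz).isoformat()
```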

Terminology and field definitions were maintained consistently and shared through central data dictionaries, ensuring that exported data could be interpreted correctly by other teams or systems without ambiguity.

It's worth noting that this notion of data portability is distinct from the system portability discussed in Part 4. There, the focus was on how the software itself could run across platforms, from cloud clusters to mobile devices. In contrast, ISO 25012's portability is about ensuring that the information produced by the system can move freely and retain its value, regardless of where it is ultimately used.

By treating data as a citizen of many environments, Harvey's platform empowered users to take their information anywhere and still make sense of it.

🔹 System-Dependent Data Quality → Recoverability: Restore with Confidence

Even with the best architecture, things can go wrong — disks can fail, code can break, and accidents can happen. When they do, data recoverability is the safety net that ensures your system can bounce back without permanent loss. In ISO 25012, recoverability refers to the system's ability to restore data to a consistent, usable state after a failure, whether caused by hardware issues, software bugs, human error, or cyberattacks.

For Harvey's team, this wasn't theoretical. One server outage corrupted a segment of the customer database — thankfully, they had a tested recovery plan. From then on, recoverability became a priority.

They implemented:

  • Versioned backups, enabling rollbacks to specific points in time and preserving data history for audit and debugging purposes.
  • Restore simulations, regularly testing the ability to recover data from backups under real-world conditions.
  • Disaster recovery policies, outlining roles and responsibilities in crisis scenarios and ensuring redundancy across data centers and cloud zones.

This form of recoverability directly supports the broader system reliability discussed in Part 3, where ISO 25010 defines Recoverability as a subcharacteristic of system behavior under fault conditions. While ISO 25010 focuses on whether the system can resume functioning after failure, ISO 25012's concern is whether the data within that system can be reliably restored and trusted again.

In Harvey's case, the two went hand in hand: a resilient platform wasn't enough if the data it ran on couldn't be recovered. Together, they ensured that even in the face of disruption, service continuity and data integrity could be restored with confidence.

Final Thoughts: Data is the New Interface

Software is only half the equation. The other half — the more fragile half — is the data. Harvey learned that user trust isn't just earned through security and features. It's maintained through clean, correct, and credible data.

In ISO 25010, quality shaped how the system behaved. In ISO 25012, quality shapes what the system actually says.

With proper data quality, the dashboards don't lie. The exports don't fail. The analytics aren't misleading.

Harvey's team wasn't just improving their product anymore. They were improving every insight drawn from it.

Coming Next

Part 6: Evaluating Quality Processes with ISO 25040, CMMI, and ISO 9001

As Harvey's platform matured, so did the questions from enterprise clients — not just what the system does, but how it was built, tested, and improved. Procurement teams now require structured evidence: What are your quality assurance processes? How do you handle defects? How do you measure improvement? In Part 6, we shift the spotlight from product outcomes to process discipline. It's time to explore how quality is evaluated — not just in the final product, but in the way the product itself is planned, built, and continuously improved.