Episode 67 — Sustain program performance by managing change, exceptions, and technical drift
In this episode, we’re going to talk about something that quietly determines whether a privacy program stays strong over time or slowly falls apart, and that is how well it handles change, exceptions, and technical drift. When you first learn privacy program concepts, it can feel like once you set policies, run assessments, and put controls in place, the program should stay stable. Real organizations do not stay stable, even when they want to, because new features get built, teams reorganize, vendors change their services, and systems evolve in ways that are hard to notice day by day. If the program does not have a reliable way to manage that constant motion, performance slowly degrades until deadlines are missed, promises to individuals are broken, and leadership loses confidence. Sustaining performance is not about making the program rigid, because rigidity creates workarounds and resentment. It is about building a disciplined way to absorb change without losing control of privacy outcomes. The three ideas in the title are tightly connected: change is what pushes on the program, exceptions are how the program flexes without breaking, and technical drift is what happens when reality gradually diverges from what you think is true. By the end, you should be able to explain how these forces show up in everyday operations and how a privacy program can stay reliable through them.
Before we continue, a quick note: this audio course is a companion to our course companion books. The first book focuses on the exam itself and provides detailed guidance on how best to pass it. The second book is a Kindle-only eBook that contains 1,000 flashcards that can be used on your mobile device or Kindle. Check them both out at Cyber Author dot me, in the Bare Metal Study Guides Series.
A good starting point is to understand that program performance is not only measured by whether policies exist, but by whether the organization consistently produces the outcomes those policies are supposed to guarantee. Outcomes include things like honoring data minimization, responding to rights requests on time, keeping retention aligned to purpose, and preventing inappropriate sharing. Those outcomes depend on many moving parts, including people, processes, and technology, and each part changes over time. Change can be planned, like a product launch, or unplanned, like a vendor outage that forces a workaround. Exceptions are the cases where the normal rule cannot be followed for a specific reason, and they are inevitable in any complex environment. Technical drift is the gradual mismatch between what the program believes is happening and what is actually happening, often caused by small changes that nobody documents. If you ignore these forces, performance drops quietly until there is a visible failure, and by then, it is harder to fix. If you manage them deliberately, the program stays stable even while the business moves fast. The key is to treat sustaining performance as an operational discipline, not as a one-time project.
Change management in a privacy program is about noticing changes early and evaluating whether they alter privacy risk, obligations, or controls. Many organizations already have change processes for technology and operations, but privacy needs to be linked into them in a way that is practical. A change can be a new data field collected in a form, a new integration between systems, a new use of analytics, a new vendor, or a new purpose for existing data. It can also be a change in who has access, where data is stored, or how long it is retained. The privacy program’s job is not to stop change, but to ensure change does not quietly break privacy commitments. That usually means having triggers that route certain kinds of changes into review, such as changes involving sensitive data, large scale processing, new sharing, or new cross-border transfers. For beginners, the most important idea is that change itself is not the problem; unmanaged change is the problem. When change is managed, you can adapt controls before harm occurs rather than scrambling after the fact.
To manage change well, you need a consistent way to ask three questions: what changed, why did it change, and what privacy outcomes could it affect? These questions sound simple, but they can reveal a lot. What changed might include new data types, new flows, new storage locations, or new user-facing behaviors like notice wording and choices. Why it changed might reveal whether the purpose is expanding, which matters for lawful processing and purpose limitation. What outcomes it could affect connects the change to practical obligations, like rights handling, retention, access control, and transparency. A change that touches consent or preferences affects whether users’ choices are honored. A change that increases data sharing affects accountability and third-party obligations. A change that introduces profiling affects fairness and expectations. The privacy program needs a habit of translating change into risk and obligation impact in plain language. This translation is what allows teams to understand why the review matters and what information is needed. If the program cannot translate change into consequences, people will see reviews as arbitrary.
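To make the idea of review triggers concrete, here is a minimal sketch in Python. The field names and the trigger list are hypothetical, chosen only to illustrate the routing logic described above; a real program would tailor triggers to its own obligations and tooling.

```python
from dataclasses import dataclass

@dataclass
class ChangeRequest:
    """A proposed change, described in privacy-relevant terms (hypothetical fields)."""
    description: str
    involves_sensitive_data: bool = False
    is_large_scale: bool = False
    adds_new_sharing: bool = False
    adds_cross_border_transfer: bool = False

def requires_privacy_review(change: ChangeRequest) -> bool:
    """Route a change into privacy review if it hits any of the triggers
    mentioned in the episode: sensitive data, large scale processing,
    new sharing, or new cross-border transfers."""
    return any([
        change.involves_sensitive_data,
        change.is_large_scale,
        change.adds_new_sharing,
        change.adds_cross_border_transfer,
    ])
```

For example, a change that adds a new third-party data share would be routed into review, while a pure copy edit to help text would not. The point is not the code itself but the discipline: triggers are explicit, consistent, and easy for teams to understand.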
Exceptions are the next part, and they are one of the easiest places for performance to collapse if the program does not handle them carefully. An exception is a deliberate decision to deviate from a standard rule or control for a specific reason, usually for a specific time, and with specific compensating measures. The idea is not to ignore the rule, but to acknowledge that the normal approach is not possible or not reasonable in that case. Exceptions can be necessary for business continuity, for technical constraints, or for unusual scenarios that the standard policy did not anticipate. The danger is that exceptions can turn into permanent loopholes if they are not managed, tracked, and reviewed. Beginners sometimes think exceptions are failures, but a mature program treats exceptions as a normal part of governance, as long as they are controlled. The goal is to allow flexibility without losing accountability. When exceptions are handled well, they reduce risky workarounds because teams have a legitimate path to request and manage deviations.
A strong exception process usually includes a clear description of the requested deviation, the reason it is needed, the risks created by deviating, and the compensating controls that will reduce those risks. Compensating controls are alternative safeguards, like stricter access monitoring, shorter retention elsewhere, additional review steps, or added transparency, depending on the issue. Exceptions should also have a defined owner and a defined expiration or review date, because without time limits, an exception becomes the new normal. For example, a team might request an exception to retention policy because a legacy system cannot delete properly yet, but the exception should require a plan to fix deletion and a timeline for completion. If the program grants exceptions without requiring remediation, it is quietly accepting risk without reducing it. Another important exception concept is consistency and fairness, because if exceptions are granted unpredictably, teams will see the process as political. Consistent criteria and documentation help ensure exceptions are about real constraints, not about who argued the loudest. When exceptions are managed like accountable decisions, they can actually strengthen program performance because they reveal where controls need redesign.
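The elements of a well-formed exception described above can be sketched as a simple record. This is an illustrative sketch only; the field names, and the idea of a single review-date check, are assumptions for the example, not a prescribed schema.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ExceptionRecord:
    """One accountable deviation from a standard control (illustrative fields)."""
    deviation: str                       # which rule or control is being deviated from
    justification: str                   # why the standard approach is not feasible here
    owner: str                           # the person accountable for this exception
    expires_on: date                     # every exception needs an expiration or review date
    compensating_controls: list = field(default_factory=list)
    remediation_plan: str = ""           # how the underlying gap will be fixed

    def is_due_for_review(self, today: date) -> bool:
        """An exception at or past its expiration date must be re-reviewed or closed,
        so it cannot quietly become the new normal."""
        return today >= self.expires_on
```

In the legacy-system example from the episode, the record would name the retention policy as the deviation, list stricter access monitoring as a compensating control, and carry a remediation plan with a deletion-fix timeline, so the exception stays temporary by construction.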
Technical drift is the third piece, and it is often the most dangerous because it is subtle. Technical drift happens when systems and configurations evolve over time and the privacy program’s understanding does not keep up. It can happen when new services are added, when default settings change after updates, when logging changes, when access roles expand, or when data flows are altered through integrations. Drift also happens when documentation becomes outdated, like data inventories that no longer match reality or retention rules that are documented but not implemented consistently. The result is that the program believes controls exist, but in reality those controls may be partially broken or bypassed. Drift can also occur on the vendor side, where a contractor changes sub-processors, expands support access, or shifts data locations, and the customer organization does not notice. Technical drift is not always malicious, and it is often the natural result of busy teams making small adjustments over time. That is why combating drift requires systematic monitoring and periodic verification, not just strong policies. When you manage drift, you are essentially keeping the program’s mental model aligned with reality.
To detect drift, privacy programs rely on signals and checks that confirm whether controls are operating as intended. These checks can include periodic reviews of access patterns, sampling of retention and deletion outcomes, validation that data inventories still match what systems contain, and review of vendor change notices and performance evidence. Drift detection often focuses on the high-risk areas, such as systems holding sensitive data, systems with many users, and systems that change frequently. Another drift signal is repeated exceptions, because if many teams request the same exception, it might indicate that the standard control does not fit reality or that a system limitation is widespread. Similarly, repeated issues in rights request fulfillment can indicate drift in data mapping, because if teams cannot locate data reliably, the underlying inventory may be outdated. Drift can also show up over time as increasing delays, rework, and confusion, because those are often symptoms of processes no longer matching system reality. The key is to treat drift as an expected problem, not as a surprising failure. If you expect drift, you build routines to catch it, and those routines sustain performance.
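The repeated-exceptions signal mentioned above is simple enough to sketch. Assuming a hypothetical exception log shaped as (team, control) pairs, any control that keeps accumulating exceptions gets flagged as a candidate for redesign; a real program would pull this data from its exception register or GRC tooling.

```python
from collections import Counter

def drift_signals(exception_log, threshold=3):
    """Flag controls with repeated exception requests, a common drift signal.

    `exception_log` is a list of (team, control_name) tuples — a hypothetical
    shape chosen for illustration. A control that several teams must routinely
    deviate from probably no longer fits how systems actually work.
    """
    counts = Counter(control for _, control in exception_log)
    return sorted(control for control, n in counts.items() if n >= threshold)
```

Here, three separate teams requesting an exception to the same deletion rule would surface that rule as a drift candidate, turning the exception register into an early-warning system rather than just a list of approvals.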
Managing change, exceptions, and drift also requires good recordkeeping, because performance sustainability depends on being able to see what decisions were made and why. A record of change review decisions helps you understand why a design was approved and what safeguards were expected. A record of exceptions helps you see where risk is being accepted temporarily and whether it is actually temporary. A record of drift findings and remediation helps you ensure the program is improving rather than repeating the same issues. Good records also support accountability, because they make it clear who owned a decision, who was responsible for implementing controls, and whether the follow-through happened. For beginners, recordkeeping can sound like bureaucracy, but the real value is memory. Organizations forget, teams change, and people leave, and without records, the program cannot learn. Records allow the program to compare what was planned to what actually happened and to correct course. This is what turns privacy management into a durable discipline rather than a personality-driven effort.
Sustaining performance also means managing the human side of change and exceptions, because people will always look for the easiest way to get work done. If privacy processes feel unpredictable or slow, teams will route around them, and that creates unmanaged risk. If privacy processes feel clear, fair, and helpful, teams will bring changes forward earlier and ask for exceptions openly rather than hiding them. This is why communication matters, especially around what kinds of changes trigger review and what information is needed to make decisions quickly. It also matters how the program responds to exceptions, because if exceptions are always denied without explanation, teams will stop asking. If exceptions are always granted without discipline, teams will treat them as standard. The program should aim to be consistent and transparent in how it handles these requests, which builds trust and reduces conflict. Another important human factor is training, not as a one-time event, but as ongoing reinforcement of what good privacy behavior looks like during change. When people understand the why behind controls, they are more likely to maintain them during busy periods.
A privacy program that sustains performance also uses metrics and monitoring to prioritize effort, because not all change deserves the same scrutiny. Some changes are low impact, like updating text that does not alter processing, while others are high impact, like introducing new profiling or new sharing. Exceptions vary too, where some are low risk and short-lived, and others are high risk and persistent. Drift can be minor, like small documentation lag, or major, like access expanding beyond intended limits. The program needs a way to classify these and focus on the risks that matter most. This is where risk-based thinking becomes practical, because it prevents the program from treating everything as urgent and becoming overwhelmed. When you prioritize, you also protect the program’s credibility, because teams see that reviews are targeted rather than arbitrary. A program that tries to review everything deeply often ends up reviewing nothing well. A program that reviews the right things well can sustain performance with limited resources.
To bring all of this together, you can think of sustaining program performance as keeping three loops healthy: the change loop, the exception loop, and the drift loop. The change loop notices planned and unplanned changes, evaluates impact on privacy outcomes, and adjusts controls before harm occurs. The exception loop provides a disciplined way to deviate from standards temporarily, with documented risk, compensating controls, ownership, and expiration. The drift loop checks whether reality still matches the program’s assumptions, detects mismatches, and tracks remediation until alignment is restored. These loops support each other, because change creates the need for exceptions, exceptions can reveal drift, and drift can drive new controls and better change triggers. When these loops are weak, the program becomes reactive and brittle. When they are strong, the program becomes steady and adaptable, which is the best combination you can have in a fast-moving organization. The loops also produce evidence, which helps prove the program is managing privacy responsibly over time.
As a final takeaway, sustaining privacy program performance is less about writing perfect policies and more about managing the messiness of real life. Change will happen whether the program likes it or not, so the program must connect to change decisions early and consistently. Exceptions will be needed, so the program must manage them in a way that prevents loopholes and drives remediation. Technical drift will occur, so the program must detect it through practical checks and restore alignment between documentation and reality. When you build these habits, you reduce surprises, you reduce emergency work, and you increase the likelihood that privacy promises remain true as systems evolve. For brand-new learners, the most important mindset shift is that privacy management is ongoing operations, not a one-time setup. A strong program is not the one that never faces exceptions or drift. A strong program is the one that expects them, manages them, and learns from them so performance stays reliable year after year.