Episode 40 — Perform gap analysis against laws, regulations, and accepted standards
In this episode, we’re going to bring a lot of the program pieces together by focusing on gap analysis, which is the disciplined way you figure out what you’re doing today, what you’re supposed to be doing, and what needs to change to close the distance. Gap analysis is not a vague feeling that we should improve; it is a structured comparison between your current privacy program and a set of expectations, such as laws, regulations, and accepted standards. Beginners sometimes think gap analysis is a one-time exercise you do at the beginning of a program, yet in practice it is a repeatable method you use whenever the environment changes, like when you expand into a new jurisdiction, adopt a new vendor, or add new uses of data. Gap analysis is also one of the most practical leadership tools in privacy management because it translates complex obligations into prioritized work, showing what is urgent, what is high risk, and what can be staged over time. When you learn to perform gap analysis well, you stop relying on assumptions and you start managing privacy like an operational system.
Before we continue, a quick note: this audio course is a companion to our course books. The first book is about the exam and explains in detail how best to pass it. The second book is a Kindle-only eBook containing 1,000 flashcards you can use on your mobile device or Kindle. Check them both out at Cyber Author dot me, in the Bare Metal Study Guides Series.
A gap analysis begins with choosing the reference set, meaning the laws, regulations, and standards you are comparing against, because a gap analysis without a clear reference is just opinion. Laws and regulations create the legal obligations that apply to your organization based on where individuals are, what you do, and what kinds of data you process. Accepted standards are frameworks and practices that are not always legally mandatory but are widely used to demonstrate maturity and defensibility, such as the International Organization for Standardization (I S O) privacy-related standards, System and Organization Controls (S O C) reporting expectations, and other recognized governance approaches. The point of using standards is not to decorate your program with acronyms, but to create a structured set of control expectations that can guide consistent improvement. It is also important to recognize that standards differ in focus, with some emphasizing governance and management systems while others emphasize specific controls and evidence. A mature gap analysis may include multiple reference sets, but it should still be clear about which gaps relate to legal requirements and which relate to chosen maturity targets. That clarity helps leaders prioritize and helps the organization explain its decisions.
Once the reference set is chosen, the next step is defining scope for the gap analysis, because scope determines what you are evaluating and how deep you will go. Scope should specify which data contexts are included, such as customer data, employee data, or marketing data, because different obligations may apply in different contexts. Scope should also specify which business units, products, or regions are included, because privacy practices often vary across the organization, and a gap analysis that assumes everything is uniform will miss important differences. Time scope matters too, because you may be evaluating current operations as they exist today, or you may be evaluating planned changes, such as a new product launch that will introduce new collection and sharing. Defining scope also includes identifying what is out of scope, not to hide problems but to prevent false conclusions that the analysis covered everything. This disciplined scoping helps avoid the common beginner mistake of trying to assess the entire enterprise at once and then producing vague results because the effort was too large. A well-scoped gap analysis is more actionable because it can be completed with rigor.
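To make scoping concrete, imagine recording it as a single shared structure that every reviewer works from. The Python sketch below is a minimal illustration, and every field name and value in it is a hypothetical assumption rather than a prescribed schema.

    # A minimal, illustrative scope record for one gap analysis cycle.
    # All field names and values are hypothetical, not a standard format.
    gap_analysis_scope = {
        "reference_sets": ["applicable privacy laws", "chosen maturity standard"],
        "data_contexts": ["customer data", "marketing data"],  # employee data deferred
        "business_units": ["ecommerce", "support"],
        "regions": ["EU", "US-CA"],
        "time_scope": "current operations",  # versus a planned product launch
        "out_of_scope": ["legacy archive system"],  # documented, not hidden
    }

Writing scope down this way makes the out-of-scope list explicit, which supports the point above about preventing false conclusions that the analysis covered everything.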
The next requirement is capturing your current state accurately, because you cannot identify gaps if you do not know what is actually happening. Current state is not what your policies say; it is what your systems and people do, and that means you need evidence. Evidence can include data inventories, flow maps, rights case records, incident response documentation, vendor contracts, training records, access review logs, retention schedules, and control test results. Interviews and walkthroughs are also important because they reveal how processes operate in practice and where informal workarounds exist. Beginners often assume they can perform a gap analysis from a policy binder, but that approach produces false confidence and misses operational reality. A mature program uses operational artifacts and sampling to confirm that documented procedures are actually followed. Current state work is often the most time-consuming part of gap analysis, but it is also the part that produces the most value because it reveals where the program is strong and where it is fragile.
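One concrete technique for confirming that documented procedures are actually followed is sampling. The short Python sketch below pulls a random, repeatable sample of rights case records for walkthrough review; the case identifiers and sample size are assumptions for demonstration only.

    import random

    def sample_cases_for_walkthrough(case_ids, sample_size=10, seed=None):
        # Pick a random sample of case records to verify during walkthroughs.
        # A fixed seed makes the selection repeatable for the audit trail.
        rng = random.Random(seed)
        k = min(sample_size, len(case_ids))
        return rng.sample(case_ids, k)

    # Example: sample ten closed rights requests from a hypothetical case log.
    sampled = sample_cases_for_walkthrough(
        [f"DSR-{i:04d}" for i in range(1, 201)], seed=42)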
With current state in hand, you translate the reference requirements into testable criteria that match your scope, because laws and standards are often written at a level that requires interpretation before you can compare them to operations. For example, a law might require that individuals can exercise certain rights within a timeframe, which becomes criteria like intake exists, identity verification is defined, case timelines are tracked, and fulfillment steps cover all relevant systems. A regulation might require transparency about sharing, which becomes criteria like notices describe recipient categories, inventories track vendors and roles, and changes trigger notice updates. A standard might require governance oversight, which becomes criteria like roles are defined, audits occur on a schedule, and corrective actions are tracked to closure. This translation step is where privacy program managers demonstrate maturity because they make requirements operational without losing meaning. The criteria should be written clearly so different reviewers would reach similar conclusions, reducing subjectivity. When criteria are clear, the analysis becomes fair and repeatable, which supports ongoing program improvement rather than one-off opinions.
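One way to keep criteria clear and repeatable is to record each one as a small, testable item that points back to its source requirement. The sketch below shows one hypothetical way to structure that in Python; the fields and example values are illustrative assumptions, not an official template.

    from dataclasses import dataclass

    @dataclass
    class Criterion:
        # A single testable criterion derived from a law or standard.
        criterion_id: str
        source_requirement: str  # citation back to the law or standard
        statement: str           # written so different reviewers agree
        evidence_expected: str   # the artifact that would demonstrate it

    rights_intake = Criterion(
        criterion_id="RIGHTS-01",
        source_requirement="example rights-of-access provision",
        statement="An intake channel exists and case timelines are tracked to a deadline.",
        evidence_expected="Case records showing intake dates and due dates.",
    )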
The comparison step is where you map each criterion to evidence and decide whether you meet it, partially meet it, or do not meet it, and this decision must be documented with reasoning. Meeting a criterion means the required capability exists and operates effectively, supported by evidence. Partially meeting means something exists but is incomplete, inconsistent, or not fully effective, such as a rights process that exists but lacks vendor coordination or consistent verification documentation. Not meeting means the capability is missing or fails in a way that creates significant risk, such as no reliable retention enforcement in systems that hold personal data. A common beginner mistake is to label everything as partially met to avoid uncomfortable conclusions, but that undermines prioritization because leaders cannot see what is truly urgent. The better approach is to be honest and precise, using evidence to support the rating and noting where additional evidence would change the assessment. This honesty is not pessimism; it is how you build a defensible plan that actually reduces risk.
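To keep ratings honest, it helps to force every conclusion to carry both a rating and written reasoning. The sketch below mirrors the three-level scale just described; the structure itself is an assumption about how a tracking tool might record findings.

    from dataclasses import dataclass
    from enum import Enum

    class Rating(Enum):
        MET = "met"                      # exists and operates effectively
        PARTIALLY_MET = "partially met"  # exists but incomplete or inconsistent
        NOT_MET = "not met"              # missing or fails in a risky way

    @dataclass
    class Finding:
        criterion_id: str
        rating: Rating
        reasoning: str        # no rating without documented reasoning
        evidence_refs: list   # the artifacts that support the rating

    finding = Finding(
        criterion_id="RIGHTS-01",
        rating=Rating.PARTIALLY_MET,
        reasoning="Intake and tracking exist, but vendor coordination is undocumented.",
        evidence_refs=["case log Q2", "vendor contract sample"],
    )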
After identifying gaps, you need to assess risk and priority, because not every gap carries the same consequences and you cannot fix everything at once. Risk assessment considers potential harm to individuals, likelihood of the gap causing a failure or incident, legal exposure, and operational impact. A gap that affects core rights fulfillment or breach notification is often high priority because it directly affects external accountability and has strict expectations. A gap that affects notice clarity may also be high priority because it shapes trust and can trigger regulatory scrutiny, especially if actual practices are not well described. A gap in vendor governance can be high priority if vendors handle sensitive data or if cross-border transfers are involved. Standards-related gaps may be prioritized based on strategic goals, such as achieving a certain maturity level or meeting customer assurance expectations. Prioritization should also consider dependencies, because some fixes unlock others, such as improving inventories and flow maps making retention and rights fulfillment improvements easier. When risk and dependencies are considered, the gap analysis becomes a roadmap rather than a list of problems.
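Scoring is one common way to make prioritization explicit rather than intuitive. The weighted sum below is purely illustrative; the factors come from the discussion above, but the one-to-five scale and the weights are assumptions a real program would tune to its own risk model.

    def priority_score(harm, likelihood, legal_exposure, operational_impact,
                       unlocks_other_fixes=False):
        # Each input is an assumed 1-5 judgment; weights are illustrative.
        score = 3 * harm + 2 * likelihood + 3 * legal_exposure + operational_impact
        if unlocks_other_fixes:
            score += 5  # dependency bonus: this fix enables other fixes
        return score

    # Example: an inventory gap that blocks retention and rights improvements.
    print(priority_score(harm=3, likelihood=4, legal_exposure=3,
                         operational_impact=4, unlocks_other_fixes=True))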
Turning gaps into a remediation plan is the step that makes gap analysis operational, because leaders do not fund gaps, they fund work. Each gap should be translated into concrete actions with owners, timelines, and required evidence of completion. Actions might include updating notices, redesigning rights intake workflows, tightening access controls, implementing retention automation, revising vendor contract templates, or building training for specific roles. The plan should distinguish between quick wins and structural work, because both matter, and quick wins can build momentum while structural changes take longer. The plan should also identify where decisions are needed, such as choosing a technical approach for deletion across systems or deciding whether to change a vendor that cannot meet requirements. A mature plan includes milestones and progress reporting so leaders can track movement and remove blockers. Without a plan, gap analysis becomes an academic exercise that produces anxiety but no improvement. With a plan, it becomes a governance tool that creates measurable progress.
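A remediation entry can be as simple as the sketch below, provided every gap ends up with an owner, a timeline, and defined completion evidence. The field names and values are hypothetical.

    from dataclasses import dataclass, field
    from datetime import date

    @dataclass
    class RemediationAction:
        gap_id: str
        action: str
        owner: str
        due: date
        quick_win: bool = False  # separates momentum-builders from structural work
        evidence_required: list = field(default_factory=list)

    action = RemediationAction(
        gap_id="RIGHTS-01",
        action="Add vendor coordination step to the rights fulfillment workflow",
        owner="privacy operations lead",
        due=date(2025, 9, 30),
        evidence_required=["updated procedure", "training completion", "sampled cases"],
    )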
Evidence expectations should be part of the remediation plan because closing a gap is not just doing work; it is proving the work had the intended effect. If you update a procedure, evidence might include training completion and sampled case records showing the new steps are followed. If you implement a control, evidence might include configuration records, logs showing the control operating, and validation tests confirming outcomes. If you renegotiate vendor terms, evidence might include executed agreements and confirmation of operational support for rights or deletion. Evidence planning also helps prevent the common pattern where teams implement a change but do not document it, making it hard to demonstrate compliance later. It also supports audit readiness because closed gaps can be retested to confirm they stay closed. When evidence is built into remediation, the program becomes self-verifying, reducing the need for emergency proof gathering when questions arise.
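A simple guard like the one below enforces that idea by refusing to mark a gap closed until every planned evidence artifact is actually on file. The logic is an illustrative assumption about how a tracking tool might behave.

    def can_close_gap(gap_id, evidence_required, evidence_collected):
        # A gap counts as closed only when all planned evidence exists.
        missing = sorted(set(evidence_required) - set(evidence_collected))
        if missing:
            return False, f"{gap_id} stays open; missing evidence: {missing}"
        return True, f"{gap_id} closed with verifiable evidence on file"

    ok, message = can_close_gap(
        gap_id="RIGHTS-01",
        evidence_required=["updated procedure", "training completion", "sampled cases"],
        evidence_collected=["updated procedure"],
    )
    print(message)  # still open: training and sampled cases not yet on file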
Gap analysis also needs to account for program sustainability, because closing gaps once is not enough if controls drift or if changes create new gaps. A mature approach therefore includes building ongoing controls into the plan, such as embedding privacy review into procurement, tying inventory updates to product changes, and implementing periodic control testing and attestations. Sustainability also includes training and awareness so people know new expectations, and it includes metrics so leaders can see whether improvements are holding. For example, if you improve rights operations, you should track whether timeliness and quality remain strong over time, not just at the moment you declare the gap closed. If you improve retention enforcement, you should measure whether data older than the retention threshold continues to decline and whether deletion jobs continue running as scheduled. Sustainability turns remediation into a stable capability rather than a temporary project. This is how gap analysis becomes a cycle that strengthens the program continuously.
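The retention example can be tracked with a metric as simple as the sketch below, which counts records older than the retention threshold at each measurement; a healthy program sees the number decline over time. The two-year threshold and sample dates are assumptions for illustration.

    from datetime import date, timedelta

    def overdue_record_count(record_dates, retention_days, as_of=None):
        # Count records older than the retention threshold as of a given day.
        as_of = as_of or date.today()
        cutoff = as_of - timedelta(days=retention_days)
        return sum(1 for created in record_dates if created < cutoff)

    # Example: a two-year retention policy against sampled creation dates.
    sample_dates = [date(2021, 3, 1), date(2022, 6, 15), date(2024, 1, 10)]
    print(overdue_record_count(sample_dates, retention_days=730))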
An important part of gap analysis is managing differences across jurisdictions without creating chaos, because laws can overlap and conflict in details like rights scope, timing, and definitions of sharing. A common strategy is to design program controls to meet the highest common requirement where practical, then layer jurisdiction-specific adjustments where necessary. For example, you might build a rights intake process that can route requests by jurisdiction and apply different timing and response requirements based on location. You might design notices with global core disclosures and region-specific addenda that reflect local obligations. You might design vendor management to meet strong baseline processor requirements, then add cross-border transfer safeguards where needed. The gap analysis should identify where harmonization is possible and where localized control is required, because that affects complexity and cost. This is a place where program managers must balance simplicity and accuracy, ensuring controls are both workable and legally defensible. A well-executed gap analysis helps leaders understand that complexity is sometimes unavoidable, but also shows where consistent design can reduce burden.
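The routing idea can be sketched as a lookup that layers jurisdiction-specific deadlines on top of one shared intake process. The jurisdictions and day counts below are placeholder assumptions, not statements of actual legal timelines, which should always come from counsel.

    from datetime import date, timedelta

    # Placeholder response windows per jurisdiction; real values come from counsel.
    RESPONSE_WINDOW_DAYS = {"EU": 30, "US-CA": 45}
    DEFAULT_WINDOW_DAYS = 30  # fall back to the most demanding window

    def due_date_for_request(jurisdiction, received):
        # One intake process, jurisdiction-specific deadlines layered on top.
        days = RESPONSE_WINDOW_DAYS.get(jurisdiction, DEFAULT_WINDOW_DAYS)
        return received + timedelta(days=days)

    print(due_date_for_request("EU", date(2025, 7, 1)))  # applies the EU window
    print(due_date_for_request("BR", date(2025, 7, 1)))  # falls back to the baseline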
Accepted standards play a special role in gap analysis because they help you evaluate maturity beyond minimum legal compliance, and they can support external trust even when not legally required. Standards often emphasize governance discipline, such as defined roles, documented processes, evidence-based controls, and continuous improvement cycles. By comparing your program to a standard, you can identify gaps that would not necessarily cause immediate legal failure but could lead to instability, such as lack of formalized auditing, weak corrective action tracking, or incomplete vendor oversight. Standards also provide a structured language for discussing improvements with leaders, because they frame gaps as control maturity issues rather than as abstract concerns. This can be especially useful when customers or partners expect certain assurance practices, because standards alignment becomes part of business credibility. The key is to be clear about why you are using a standard and what level of alignment you are targeting, because partial alignment can still be valuable as long as it is honest. Standards should guide improvement, not become a branding exercise.
As you close out this final episode in this sequence, the central lesson is that gap analysis is the method that turns privacy expectations into an actionable improvement plan grounded in evidence. By choosing a clear reference set of laws, regulations, and accepted standards, and by defining scope carefully, you ensure the analysis is meaningful and manageable. By capturing current state using real operational artifacts and by translating requirements into testable criteria, you avoid relying on policy language alone and you create fair, repeatable comparisons. By rating gaps honestly, assessing risk and dependencies, and converting gaps into owned remediation actions with evidence expectations, you create a roadmap leaders can govern and fund. By building sustainability through ongoing controls, metrics, and training, you prevent closed gaps from reopening as the organization changes. When performed with discipline, gap analysis becomes a cycle of continuous improvement that keeps the privacy program aligned with real operations across jurisdictions, resilient under scrutiny, and worthy of the trust people place in it when they share their data.