The four-hour clock starts before you understand the incident: notes on DORA Article 17 in practice
Field notes from regulated-industry infrastructure work — on why incident classification is harder than the regulation makes it look, and why the tooling you choose to help is itself a compliance decision.
There is a specific moment, somewhere around 02:00, that DORA Article 17 is really about. An alert has fired. Something is degraded — maybe a payment gateway, maybe an integration layer, maybe a database that three other services lean on. The on-call engineer knows something is wrong. What nobody in the room knows yet is whether this is a "major incident" under the Digital Operational Resilience Act, and therefore whether a regulatory clock has already started ticking.
That ambiguity — the gap between "we have a problem" and "we have a reportable problem" — is where most of the real difficulty of DORA incident reporting lives. I want to walk through why, because in the regulated-industry infrastructure work I do, this is the part teams consistently underestimate.
What Article 17 actually asks of you
DORA's incident-management requirements sound procedural when you read them: establish a process to monitor, log, and classify ICT-related incidents; determine which ones are major; report those to your competent authority on a defined timeline. Article 18 and the Regulatory Technical Standards then give you the classification criteria.
The criteria themselves are not vague. The RTS lays out the dimensions you assess: the number of clients and financial counterparts affected, the amount of data lost, the reputational impact, the duration and service downtime, the geographical spread across member states, the economic impact, and the criticality of the services hit. There are thresholds. There is materiality guidance. On paper, it is a structured, multi-criteria evaluation.
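To make the shape of that evaluation concrete, here is a minimal sketch in Python. The criterion names follow the dimensions above, but the field names and especially the threshold figures are illustrative stand-ins, not the actual RTS values, which you would take from the current text of the classification RTS.

```python
from dataclasses import dataclass

@dataclass
class ImpactAssessment:
    """One snapshot of what is currently believed about the incident.
    Field names are illustrative; None means 'not yet known'."""
    clients_affected: int | None
    clients_total: int            # assumed > 0: the entity's total client count
    data_losses: bool | None
    downtime_minutes: int | None
    member_states_affected: int | None
    critical_service_hit: bool | None

def crossed_criteria(a: ImpactAssessment) -> dict[str, bool | None]:
    """Per criterion: True (threshold crossed), False (not crossed),
    or None ('cannot say yet', which is a distinct answer from 'no').
    Threshold figures below are placeholders, not the RTS values."""
    share = None if a.clients_affected is None else a.clients_affected / a.clients_total
    return {
        "clients_affected":    None if share is None else share > 0.10,
        "data_losses":         a.data_losses,
        "duration":            None if a.downtime_minutes is None else a.downtime_minutes > 120,
        "geographical_spread": None if a.member_states_affected is None
                               else a.member_states_affected >= 2,
        "critical_services":   a.critical_service_hit,
    }
```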
The problem is not the criteria. The problem is applying them to a live incident, with partial information, while the incident is still unfolding — and doing it consistently enough that two different people on two different nights would reach the same answer.
Why classification is the hard part
Three things make Article 17 classification genuinely difficult in practice, and none of them are about not knowing the regulation.
The clock starts before the picture is complete. The reporting timeline for a major incident is tight — an initial notification measured in hours, not days. But incident comprehension does not move at the same speed as the regulatory clock. You often have to decide whether an incident is major before you fully understand its scope. That means classification is not a one-time decision you make once you have the facts; it is a rolling judgment you revise as the facts arrive, and every revision has reporting consequences.
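A sketch of what that rolling judgment looks like mechanically, reusing the illustrative crossed_criteria helper above: every new snapshot of the facts is re-evaluated, every revision is kept, and a flip in classification is itself an event with reporting consequences. The structure is the point here, not the specific fields.

```python
from datetime import datetime, timezone

def reassess(history: list[dict], assessment: ImpactAssessment) -> dict:
    """Re-run classification against the latest snapshot and append the
    outcome to the incident's revision history. Every revision is kept:
    a later flip from 'not major yet' to 'major' is exactly the event
    that carries reporting consequences."""
    criteria = crossed_criteria(assessment)
    classification = "major" if any(v is True for v in criteria.values()) else "not-major-yet"
    revision = {
        "at": datetime.now(timezone.utc).isoformat(),
        "criteria": criteria,
        "classification": classification,
        "unresolved": [k for k, v in criteria.items() if v is None],
        "changed": bool(history) and history[-1]["classification"] != classification,
    }
    history.append(revision)
    return revision
```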
The criteria require correlation, not just observation. "Number of clients affected" is not a number that sits in a dashboard. It is something you derive by correlating a service degradation against customer-impact data, session logs, transaction volumes, and downstream dependencies. "Geographical spread" means knowing which member states the affected service actually touches. The raw telemetry tells you a Tomcat node is throwing errors; it does not tell you that this maps to a reportable threshold. Someone — or something — has to do that translation, and the translation is where inconsistency creeps in.
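That translation step might look something like the sketch below. The inputs (a service catalogue, active-session data, a service-to-member-state mapping) are assumptions for illustration; in practice they are joins across whatever systems actually hold that knowledge, and the join, not the alert, is where the work is.

```python
def estimate_client_impact(
    degraded_service: str,
    service_clients: dict[str, set[str]],   # service -> client IDs it serves (catalogue)
    active_sessions: dict[str, set[str]],   # service -> clients seen in the incident window
    service_regions: dict[str, set[str]],   # service -> EU member states it touches
) -> dict:
    """Translate 'service X is degraded' into the numbers the criteria ask for."""
    exposed = service_clients.get(degraded_service, set())
    observed = active_sessions.get(degraded_service, set())
    return {
        "clients_potentially_affected": len(exposed),
        "clients_observed_affected": len(exposed & observed),
        "member_states": sorted(service_regions.get(degraded_service, set())),
    }
```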
It is a medium-knowledge task, which is the worst kind to automate badly. If incident classification were trivial, you would not need expertise. If it were purely expert judgment, you would not be tempted to systematize it. It sits in the uncomfortable middle: structured enough that you want to support it with tooling, judgment-heavy enough that naive automation produces confident, wrong answers. And confident-wrong is the dangerous failure mode, because the human reviewing the output is under the same 02:00 time pressure and is inclined to accept a plausible-looking classification rather than relitigate it.
That last point deserves emphasis. The instinct is to reach for automation to make the four-hour clock survivable. That instinct is correct. But automation that produces a classification without showing its reasoning, without citing which criteria it evaluated and which evidence it used, does not actually reduce your risk — it relocates it. You have traded "we might classify inconsistently" for "we might rubber-stamp a machine's inconsistency." The asymmetry of consequences is brutal here: a missed major-incident report is a supervisory problem with real financial penalties attached; a false positive is a wasted hour. Tooling that does not respect that asymmetry is not helping.
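One way tooling can respect that asymmetry is to encode it directly in the decision rule, as in this sketch: a criterion that cannot yet be evaluated escalates to a human rather than silently defaulting to "not major". The function and its labels are illustrative, not a reference implementation.

```python
def recommend_action(criteria: dict[str, bool | None]) -> str:
    """Encode the consequence asymmetry in the decision rule itself:
    an unanswerable criterion escalates as 'potentially major' instead
    of defaulting to 'not major', because the missed report is the
    expensive error and the wasted review hour is the cheap one."""
    if any(v is True for v in criteria.values()):
        return "classify-major-and-start-reporting"
    if any(v is None for v in criteria.values()):
        return "escalate-to-human-as-potentially-major"
    return "record-as-non-major-with-evidence"
```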
The reporting chain is not just the first notification
One more thing that gets underestimated: Article 17 is not a single event. A major incident generates an initial notification, then intermediate reports as the situation develops, then a final report with root cause. Each of those is a structured submission to a national competent authority — and "structured" is doing a lot of work in that sentence. Your output has to match what the authority expects to receive.
This means the back half of incident reporting is fundamentally a data-shape problem. You are taking a messy, evolving operational reality and rendering it into a specific schema, repeatedly, under time pressure, with an audit trail that has to survive later scrutiny. The audit trail matters as much as the report: when a supervisor or an auditor reviews the incident months later, "we decided it was major" is not an answer. "We decided it was major because these specific criteria crossed these specific thresholds, evidenced by these specific records, at this specific time" is an answer. If your process cannot reconstruct the why, you do not really have a compliant process — you have a compliant-looking one.
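A sketch of what that data-shape-plus-audit-trail point means in practice, continuing from the revision history in the reassess sketch above. The IncidentReport fields are placeholders; the real schema is whatever reporting templates your competent authority expects.

```python
from dataclasses import dataclass, field

@dataclass
class IncidentReport:
    """One submission in the Article 17 chain. These fields are
    illustrative stand-ins, not the authority's actual template."""
    stage: str                          # "initial" | "intermediate" | "final"
    classified_major_at: str            # when the classification decision was made
    criteria_crossed: dict[str, bool]   # which criteria, specifically
    evidence_refs: list[str] = field(default_factory=list)  # log IDs, dashboards, tickets
    narrative: str = ""

def render_report(stage: str, history: list[dict], evidence_refs: list[str]) -> IncidentReport:
    """Render the current revision history into a submission-shaped record.
    The point is reconstruction: which criteria crossed, on what evidence,
    decided at what time -- not just the conclusion."""
    latest = history[-1]
    return IncidentReport(
        stage=stage,
        classified_major_at=latest["at"],
        criteria_crossed={k: True for k, v in latest["criteria"].items() if v is True},
        evidence_refs=evidence_refs,
    )
```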
The part nobody puts on the architecture diagram: your tooling is in scope too
Here is where I want to make the turn, because this is the observation that I think is most often missed.
If you bring in tooling to help with Article 17 — a platform, a service, an AI-assisted classifier, anything — that tooling is now an ICT third-party service supporting what is very plausibly a critical or important function. Which means it falls inside DORA's third-party risk regime. Which means it goes in your Register of Information. Which means its own subcontracting chain, its own data flows, its own jurisdictional exposure are now your concern, assessed under Article 28 and the subcontracting RTS.
This produces a slightly uncomfortable recursion: the tool you adopt to help you comply with DORA is itself a DORA compliance surface. And it is one with a sharp edge, because a lot of modern compliance tooling is AI-assisted, and a lot of AI-assisted tooling routes inference through large US cloud providers.
That is not automatically disqualifying. But it is automatically assessable, and the assessment has gotten harder. Over the past year the question of whether EU data residency is the same thing as EU sovereignty has been pulled apart in public — most visibly when a major US cloud provider's own legal representatives told a national parliament, under questioning, that they could not guarantee EU-hosted data would never be compelled by non-EU authorities. Physical data location and legal jurisdiction turned out to be two different promises. For most workloads that distinction is academic. For an incident-classification system processing the operational guts of a regulated financial entity, it is not — that is precisely the data a concentration-risk and third-party-exposure assessment is supposed to scrutinize.
So if you are evaluating anything that touches Article 17, the questions worth asking early:
- Where does the actual processing happen, and who legally controls the entity doing it? Not "where is the data stored" — who can be compelled to hand it over, by whom, under what law.
- What is the subcontracting chain? If the tool uses a model provider, and the model provider uses a cloud provider, you have a fourth party. DORA expects you to see down that chain.
- Can it show its work? A classification you cannot audit is a liability dressed as an efficiency.
- What happens when you want to leave? Exit and substitutability are explicit DORA concerns. "We can switch this off and stand up an alternative" should be a real answer, not a hopeful one.
- Does the architecture respect the consequence asymmetry? Does it treat a potentially missed major-incident report the way the regulation does — as the expensive error — or does it optimize for looking smooth?
None of these are exotic. They are the same third-party-risk questions DORA already asks you to apply to every other ICT provider. The point is just that compliance tooling does not get a pass on them — and the AI-assisted kind, specifically, needs the jurisdictional question asked out loud rather than assumed away.
Where this leaves you
DORA Article 17 is, underneath the procedural language, a regulation about making good classification decisions fast and being able to defend them later. The four-hour clock is the forcing function; the RTS criteria are the rubric; the Register of Information is the receipt.
The teams that handle it well are not the ones with the most tooling. They are the ones who understood early that classification is a rolling judgment under uncertainty, that the audit trail is as important as the report, and that every tool they bring in to help is also a tool they are now accountable for. The teams that struggle are the ones who treated incident reporting as a form-filling exercise and discovered, at 02:00, that the form was the easy part.
If you are building or buying in this space right now, my one piece of advice: spend as much time on how the decision gets made and evidenced as you do on how the report gets filed. The filing is mechanical. The decision is the regulation.
These are general observations from infrastructure and compliance work in regulated industries — banking, payments, and adjacent fintech — and are not legal advice. DORA implementation specifics should be worked through with your own compliance and legal teams against the current text of the regulation and its Regulatory Technical Standards.