Last reviewed: 2026-04-11. Educational content only, not tax or legal advice. Confirm eligibility with qualified UAE counsel and tax advisers.
Frascati-aligned technical records: how to make UAE R&D documentation audit-ready
The UAE R&D credit does not define qualifying research in its own vocabulary. Instead, it adopts the internationally recognised framework of the OECD Frascati Manual, the same set of principles that underlies Canada's SR&ED programme, the UK's R&D relief regime, and most other national R&D tax incentives. If your engineering team has gone through a CRA technical review or built a Form 6765 support file, you already understand the logic. The vocabulary shifts slightly; the underlying discipline of capturing what you tried, what you did not know, and what you learned does not.
1. What the Frascati Manual actually is
First published in 1963 and periodically updated by the OECD, the Frascati Manual is the international standard for measuring research and experimental development. It defines R&D as comprising creative and systematic work undertaken in order to increase the stock of knowledge, including knowledge of humankind, culture and society, and to devise new applications of available knowledge. The definition deliberately excludes routine activities, quality control, standard testing, and work whose outcome is not genuinely uncertain.
Tax regulators that adopt Frascati as their eligibility standard, including the UAE, are essentially asking: did you do something that adds to the stock of scientific or technical knowledge, and can you prove it through contemporaneous records? The question sounds abstract but becomes very concrete in a review: show me the hypothesis, show me the experiment, show me what you measured, show me what failed, show me what you learned.
2. The five criteria, in plain engineering terms
The Frascati Manual identifies five characteristics that qualifying R&D must exhibit. Understanding each one as an engineering question, not a legal concept, makes documentation much more natural.
Novel
Your project must advance beyond what is already publicly known in the relevant field. This does not mean your work needs to be academic research or publishable. It means that, at the time you started, the specific technical problem you were solving did not have an established, documented answer that a competent engineer in your field could simply apply. A novel system architecture is one where the combination of constraints (performance targets, data characteristics, cost envelope, regulatory requirements) meant that existing documented approaches were genuinely insufficient, not merely inconvenient.
A useful test: could a senior engineer in your domain, given time to research, have designed your solution from existing public knowledge without running your experiments? If yes, the work is likely not novel in the Frascati sense, even if it was difficult to implement.
Creative
The creative criterion is closely related to novelty but emphasises that the work involved original thinking, not just diligent application of known methods. In engineering contexts, this tends to show up in the decisions that were made about how to frame the problem. A team that identified a genuinely new way to decompose a performance bottleneck, a framing that was not in the literature, satisfies the creative criterion more clearly than one that systematically tried every known technique.
In practice, creativity in a technical file is evidenced by design documents that show alternative framings were considered and rejected, not just alternative implementations. Architecture decision records that explain why the problem was approached as it was, rather than just what was built, are strong creative evidence.
Uncertain
This is the criterion that trips up the most documentation efforts, and it is also the one that distinguishes genuine R&D from product development that happens to be difficult. Technological uncertainty means that, at the outset, it was not knowable in advance, even by an expert, whether the objective was achievable, how it could be achieved, or how long it would take. It is not project risk. It is not market uncertainty. It is a specific kind of epistemic gap: the outcome of the investigation is not determinable from existing knowledge.
The strongest documentation of uncertainty is contemporaneous. A ticket created at the start of a project that says "we do not know if approach X will meet the 50ms latency target under production load" is better evidence of uncertainty than a narrative written at year-end claiming the team faced uncertainty throughout. Reviewers understand retroactive storytelling, and Frascati documentation built after the fact tends to look like exactly that.
Failure is one of the best proxies for uncertainty. If your team's records show paths tried and abandoned, with measurements that motivated the abandonment, those records demonstrate that the outcome was genuinely uncertain. A file with no documented failures often signals either that the work was not genuinely uncertain, or that the uncertainty was real but was not captured.
Systematic
Systematic investigation means the work was conducted according to a plan, with hypotheses, controlled variables, and measurement. It does not need to follow a formal scientific protocol. Engineering teams rarely run randomised controlled trials, but the work still needs to be distinguishable from informed tinkering. The systematic criterion is what separates R&D from talented problem-solving.
In practice, systematic investigation leaves traces. A sprint board that shows epics structured around specific technical questions, with acceptance criteria defined in terms of measurable outcomes, is systematic. A standup history showing three weeks of debugging followed by a solution is not, on its own, systematic, even if the underlying work was excellent. The difference is the documented framework within which the debugging occurred.
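Acceptance criteria "defined in terms of measurable outcomes" can be encoded directly in the test suite, so the criterion and its measurement are both contemporaneous artifacts. A minimal sketch, assuming an illustrative 50 ms p99 target and made-up sample data:

```python
import math

def p99(latencies_ms):
    """Return the 99th-percentile latency (nearest-rank method) from samples in ms."""
    if not latencies_ms:
        raise ValueError("no samples")
    ordered = sorted(latencies_ms)
    # Nearest rank: the smallest value with at least 99% of samples at or below it.
    rank = math.ceil(0.99 * len(ordered))
    return ordered[rank - 1]

def meets_target(latencies_ms, target_ms=50.0):
    """Acceptance criterion stated as a measurable outcome, not a vague goal."""
    return p99(latencies_ms) <= target_ms

# Illustrative run: 100 samples with one slow outlier in the tail.
samples = [12.0] * 98 + [49.0, 180.0]
# p99(samples) == 49.0, so meets_target(samples) is True despite the outlier.
```

A check like this, versioned alongside the load-test configuration, is exactly the kind of trace that makes the investigation look systematic rather than ad hoc.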
CRA reviewers applying SR&ED criteria look for the same thing, which they call the experimental loop: hypothesis, build, measure, compare against prediction, update the model. The UAE file should be able to answer the same implicit question without requiring the engineering team to fabricate structure that was not there.
Transferable or reproducible
The fifth criterion asks whether the knowledge created by the R&D is, in principle, transferable or reproducible. This does not mean the results must be published or shared. It means the work produced genuine understanding, not just a working product, and that understanding is captured in a form that another qualified engineer could use to understand what was done and why it worked. Internal reproducibility is sufficient; external dissemination is not required.
Practically, this criterion is about documentation depth. If your team built a novel inference pipeline and the only record is the code, the knowledge created by the project exists in one engineer's head. If the team also wrote postmortems, documented the failed approaches, and recorded why the final design beats the alternatives on the specific metrics that matter, that knowledge is transferable.
3. Translating Frascati into your existing engineering workflows
The most common mistake in preparing Frascati-aligned documentation is treating it as a separate deliverable: a set of documents produced specifically for the tax claim, after the fact, by people who were not doing the engineering. That approach is both expensive and fragile. It is expensive because it requires significant effort to reconstruct a narrative from incomplete records. It is fragile because reviewers are experienced at identifying retrospective construction: the dates do not quite line up, the specificity is uneven, and the failure analysis is conspicuously sparse.
The better approach is to build Frascati capture into the workflows the engineering team already uses. Most mature engineering teams already produce the raw material of a strong Frascati file: tickets with problem statements, architectural decision records, test results, postmortems, sprint reviews, design documents. The gap is usually that this material is not structured to answer the specific Frascati questions, is not linked to pre-approved project codes, and is not retained in a form that survives the seven-year documentation window required under UAE rules.
Concretely: when a new sprint epic is created for a project that is likely to be R&D-eligible, the description should capture the technical unknowns at that point in time, not just the feature goal. When an architectural decision record is written, it should explicitly address why the alternatives were insufficient given the constraints, not just which alternative was chosen. When a performance investigation closes, the postmortem should record what the team now knows that it did not know at the start. None of this requires additional tools; it requires a slightly different convention for using the tools the team already has.
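A convention like this can even be enforced cheaply, for example by a CI or webhook script that flags R&D-tagged epics whose descriptions are missing the agreed capture sections. The section headings below are hypothetical, not a prescribed format; the point is only that the check is mechanical:

```python
# Hypothetical convention: epics tagged as R&D-eligible must contain these
# sections, written at creation time, before any results are known.
REQUIRED_SECTIONS = (
    "Technical unknowns",
    "Hypothesis",
    "How we will measure",
)

def missing_sections(description: str) -> list[str]:
    """Return the required Frascati-capture sections absent from an epic description."""
    lowered = description.lower()
    return [s for s in REQUIRED_SECTIONS if s.lower() not in lowered]

epic = """Goal: cut gateway p99 latency.
Technical unknowns: can async batching hold 30 ms p99 under Gulf peak load?
Hypothesis: batching amortises connection setup without breaking request ordering.
How we will measure: load test at 2x peak, archived Grafana dashboard per sprint.
"""
# missing_sections(epic) == [] : the unknowns are captured up front.
```

A failing check does not mean the work is ineligible; it means the contemporaneous record is incomplete while it is still cheap to fix.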
4. An illustration: three months of API latency work
A fictional gateway team spends a quarter testing three different async processing architectures to meet a 30ms p99 latency target under Gulf-region peak load conditions. Two architectures fail under realistic load tests; the third meets the target with adjustments.
A weak file for this project is a slide deck titled "Performance Initiative Q2" with a green checkmark, some benchmark graphs, and a bullet listing the chosen architecture. It does not document what was uncertain at the start, why the first two approaches failed, or what the team learned that it did not know in January. A reviewer looking at this file cannot determine whether the work was R&D or just careful engineering against known techniques.
A strong file contains Jira epics structured around each architecture, with the hypothesis for each approach stated at creation time. It contains Grafana dashboards archived per sprint, showing how measurements evolved. It contains load-test configurations in Git, tagged to release branches, so results are reproducible. It contains postmortems for each failed architecture: brief, honest, and specific about what the measurement showed and what that implies about the design space. It contains a one-page Frascati narrative that connects those artifacts to the criteria: what was uncertain, what was systematic about the investigation, what knowledge was created that was not available at the start.
AutoDoc's integrations exist to keep that strong file continuous, capturing the artifacts as engineering work proceeds rather than reconstructing them at year-end. The same principle applies to SR&ED files in Canada, where contemporaneous evidence consistently outperforms retrospective reconstruction in CRA technical reviews.
5. Common documentation mistakes, and how to avoid them
The most frequent gap in UAE R&D files is the absence of documented failure. Teams are understandably reluctant to write up dead ends; it can feel like admitting wasted effort. But from a Frascati perspective, failure is the clearest evidence of genuine uncertainty. A project that succeeded on the first attempt, with no documented pivot, is harder to frame as R&D than one with three documented approaches and clear reasons why the first two were abandoned.
A second common gap is conflation of R&D and product development at the project level. Many engineering initiatives contain both: a genuinely novel component where the technical outcome was uncertain, and a larger implementation effort where the approach was known and the work was execution. Mixing these together, including all project costs in the R&D claim, is a significant risk. The file needs to clearly identify which sub-activities were R&D, with specific allocation reasoning, rather than treating the project as homogeneously qualifying.
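The allocation reasoning becomes much easier to defend when hours are tagged at the sub-activity level rather than the project level. A worked illustration, with entirely hypothetical activity names, hours, and an illustrative fully-loaded hourly rate:

```python
# Hypothetical sub-activity ledger: hours tagged per activity, not per project,
# so only the genuinely uncertain R&D work enters the claim.
timesheet = [
    {"activity": "novel-batching-experiments", "rd": True,  "hours": 320},
    {"activity": "load-test-harness",          "rd": True,  "hours": 110},
    {"activity": "ui-integration",             "rd": False, "hours": 540},
    {"activity": "rollout-and-docs",           "rd": False, "hours": 180},
]

HOURLY_COST = 400  # illustrative fully-loaded rate, AED per hour

rd_hours = sum(row["hours"] for row in timesheet if row["rd"])
total_hours = sum(row["hours"] for row in timesheet)
rd_cost = rd_hours * HOURLY_COST

# 430 of 1,150 hours are R&D; claiming costs for all 1,150 would
# treat the project as homogeneously qualifying, which it is not.
```

The numbers are invented, but the structure is the point: the claim traces to tagged activities, and the non-qualifying majority of the project is visibly excluded.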
A third gap is the absence of a clear baseline. The novelty criterion requires that the work advances beyond the current state of the art. To demonstrate this, the file needs to articulate what the state of the art was at the start of the project, not just what the team built. Architecture decision records that begin with a survey of existing public approaches, and explain why those approaches were insufficient, are strong novelty evidence precisely because they establish the baseline.
6. Seven-year retention: what to store and where
UAE Corporate Tax rules require technical and financial records to be maintained for seven years. For an engineering team, this means the artifacts that form the R&D file (tickets, commits, test results, design documents, postmortems) need to survive staff turnover, tooling migrations, and the ordinary entropy of a fast-moving organisation.
Practically, this means immutable archives of CI artifacts for relevant projects, exports of ticket history from project management tools (including ticket descriptions as they were written, not just their final state), versioned design documents with edit history, and time-allocation records linking specific employees to specific projects and activities. If your data retention policy currently allows engineers to delete repositories after two years, or project management data to be purged when a project closes, that policy needs to be revised before you make a credit claim that depends on those records.
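One simple way to make such archives tamper-evident is to write a hash manifest alongside each export, so that years later you can demonstrate the records have not been modified or silently lost. A minimal sketch using only the Python standard library (the directory layout and manifest filename are assumptions, not a required format):

```python
import hashlib
import json
from pathlib import Path

MANIFEST_NAME = "MANIFEST.json"

def build_manifest(export_dir: Path) -> dict:
    """SHA-256 every file under export_dir so later integrity checks can
    detect modification or loss during the seven-year retention window."""
    manifest = {}
    for path in sorted(export_dir.rglob("*")):
        if path.is_file() and path.name != MANIFEST_NAME:
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
            manifest[str(path.relative_to(export_dir))] = digest
    return manifest

def write_manifest(export_dir: Path) -> Path:
    """Store the manifest next to the archive, in a portable JSON format."""
    out = export_dir / MANIFEST_NAME
    out.write_text(json.dumps(build_manifest(export_dir), indent=2, sort_keys=True))
    return out
```

Re-running `build_manifest` at review time and diffing against the stored manifest gives a quick, auditable answer to "are these the records as originally exported?"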
Cloud-based tooling creates both an opportunity and a risk here. The opportunity is that audit trails exist natively: commit histories, ticket timestamps, and CI logs are all timestamped and often stored redundantly. The risk is that they exist in vendor systems that may change, or in formats that are difficult to export comprehensively years later. A proactive archiving approach, with regular exports in portable formats stored in a location your organisation controls, protects against both vendor changes and accidental deletion.
7. Pre-filing checklist
Before treating any project's expenditure as R&D-eligible for UAE Corporate Tax purposes, the file should be able to answer the following questions from contemporaneous records, not from recollection or reconstruction:
- What were the specific technological unknowns at the project's start date, stated in writing before results were known?
- What alternatives were considered, and why were they rejected based on evidence?
- What experiments or investigations were conducted, and what did each one measure?
- Which approaches failed, and what did those failures demonstrate about the design space?
- Which personnel contributed to the R&D activities, and can their time allocation to those activities be traced from payroll and timesheet records?
- Can the costs claimed be traced from financial records to specific R&D activities?
- Has the project received UAE R&D Council pre-approval, and is that correspondence in the file?
A "no" to any of these questions before filing is a signal to either strengthen the file or revisit the scope of what is being claimed, not to attempt to reconstruct the evidence after the fact.
AutoDoc provides software and educational content, not a determination of eligibility or compliance advice. Confirm your projects' qualification status with qualified UAE tax counsel.