

The Excel Sheet Shall: Why Engineering Teams Keep Coming Back to Spreadsheets
Engineering
May 8, 2026
Engineers who write requirements for flight computers, autonomous vehicles, and spacecraft headed to Mars are often managing those requirements with a process that has no requirements of its own.
If you wrote them down, they might read like this:
The requirements traceability matrix shall be stored in a shared Excel file. The responsible engineer shall manually update all downstream documents upon requirement change. All paragraph references shall be verified by hand after any document revision. Version control shall be achieved by appending _v2_FINAL_REVISED to the filename.

Most teams haven't engineered how they manage their own requirements. Not because they don't see the problem. Because they've tried to fix it, and what they found was worse.
The tool graveyard
DOORS, JAMA, Polarion, Codebeamer. These teams know the purpose-built platforms exist. Many have evaluated them. Some have deployed them. The verdict lands in roughly the same place every time: too expensive, too rigid, too slow to configure, and locked in.
One engineer, who eventually built his own toolchain from scratch, put it this way: "I don't want to be locked in to the point where the only thing I can get out of a requirements tool is another PDF."
The platforms built to solve the requirements problem have, for a wide range of engineering teams, become their own problem. So teams look elsewhere. The first place they look is the stack they already have.
Pushing the existing stack to its limits
Before anyone approves a new tool purchase, there's a phase where the team tries to make what they already have work. They have Confluence. They have Jira. They have Notion or Airtable or whatever management approved two years ago.
In Confluence, this looks like page hierarchies that mirror a requirements structure: system requirements on one page, subsystem requirements on child pages, test cases somewhere else. Tables inside pages with columns for requirement IDs and compliance status. Manual links between everything. And then someone discovers Requirement Yogi, a Confluence plugin built specifically to layer requirements management on top of a wiki: traceability links between pages, coverage reports, suspect flags when something upstream changes. Teams adopt it with real hope. For a while it delivers. But it's still an accessory bolted onto a documentation tool. The requirements live in Confluence pages. The data is prose with metadata attached. Teams push Requirement Yogi to its absolute limit and eventually hit the same wall: the underlying system wasn't built to be a graph, and no plugin changes that.
Jira takes a different path to the same dead end. Custom issue types become requirement types. Epics are stakeholder needs. Stories are system requirements. Sub-tasks are test cases. It almost maps, until you need to trace a component-level requirement back to the customer need it satisfies, and that relationship is five link hops through issue types Jira was never designed to connect. The query to find everything impacted by a single upstream change doesn't exist. You get a list of linked issues and start reading.
Notion databases and Airtable grids attract teams that want something more structured than a document and more flexible than Jira. Some build genuinely impressive setups: relational databases approximating a traceability matrix, filtered views surfacing open items by subsystem or owner. Then the product gets complex enough that the relationships stop fitting in a flat grid, and someone has to decide whether to invest another month of configuration or admit they've hit the ceiling.
The pattern is the same in every case. A general-purpose tool gets pushed into a job it wasn't built for. It works long enough that the workaround becomes the process. Then it fails in ways that are hard to untangle, because it's embedded in hundreds of pages, thousands of tickets, or a Notion database that only one person fully understands. Teams don't leave these tools because the tools are bad. They leave because maintaining the workaround has become the problem it was supposed to solve.
So they cobble
The cobbled-together approach is not ignorance. It's a rational response to having been burned.
When a platform slows you down more than it helps, or when the procurement cycle outlasts the project, going back to Excel and Word is a deliberate trade: scalability problems later for velocity right now. The spreadsheet doesn't fight you. It doesn't need a three-month implementation. It doesn't lock your data in a proprietary format. It does what you tell it to.
In practice: requirements in a spreadsheet (or several). Compliance narratives in Word. Test cases in a separate document. Paragraph references manually noted and manually maintained. Change communication over email or verbal in standup. Version control by file naming convention or, for the more adventurous, Git.
At an Italian aerospace and rail supplier, it was a carefully structured Excel traceability matrix: requirement ID, compliance statement, satisfying document, validation method, test report, responsible engineer. Refined over years. It worked, within limits. Those limits surfaced when a customer updated a referenced standard to a new revision, and the team had to manually re-evaluate compliance work done years earlier, potentially from scratch, even when nothing had changed in substance. "If I would have a system to automatically trace it, I would recall the compliance that I already managed to give maybe two years ago and I don't remember." They also knew that if a paragraph number shifted in a technical specification, every test handbook reference tied to it broke silently. No signal until someone went looking.
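That silent breakage is mechanical, which means it's checkable, if the references are machine-readable at all. A minimal sketch of such a check, assuming a reference map and a set of current paragraph numbers (both shapes are invented for illustration, not this team's actual format):

```python
def find_dangling(references, spec_paragraphs):
    """Return references that cite paragraph numbers absent from the spec.

    `references` maps a document location to the spec paragraph it cites,
    e.g. {"test handbook §4.2": "3.1.5"}; `spec_paragraphs` is the set of
    paragraph numbers in the current spec revision. Both are assumed,
    illustrative shapes.
    """
    return {loc: para for loc, para in references.items()
            if para not in spec_paragraphs}


refs = {"test handbook §4.2": "3.1.5", "test handbook §4.3": "3.2.1"}
current_spec = {"3.2.1", "3.2.2"}  # §3.1.5 was renumbered away
print(find_dangling(refs, current_spec))  # → {'test handbook §4.2': '3.1.5'}
```

A dozen lines, but only if the references exist as data somewhere. In a Word document, they don't.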
At a clean energy startup managing requirements across system, subsystem, and component levels, the situation was disparate spreadsheets and verbal communication as the feedback loop. A systems engineer described the workflow: translating system-level requirements into architecture, decomposing into subsystems, cascading down to component parameters. "And all that gets, right now, documented in spreadsheets." When a design review reveals that a requirement can't be met, that change needs to propagate back up through the entire chain. In a spreadsheet system, it propagates through whoever remembers to update it.
At a space computing company shipping hardware to spacecraft headed to Mars and the moon, the engineering lead had decided neither the spreadsheet model nor the available platforms were acceptable. He'd started building from scratch: structured text documents, custom Python scripts to import requirements from whatever format a customer sent, JSON extraction, Git version control, Robot Framework test output annotated with requirement IDs. An enormous engineering effort applied to the problem of managing engineering work. "One of the wonderful things about digital systems is that the tool doesn't have to be perfect... if it can save me time and money and I don't have to maintain another tool." The irony: he was maintaining exactly that. A whole suite of tools, because the alternatives were worse.
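The core of a pipeline like that is usually small: load requirements from some structured export, scan test output for requirement tags, and diff the two. A hedged sketch of the pattern, with the JSON shape and the `[REQ-XXX-NNN]` tag convention invented for illustration (not this engineer's actual formats):

```python
import json
import re

# Assumed tag convention: test log lines carry markers like "[REQ-SYS-042]".
REQ_TAG = re.compile(r"\[([A-Z]+-[A-Z]+-\d+)\]")


def load_requirements(path):
    """Load a {requirement_id: description} map from a JSON export."""
    with open(path) as f:
        return json.load(f)


def coverage_from_log(log_text, requirements):
    """Split requirement IDs into (covered, untested) based on tags in test output."""
    seen = set(REQ_TAG.findall(log_text))
    covered = seen & set(requirements)
    untested = set(requirements) - seen
    return covered, untested
```

The catch isn't writing this. It's that the regex, the JSON schema, the import scripts for each customer's format, and the Git conventions all become infrastructure someone has to maintain.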
This is where a lot of technically sophisticated teams end up. Not in DOORS. Not in JAMA. In a Git repository with YAML files and a Python script that someone will eventually need to understand.
The cost of the cobble
The cobbled approach works until it doesn't. The failure modes are predictable.
Change propagation is invisible. When a requirement changes, or when a customer issues a new revision of a referenced standard, nothing downstream flags itself as affected. Engineers have to know what was connected, remember the trace structure that lives in their heads, and manually inspect everything that might be impacted. Often the customer catches it first: "We are notified by our customers, which use DOORS, and this makes us lose a lot of time reviewing and finding the guilty document or the guilty reference."
The bid phase becomes rework. When a customer changes the revision of a cited standard, teams redo compliance work from years earlier. Not because anything changed in substance, but because there's no way to retrieve the original reasoning and compare it against what's new. The default is starting over.
Copies multiply and drift. A new flight test, a new product variant, a new customer deliverable: each gets its own copy of the spreadsheet. Fifteen tests means fifteen copies. A change to a base requirement has to be applied fifteen times, by whoever remembers. "Uh-oh, now we've got to change 15 different things in 15 places." The authoritative version is whichever copy was updated most recently, and that's anyone's guess.
The data is structurally invisible. A flat spreadsheet and a Word document give an AI system almost nothing to structurally reason about. There's no graph, no typed relationship, no traversable connection between a stakeholder need and the test case that validates it. AI can summarize and search this data. It can't reason about it: not about coverage gaps, not about impact, not about what a change propagates to. That capability requires structure that doesn't exist in a spreadsheet.
From tables to graphs
The teams furthest along in rethinking this have landed on the same conclusion: requirements, at scale, are not a table. They're a graph. And a graph needs to be managed as a graph.
That means requirements as structured, connected, versioned, queryable data. Traces that update when something upstream changes. Change impact that surfaces immediately, not after someone goes looking. A single source of truth that multiple teams and deliverables can reference without each maintaining their own copy.
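The difference is easiest to see in miniature. In a spreadsheet, "what does this change affect?" is a reading exercise; in a graph, it's a traversal. A toy sketch, with the IDs and link types invented for illustration:

```python
from collections import defaultdict, deque


class ReqGraph:
    """A toy requirements graph: nodes are IDs, edges are typed downstream traces."""

    def __init__(self):
        self.edges = defaultdict(list)  # id -> [(link_type, downstream_id)]

    def trace(self, upstream, link_type, downstream):
        self.edges[upstream].append((link_type, downstream))

    def impact_of(self, changed_id):
        """Everything reachable downstream of a changed node (breadth-first)."""
        impacted, queue = set(), deque([changed_id])
        while queue:
            node = queue.popleft()
            for _, child in self.edges[node]:
                if child not in impacted:
                    impacted.add(child)
                    queue.append(child)
        return impacted


g = ReqGraph()
g.trace("NEED-7", "satisfied_by", "SYS-12")
g.trace("SYS-12", "decomposed_into", "SUB-3")
g.trace("SUB-3", "verified_by", "TEST-44")
print(sorted(g.impact_of("SYS-12")))  # → ['SUB-3', 'TEST-44']
```

Four nodes and it's trivial. At thousands of requirements across multiple deliverables, that traversal is exactly the query a flat spreadsheet can't answer.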
This is the shift Trace.Space was built for. Not from Excel to another platform that also produces PDFs. From static documents to a living, connected system where complexity becomes an asset you can work with, not a liability you manage around. Trace.Space traces every chain across people, processes, technology, and product. When something changes upstream, the impact is visible immediately. When complexity grows (McKinsey measures embedded software complexity in automotive alone growing 17% per year, with overall project complexity up 300% in the past decade), the system gets more useful rather than more fragile.
The teams making this move aren't doing it because someone sold them a vision. They've already done the math on what the current approach costs, and they've stopped pretending the spreadsheet is going to scale.
See it live
On May 28th, Matthew Maclaine will walk through exactly what this transition looks like: taking scattered documents and disconnected spreadsheets and moving them into a connected requirements graph. He'll show what becomes possible once that foundation exists: real-time impact analysis, AI-assisted gap detection, and a traceability picture that stays accurate without anyone maintaining it by hand.
If your current requirements process has more undocumented assumptions than your actual product does, this one is worth an hour of your time.