
May 11, 2026
In private markets, a miscalculation is not just an abstract risk. A misapplied hurdle rate, a formula that has quietly drifted from what’s written in the LPA, a model where one person's update corrupts another's result; these are real events that cost firms money, LP relationships, and years of reputational work.
In many cases, they trace back to the same architectural flaw: data and logic living in the same place.
At qashqade, we made the decision at the very beginning to keep data and logic strictly separate. It shapes everything about how our platform works. This is not merely a technical preference. It is an architectural principle that determines whether systems remain scalable, auditable, and adaptable over time.
We sat down with our CEO, Oliver Freigang, to ask what this separation actually means, why it is so critical in private markets specifically, and what firms lose when those two layers collapse into one.
Data is everything that is specific to a fund, a deal, or an investor. It is capital contributions, distribution dates, commitment amounts, LP-specific preferred return rates from side letters, carry points, vesting schedules. Data changes constantly. Every capital call, every distribution, every exit is a data event.
Logic is the rules that govern how data gets processed. It is the waterfall structure, the sequence in which proceeds flow: in simple cases return of capital first, then preferred return, then catch-up, then carried interest. It is the formula for calculating a hurdle. It is the rule that says one LP has a different preferred return rate from everyone else in the fund.
The logic must be stable, whereas the data should be able to change freely. When the two are mixed together (which is exactly what happens in spreadsheet-based models), you have a system where changing one always risks breaking the other.
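The distinction can be made concrete in code. Below is a minimal sketch of the simple sequence described above (return of capital, then preferred return, then catch-up, then carried interest), with the logic as a pure function and the data as its input. All names, and the single-period simplification, are illustrative assumptions, not qashqade's implementation.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class FundData:
    """The data layer: fund-specific numbers that change over time.
    (Illustrative structure, not a real schema.)"""
    contributed_capital: float  # total LP capital contributed
    preferred_rate: float       # e.g. 0.08 for an 8% hurdle
    carry_rate: float           # e.g. 0.20 for 20% carried interest
    proceeds: float             # amount available to distribute

def run_waterfall(d: FundData) -> dict:
    """The logic layer: a simplified, single-period waterfall.
    Proceeds flow in sequence: return of capital, preferred return,
    100% GP catch-up, then an (1 - carry)/carry split of the rest."""
    remaining = d.proceeds
    tiers = {}

    # Tier 1: return of capital to LPs
    roc = min(remaining, d.contributed_capital)
    tiers["return_of_capital"] = roc
    remaining -= roc

    # Tier 2: preferred return on contributed capital (one period, no compounding)
    pref = min(remaining, d.contributed_capital * d.preferred_rate)
    tiers["preferred_return"] = pref
    remaining -= pref

    # Tier 3: GP catch-up -- GP takes 100% until it holds carry_rate
    # of all profit above capital, i.e. pref * carry / (1 - carry)
    catchup = min(remaining, pref * d.carry_rate / (1 - d.carry_rate))
    tiers["gp_catch_up"] = catchup
    remaining -= catchup

    # Tier 4: remaining profit split carry_rate to GP, rest to LPs
    tiers["carried_interest"] = remaining * d.carry_rate
    tiers["lp_residual"] = remaining * (1 - d.carry_rate)
    return tiers
```

Because the logic is a pure function of the data, a new capital call or distribution changes only the `FundData` passed in; the rules themselves are never touched.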
In many private markets environments, allocation calculations originate in Excel and often remain there in some form, even when wrapped inside broader systems. Excel is the natural habitat for this problem. In a spreadsheet, your data lives in cells and your logic lives in the formulas that reference those cells. They look separate. They are not. Change the structure of your data and the formulas that depend on that structure can silently break. Or worse: they produce a number that looks plausible but is wrong.
This is not hypothetical. We see it regularly with funds that come to us after years of spreadsheet-based operations. A team restructures how they record transactions partway through a fund's life. The waterfall model continues to run. No error messages. No warnings. But the logic no longer reflects the LPA, because the data structure it was built around has changed shape. The miscalculation compounds across multiple reporting cycles before anyone catches it.
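This failure mode, logic that silently depends on how data is laid out, can be reproduced in a few lines. The example below is a deliberately contrived Python illustration of the spreadsheet pattern, not a real fund model: the allocation rule assumes the commitment sits in a fixed column position, exactly the kind of positional coupling a cell formula encodes.

```python
rows = [
    # (lp_name, commitment, contributed)
    ("LP A", 10_000_000, 6_000_000),
    ("LP B", 5_000_000, 4_500_000),
]

def pro_rata(rows):
    """Allocation logic coupled to column position, spreadsheet-style:
    it assumes index 1 holds the commitment."""
    total = sum(r[1] for r in rows)
    return {r[0]: r[1] / total for r in rows}

pro_rata(rows)  # LP A ~0.667, LP B ~0.333 -- commitment-weighted, per the LPA

# Months later the team restructures the records, putting the amount
# contributed to date first and the commitment second:
rows = [
    ("LP A", 6_000_000, 10_000_000),
    ("LP B", 4_500_000, 5_000_000),
]

pro_rata(rows)  # no error, shares still sum to 1.0 -- but they are now
                # contribution-weighted (~0.571 / ~0.429), not per the LPA
```

The second result is plausible at a glance, which is precisely why this class of error survives multiple reporting cycles.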
The cost is not just financial. It is reputational. LPs who discover they have been affected by allocation errors lose confidence in the GP's operational capabilities. That confidence is extraordinarily hard to rebuild.
We enforce a clean boundary between the two layers at the architecture level. Not as a feature, but as a foundational design principle.
The data layer holds all transaction-specific information: investors’ commitments and entitlements, their side letter terms, the transactions that have occurred. It is structured, governed, and auditable in its own right.
The logic layer is a separate engine. It contains the rules for how calculations should run: the waterfall sequence, hurdle mechanics, catch-up provisions, carried interest splits. This logic is defined once, tested rigorously, and applied consistently every time the engine runs.
When your data changes (for example, a new LP joins or an additional close occurs), that change is made in the data layer. The logic layer does not need to be touched. It simply processes the updated data through the same proven rules.
And when your logic changes (let’s say you are launching a new fund with a different waterfall structure) that change is made in the logic layer without any risk of corrupting historical data or previous calculation results.
The two layers evolve independently. That is what makes the system trustworthy over the full life of a fund.
A fund has twenty LPs. Three of them have negotiated bespoke preferred return rates in their side letters. In a spreadsheet, the common approach is to hardcode those rates into the relevant formula cells. It works until the data structure changes, or someone copies a formula from a standard LP row to one of those three rows, or a new analyst rebuilds a tab and is unaware that those three investors have special terms.
In qashqade, the side letter terms are built into those specific LP records. The logic engine queries those attributes when it runs. You can easily build the rule: if an LP-specific preferred return rate exists, use it; if not, use the fund-level default. That rule is defined once. It applies correctly and automatically regardless of how many LPs exist or how many side letters are negotiated over the fund's life.
There is no mechanism by which the formula can be accidentally misapplied, because the formula does not sit with the data. The two things are structurally separated.
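The rule described above can be sketched in a few lines. This is a minimal illustration with hypothetical field names (`pref_rate`, `commitment`), not qashqade's actual schema or API: the side-letter term lives with the LP's record, and the logic is defined once with a fund-level fallback.

```python
FUND_DEFAULT_PREF_RATE = 0.08  # fund-level default (illustrative)

lps = [
    {"name": "LP 1", "commitment": 10_000_000},                    # standard terms
    {"name": "LP 2", "commitment": 5_000_000, "pref_rate": 0.06},  # side letter
]

def pref_rate(lp: dict) -> float:
    """Defined once, applied uniformly: use the LP-specific
    side-letter rate if present, otherwise the fund default."""
    return lp.get("pref_rate", FUND_DEFAULT_PREF_RATE)

for lp in lps:
    print(lp["name"], pref_rate(lp))
# LP 1 0.08
# LP 2 0.06
```

Adding a twentieth LP, or a tenth side letter, changes only the data; the lookup rule is never edited, so there is no formula to copy to the wrong row.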
Absolutely. And this is something we think about constantly. The architectural principle matters to practitioners in a very direct and practical way.
It enables auditability. When data and logic are separate, you can reconstruct any historical calculation with complete certainty. You know what data existed at a point in time, and you know exactly what logic was applied to it. That is what auditors need. That is what LPs exercising their audit rights need. That is what your own compliance team needs when something is questioned six years after the fact.
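The reconstruction idea can be sketched as follows: pair an append-only transaction log (the data as it existed) with a versioned calculation function (the logic that was applied), and any historical figure can be recomputed exactly. Names and structure here are illustrative assumptions, not qashqade's implementation.

```python
from datetime import date

# Append-only transaction log: the data layer never overwrites history
transactions = [
    {"date": date(2024, 3, 1), "lp": "LP 1", "type": "call", "amount": 2_000_000},
    {"date": date(2024, 9, 1), "lp": "LP 1", "type": "call", "amount": 1_000_000},
    {"date": date(2025, 2, 1), "lp": "LP 1", "type": "distribution", "amount": 500_000},
]

def contributed_as_of(log, as_of):
    """Logic, versioned and tested once: capital contributed
    up to and including a given date."""
    return sum(t["amount"] for t in log
               if t["type"] == "call" and t["date"] <= as_of)

# Reconstruct what the system would have reported mid-year...
contributed_as_of(transactions, date(2024, 6, 30))   # 2_000_000
# ...and at year-end, years after the fact, with certainty:
contributed_as_of(transactions, date(2024, 12, 31))  # 3_000_000
```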
It enables safe collaboration. Multiple people can work on data (entering transactions, updating commitments, managing investor records) without any risk of touching the calculation logic by mistake. The separation is enforced by the system, not by discipline or convention.
It enables scaling. As a fund administrator adds more funds, more LPs, more complex structures, the logic engine does not get more complicated. It gets more data. The operational complexity that would otherwise multiply exponentially is managed structurally.
This is one of the most important questions to ask when evaluating a vendor and one we believe is not asked often enough.
There are tools on the market that offer a cleaner interface over an Excel engine. Others have built their own calculation layer but have not truly separated data from logic in their data model. In both cases, you carry the same structural risks, just with a more modern front end.
If the calculation depends on how data is laid out rather than on explicit, codified rules, you have the same problem in a different package. Changes to data still threaten the integrity of logic. Scaling still multiplies complexity. Auditability is still fragile, because there is no clean boundary to audit across.
At qashqade, we built the entire platform around this principle from the beginning. The separation of data and logic is not a capability we added later; it is the foundation everything else is built on. That distinction matters enormously when you need to stake your reputation on a calculation result.
Two questions will tell you almost everything.
The right answers to both questions should be no; and that no should be structural, not aspirational.
Because the time horizons are long, the structures are complex, and the stakes of every individual calculation are high.
A private markets fund runs for ten to fifteen years. The data it accumulates is enormous. The structures change over time, new LP classes, amended side letters, additional closes, GP transfers. And the consequences of a miscalculation are not a rounding error; they determine how much money an LP receives, whether GP carry has been correctly earned, and whether clawback provisions need to be triggered.
In private markets, a distribution miscalculation may take months to surface and years to resolve, if it is ever fully resolved at all.
The longer the time horizon and the higher the stakes, the more critical it is that the foundation of your calculation system is sound. Separating data from logic is not an implementation detail. It is the principle that determines whether your results can actually be trusted.