If you had asked me a year ago if I would rather have a root canal or compare ediscovery price proposals, I would have told you to break out the drill. Standardizing and deciphering the complex and opaque universe of ediscovery pricing can be a serious headache. It takes far too much effort and leaves buyers wondering if their estimates and projected total cost have any grounding in reality. And, let’s be honest: Sometimes there is some fairly sneaky stuff hidden in a proposal.

Let me debunk the top 15 pricing misconceptions that drove me (and my peers) nuts as an ediscovery buyer.

1. When a GB isn’t just a GB

Not all gigabytes (GBs) are created equal when it comes to expanded vs compressed data. There is nothing quite as stomach-churning as getting your first month’s ediscovery bill for processing 1 TB of data only to discover that your data volume and attendant costs are nearly double what was presented in the project estimate. I recommend always double-checking data expansion assumptions in each proposal, and normalizing them across all estimates based on your experience with the vendor’s promises vs the vendor’s reality. Or better yet, work with a company that charges on compressed GB volume (hint, hint).
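
To make expansion assumptions comparable across proposals, it can help to restate every estimate on the same basis. Here is a minimal sketch of that normalization; the per-GB rate and expansion factors are purely illustrative assumptions, not anyone's actual pricing:

```python
# Restate a vendor estimate and a "likely actual" on the same expansion basis.
collected_gb = 1000          # 1 TB collected (compressed)
assumed_expansion = 1.4      # expansion factor assumed in the proposal
observed_expansion = 1.9     # expansion you have actually seen from this vendor
rate_per_expanded_gb = 25.0  # illustrative processing rate, $ per expanded GB

quoted_cost = collected_gb * assumed_expansion * rate_per_expanded_gb
likely_cost = collected_gb * observed_expansion * rate_per_expanded_gb

print(f"Quoted estimate: ${quoted_cost:,.0f}")  # $35,000
print(f"Likely actual:   ${likely_cost:,.0f}")  # $47,500
```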

2. Not-so-“all-in” pricing

Many providers entice new business with an “all-in” model that purports to provide transparent, predictable pricing — that is, until you read the fine print that makes it much less inclusive. In one truly jaw-dropping instance, a vendor proposal included a flat per-doc rate for “everything” from processing through production. On closer inspection, they were also charging monthly hosting for reviewed and unreviewed documents, analytics fees, and $0.10 per tag per document. Once the proposal was recalculated with all the sneaky line items included, it came in nearly 70% higher than the next closest proposal — the vendor was not only double-charging, they were triple-charging!
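
The only reliable defense is to total every line item over the life of the matter before comparing headline rates. A rough sketch of that exercise, with made-up volumes and rates (every figure below is an assumption for illustration):

```python
# Total a headline "all-in" per-doc rate against the fully loaded cost.
docs = 500_000
hosting_gb = 150
hosting_months = 12

all_in_per_doc = 0.45        # headline per-doc rate in the proposal
hosting_per_gb_month = 9.0   # hosting billed monthly despite the "all-in" label
analytics_flat = 15_000      # separate analytics fee
tag_fee_per_doc = 0.10       # $0.10 per tag per document
avg_tags_per_doc = 3

headline = docs * all_in_per_doc
hidden = (hosting_gb * hosting_per_gb_month * hosting_months
          + analytics_flat
          + docs * avg_tags_per_doc * tag_fee_per_doc)

print(f"Headline 'all-in' cost: ${headline:,.0f}")           # $225,000
print(f"Fully loaded cost:      ${headline + hidden:,.0f}")  # $406,200
```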

3. Assumptions based on an alternate dimension

One area of a proposal that often gave me both heartburn and a laugh was vendor-provided matter assumptions. When dealing with providers trying to edge out the competition, I frequently came across assumptions that lacked any grounding in reality. Whether it was a firm that claimed an expansion rate under 10%; the one that claimed their “amazing technology” garnered cull rates of over 95%; or a claim of historical deduplication rates of 75% and promotion rates below 3%... it was all enough to make me shake my head. Needless to say, when normalized to appropriate standard parameters, the too-good-to-be-true estimate suddenly became too bad to buy. Much like expansion rates, this is a good place to keep your own counsel and use metrics based on your experience.
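
A quick sanity check is to run each proposal’s expansion, deduplication, and cull assumptions through the same funnel and see how many documents actually reach review. A rough sketch, with hypothetical figures standing in for your own benchmarks:

```python
def docs_promoted_to_review(collected_gb, expansion, docs_per_gb,
                            dedup_rate, cull_rate):
    """Run one proposal's assumptions over the same collection size."""
    expanded_gb = collected_gb * expansion
    docs = expanded_gb * docs_per_gb
    after_dedup = docs * (1 - dedup_rate)
    return after_dedup * (1 - cull_rate)

# Same 500 GB collection under two sets of assumptions (all illustrative).
vendor_claim = docs_promoted_to_review(500, 1.10, 3000, 0.75, 0.95)
your_benchmarks = docs_promoted_to_review(500, 1.60, 3000, 0.30, 0.60)

print(f"Vendor assumptions: {vendor_claim:,.0f} docs to review")     # ~20,625
print(f"Your benchmarks:    {your_benchmarks:,.0f} docs to review")  # ~672,000
```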

4. Review rates for either the Flintstones or the Jetsons

Some firms offer unbelievably low reviewer hourly rates — as much as 10-15% below market. This seems great until you realize they are committing to only 35-40 documents per hour and/or barring reviewers from working overtime. In a time-constrained review (aren’t they all?), this poses substantial problems and can lead to blown budgets. On the flip side of the coin, firms committing to leverage tech and super reviewers to achieve throughput double the industry standard should also be viewed with a degree of skepticism. If workflow, tech, and reviewers can truly yield substantially better results (say, 88 docs/hr), then the provider should have no issue putting their money where their mouth is and guaranteeing it!
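
The number that actually matters is cost per reviewed document, which combines the hourly rate with the committed throughput. A minimal sketch, where both the rates and the speeds are illustrative assumptions:

```python
def cost_per_doc(hourly_rate, docs_per_hour):
    """Effective review cost per document."""
    return hourly_rate / docs_per_hour

# "Discount" team vs. a market-rate team (made-up rates and speeds).
discount = cost_per_doc(hourly_rate=55, docs_per_hour=38)
market = cost_per_doc(hourly_rate=65, docs_per_hour=55)

print(f"Discount rate, slow pace: ${discount:.2f}/doc")  # ~$1.45/doc
print(f"Market rate, normal pace: ${market:.2f}/doc")    # ~$1.18/doc
```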

5. QC overload

Some proposals provide an estimate with line items for reviewer bill rate and QC rate but no total project estimate. Often, the QC rate assumption is far out of sync with actual need. On a matter involving review of 1 million documents, the difference between a 15% and a 25% QC rate amounts to over $80,000. I recommend finding a level you are comfortable with as a starting point and determining with your provider whether there is a need to exceed it.
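
That swing falls straight out of the arithmetic. A back-of-the-envelope sketch, where the QC throughput and bill rate are assumptions chosen only to illustrate the math:

```python
docs = 1_000_000
qc_docs_per_hour = 50   # assumed QC throughput
qc_hourly_rate = 45     # assumed QC bill rate

def qc_cost(qc_rate):
    """Cost of second-pass QC at a given sampling rate."""
    qc_hours = docs * qc_rate / qc_docs_per_hour
    return qc_hours * qc_hourly_rate

delta = qc_cost(0.25) - qc_cost(0.15)
print(f"Difference between 25% and 15% QC: ${delta:,.0f}")  # $90,000
```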

6. Mandating too much PM and MR time

In proposals without a total projected matter estimate, the line items and assumptions around project management for electronically stored information (ESI) and review also pose costly pitfalls. Whether it is a monthly minimum or PM usage assumptions in the stratosphere, the impact on total project cost is substantial. I often recommend either procuring a prepaid volume of hours for the lifecycle of the matter or setting a cap that the provider must discuss with you before exceeding.

7. Holding data hostage

This would make my blood boil: being all set to migrate a case (especially when the move was due to vendor error) only to find a line item in the statement of work stipulating steep charges to export or archive a database. It added insult to injury, especially when tensions were already high enough to necessitate switching providers midstream. Similarly, some providers charged a headache-inducing fee to decommission a matter at the end of a case.

8. Charging for things I should be able to do myself

Whether it is adding tags (once I saw this priced at $100 a pop) or resetting passwords ($150 each), many providers charge clients for things they should be able to handle themselves. This is both costly and frustrating because it adds unnecessary steps and lag time.

9. Paying for cutting-edge tech but getting legacy

There are few things quite as annoying as when a provider claims that their “cutting-edge” solution can parse data and uncover evidence in a fraction of the time and cost — if you pay a premium. Yet, when you log into their system, it is the same legacy platform everyone is using (with the same dreaded spinning wheel of death). There may or may not be middleware to address system gaps, but in general it is just another case of beige vs. taupe — except the client is footing a larger bill.

10. Paying an arm and a leg for “advanced analytics”

At some point a few years ago, some providers began charging exorbitant rates ($150/GB) for analytic tools they used to offer for free. Basic functions like email threading were lumped in with advanced analytics, and if you wanted this workflow efficiency, the only option was to pay (a lot). This money grab drove down adoption of more efficient workflows and defensible processes and increased the volume of data that had to be reviewed. It also spawned an entire cottage industry of lower-cost analytic middleware like Brainspace and Ayfie. I prefer working with tools that incorporate analytic costs into a single flat, predictable fee.

11. Per-doc review pricing that (surprise) escalates based on the number of tags

One way that providers maximize profitability on all-in price models is by offering an attractive rate for a matter stripped down to bare bones. In the highly likely event that your matter requires more tagging than simply responsiveness and privilege, many providers upcharge per document based on the number of tags. Even a moderately simple matter may incur a 20-30% per-document upcharge based on necessary additions to the coding tree. This is rarely (if ever) clearly disclosed and often comes as a shock to a client who based their budget on the bare minimum per-doc rate.
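
It is worth modeling what the coding tree does to the per-doc rate before signing. A short sketch where the base rate, the per-tag upcharge, and the tag count are all hypothetical:

```python
docs = 300_000
base_per_doc = 1.00            # headline per-doc rate (responsiveness + priv only)
upcharge_per_extra_tag = 0.08  # hypothetical per-doc upcharge for each added tag
extra_tags = 3                 # issue tags beyond the bare-bones coding tree

budgeted = docs * base_per_doc
actual = docs * (base_per_doc + extra_tags * upcharge_per_extra_tag)

print(f"Budgeted: ${budgeted:,.0f}")  # $300,000
print(f"Actual:   ${actual:,.0f}")    # $372,000 (24% higher)
```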

12. Processing double dip

Another classic double dip is charging for both man hours and data volume for processing. A fair model ought to incorporate processing man hours within the per-GB or all-in pricing. Double charging like this is not industry standard barring an extremely bespoke workflow or very atypical data types, and even then it is often incorporated into the processing cost. Another personal pet peeve was being charged for waiting time and/or machine time on top of unitized costs.
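
To spot the double dip, price the same job both ways. A rough sketch with invented numbers, purely to show how the hourly line item stacks on top of the per-GB charge:

```python
gb_processed = 800
per_gb_rate = 30        # illustrative per-GB processing rate
tech_hours = 40         # "tech/machine time" billed on top
tech_hourly_rate = 125

fair_model = gb_processed * per_gb_rate
double_dip = fair_model + tech_hours * tech_hourly_rate

print(f"Per-GB only:            ${fair_model:,.0f}")  # $24,000
print(f"Per-GB plus man hours:  ${double_dip:,.0f}")  # $29,000
```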

13. Charging me for your incompetence

There are few things as annoying as an inefficient PM or data ops team that takes longer than expected for even simple tasks — and has the gall to charge you more for their ineptitude. Whether it is a bill for six hours of time to run a simple file listing report or claiming they can handle a certain atypical data type, failing, and then trying to bill nearly 100 PM hours, the result is infuriating.

14. Undisclosed different bill rates depending on the kind of review

Some review teams are composed of people with differing linguistic and technical expertise, as well as team leads and project managers who may review in tandem with their management obligations. It is always irritating to receive a bill substantially higher than anticipated because higher-rate reviewers were used inefficiently or assigned to first-level review, where their rates may be 2-3 times those of a standard first-level reviewer. Always clearly stipulate the roles and responsibilities of each category of reviewer at the front end.
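
A blended-rate calculation makes the impact easy to see. A quick sketch in which the staffing mix and rates are illustrative assumptions:

```python
hours = 2_000
flr_rate = 60  # standard first-level reviewer hourly rate

# Expected: the whole first-level pass staffed at the standard rate.
expected = hours * flr_rate

# Actual: 30% of first-level hours quietly done by leads/specialists at 2.5x.
actual = (hours * 0.7) * flr_rate + (hours * 0.3) * (flr_rate * 2.5)

print(f"Expected: ${expected:,.0f}")  # $120,000
print(f"Actual:   ${actual:,.0f}")    # $174,000 (45% higher)
```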

15. Winning a managed review with AI, but then never using it

Some managed review providers build their assumptions and estimates around heavy use of advanced analytics — but after they win the matter, they proceed with business-as-usual linear reviews. The true shock was seeing the costs of the analytics billed on top of the bloated managed review costs. Ensure that managed review partners clearly lay out their proposed workflow and provide metrics tracking progress against their estimated time and cost to completion throughout the review to prevent this unpleasant surprise.


At the end of the day, we have a long way to go to get truly transparent pricing across providers. Thankfully, there are steps you can take in reviewing proposals and normalizing assumptions. Developing long-term partnerships with providers who understand this is a marathon, not a sprint, fosters greater transparency and understanding. Do not be afraid to push back on your provider and ask for a structure that works with your cost recovery models and mirrors the assumptions you see in your ecosystem.

When all else fails, bring in your ediscovery wonk (who likely has reviewed hundreds of these proposals!) to decipher any confusing language.
