
How to Fix The Productivity Problem With eDiscovery Cloud Computing

Four Ideas To Improve Reviewer Performance, Morale & Profitability

For roughly the last decade, there has been a lot of talk about how the cloud would transform eDiscovery. The idea is simple enough. Instead of loading data into on-prem systems, everything gets loaded into the cloud. Reviewers then use web browsers to complete their work. It’s a nice idea, a grand vision even. But there’s a problem. Once your data sets exceed a couple hundred gigs, reviewer productivity grinds to an excruciatingly slow pace. The more data you add, the worse it gets.

This is due to something I have termed IPE: the Inverse Performance Equation. IPE is having a major negative impact on organizations that do eDiscovery work in the cloud. Several businesses I know personally have opted for a hybrid approach, maintaining both on-prem and cloud-based eDiscovery environments. The financial consequences, to say nothing of the security and maintenance considerations, are significant: they are literally paying for two eDiscovery environments. If you or someone you know has struggled with these issues, I’d like to present four ideas that can really help. These ideas are cost-effective, secure and almost completely eradicate the IPE problem.

Who Should Consider These Ideas?

Organizations that earn income from doing eDiscovery work, or that are entrusted with that work by another entity (such as an auditor), can benefit from these ideas. By my estimation, this could include:

  • eDiscovery service providers
  • Law firms
  • Corporations, government agencies and other stakeholders who prefer to manage eDiscovery themselves
  • Consultancies with a core competency or practice area in eDiscovery and investigations

These types of entities engage in eDiscovery work and often deal with data sets measured in the hundreds of gigabytes, if not more.

What Is IPE?

Before I present my four ideas for improving reviewer performance, I’d like to provide some greater context and insight about the IPE problem. I feel like I need to do this because it’s not very well understood right now and some organizations haven’t even encountered it yet, although I’m quite sure they will at some point. I want to say a few things for the record.

First, I do believe that cloud computing is the future of eDiscovery. I cannot say how long it will take for cloud to take over on-prem, but I have no doubt that this will happen. It’s only a question of when, not if. Cloud computing is simply too attractive and too cost-effective to be relegated to the sidelines for long.

Second, I see two major hurdles that eDiscovery functions will have to overcome to make cloud computing a reality: perceived security concerns and the IPE problem. In another thought piece, I’ll talk about how the cloud can be made as secure, or nearly as secure, as on-prem operations. But as long as some clients perceive the cloud to be less secure than on-prem, that perception will be a drag on cloud adoption. For now, I see this as a secondary problem in cloud adoption, not the primary problem.

Third, the IPE problem is THE major problem confronting eDiscovery functions. Until this problem is solved, I don’t believe that cloud will ever achieve the level of adoption that pundits have talked about. In my experience, only the most select and sensitive clients insist that their data not be put into the cloud, which means most organizations are open to, or at least ambivalent about, having their data reviewed in the cloud. That makes IPE the true roadblock to cloud adoption in eDiscovery.

Given this significance, I think it’s wise to understand how IPE actually impacts eDiscovery functions. To help us in this area, I’d like to offer a clear definition of IPE:

Once data sets exceed a certain threshold, usually 150-200 gigs, for every gigabyte of data that gets added to a matter, reviewer speed degrades by an equal measure.

I think of this as a tipping point. Once you cross a certain threshold, reviewer speed begins to slow. As more data gets added, performance continues to degrade until it becomes nearly untenable for actually getting work done. Here are the tell-tale signs of IPE:

  • Document to document navigation speed slows down dramatically. This degrades reviewer productivity because they cannot quickly jump from one document to the next.
  • Coding saves and issue designations become hesitant and halting. This often happens when a hot doc becomes unresponsive; what should be a nearly instantaneous task can completely lock up the application.
  • Search speeds become very inconsistent. Sometimes queries are returned quickly and other times the reviewer cannot tell if the search is actually happening because the system does not seem to be doing anything.
  • Reviewers encounter the spinning wheel of death. This seems to happen at the most inopportune times and for reasons no one seems to understand.
  • Reviewers groan. This is literally what you can hear when IPE rears its ugly head. Reviewers become so frustrated that they verbalize it in quite colorful language.
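To make the tipping point concrete, here is a minimal sketch of IPE as a throughput model. The 175-gig threshold and the per-gig degradation rate are assumed, illustrative values, not measurements; only the 150-200 gig range and the general shape come from the discussion above.

```python
# Illustrative model of IPE: reviewer throughput holds steady below a
# threshold, then degrades as data volume grows. The threshold and
# degradation rate below are assumptions for illustration only.

BASELINE_DOCS_PER_DAY = 1000   # typical productivity target
THRESHOLD_GB = 175             # midpoint of the 150-200 gig tipping point
DEGRADATION_PER_GB = 2.5       # docs/day lost per gig past the threshold (assumed)

def reviewer_throughput(dataset_gb: float) -> float:
    """Estimated documents reviewed per day for a given matter size."""
    if dataset_gb <= THRESHOLD_GB:
        return BASELINE_DOCS_PER_DAY
    excess_gb = dataset_gb - THRESHOLD_GB
    return max(BASELINE_DOCS_PER_DAY - DEGRADATION_PER_GB * excess_gb, 0.0)

print(reviewer_throughput(100))   # 1000 — below the threshold, full speed
print(reviewer_throughput(375))   # 500.0 — throughput cut in half
```

The exact curve will differ by platform and workload; what matters is the pattern of flat performance followed by steady degradation past a threshold.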

Four Ideas To Improve Reviewer Performance In The Cloud

Your organization may not yet have encountered the debilitating effects of IPE, especially if you have not deployed a cloud-based solution for eDiscovery yet. So if you are considering cloud computing for eDiscovery, these ideas can benefit you by helping you avoid IPE before it even becomes a problem. If you’ve already deployed a cloud solution for eDiscovery and have not yet encountered IPE, it’s probably only a matter of time. If you have deployed the cloud and have encountered IPE, these ideas could benefit you right away:

  1. Document how much IPE is impacting your reviewers and costing your organization.
  2. View your eDiscovery environment as a discrete set of technology tiers.
  3. Segment the Processing, Data and SQL tiers from the web application tier.
  4. Run a pilot program to ensure the performance is viable for your requirements.

Document How Much IPE Is Impacting Your Reviewers And Costing Your Organization

My recommendation is that you begin by documenting how much IPE is actually impacting reviewers and, therefore, likely costing you as an organization. In my experience, there are two types of IPE impact: non-quantifiable and quantifiable. The non-quantifiable impact can often have greater negative consequences than the quantifiable impacts. Let me explain.

The non-quantifiable impacts of IPE are usually about the frustration and anxiety of your reviewers. Most reviewers are graded on productivity metrics that determine their standing within an organization. When reviewers achieve or exceed productivity goals, they usually don’t worry about job security. But when their productivity dips, their anxiety levels jump.

When reviewers cannot meet their productivity goals because the system is too slow (because of IPE), you have a significant problem. Morale goes down. Reviewers get nervous and start asking themselves if they should be looking for another job. In the age of the big quit, IPE could be a contributor to losing talent you want to retain. This is why I say the non-quantifiable impacts may be more negative than the quantifiable impacts.

You might find it acceptable to experience degraded performance and financial outcomes as data sets grow. You probably won’t find it acceptable to lose talent because of IPE. One way to address this is to run employee satisfaction surveys that ask whether system performance enables reviewers to do their jobs. If this metric consistently indicates trouble, it is quite likely that IPE will begin costing you employees.

Now let’s discuss how to document the quantifiable impacts of IPE. To help with this, I’d like to draw upon a mock organization I’ll call ABC Legal. To make this example simple, I’ll make the math simple too. Just to be clear, this math may or may not be similar to what your firm experiences. But the economies of scale will likely be quite similar, especially if your firm, like ABC Legal, is 100% cloud-based:

  • Last year, ABC Legal handled 100 matters. 65 of those matters had fewer than 100 gigs of data. 20 matters had 100-200 gigs of data. 15 matters had more than 200 gigs of data.
  • ABC Legal has 7 reviewers and they are paid, on average, $1,000 per day per person.
  • ABC Legal established a productivity goal for reviewers of 1,000 documents per day.
  • On every matter with fewer than 100 gigs of data, the 1,000 documents per day target was realized. On matters with 100-200 gigs, productivity dropped to 750 documents per day. On matters with more than 200 gigs, productivity dropped to 500 documents per day.
  • As you can see, the IPE impact is essentially a doubling of the cost per reviewed document. ABC Legal pays $1,000 to have 1,000 documents reviewed when data sets are less than 100 gigs ($1.00 per document). But when data sets exceed 200 gigs, ABC Legal pays $1,000 to have only 500 documents reviewed ($2.00 per document). This is a very common financial scenario I see in play today.

If your organization has already adopted the cloud, I would wager that an analysis like this will probably uncover similar financial results. IPE begins to cost you in real dollars as soon as data sets exceed 150 gigs or thereabouts.
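The ABC Legal arithmetic reduces to a simple cost-per-document calculation. Here is a sketch using the figures from the example above (the band labels are mine):

```python
# Cost per reviewed document at each matter-size band, using the
# figures from the ABC Legal example.

DAILY_RATE = 1000  # dollars paid per reviewer per day

docs_per_day_by_band = {
    "< 100 gigs":   1000,  # productivity target met
    "100-200 gigs":  750,  # IPE begins to bite
    "> 200 gigs":    500,  # throughput halved
}

for band, docs_per_day in docs_per_day_by_band.items():
    cost_per_doc = DAILY_RATE / docs_per_day
    print(f"{band}: ${cost_per_doc:.2f} per document")
# < 100 gigs: $1.00; 100-200 gigs: $1.33; > 200 gigs: $2.00
```

Swapping in your own daily rates and observed review speeds turns this into a quick estimate of what IPE is costing your organization per matter.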

View Your eDiscovery Environment As A Discrete Set Of Tiers

Most people think of eDiscovery as an application. While the application is important, eDiscovery is also an ecosystem of interconnected devices. Even though there is a lot of complexity to the ecosystem, I tend to think of it as four discrete tiers:

  • Web server tier, where application access happens and where application agents handle a variety of tasks.
  • Processing tier, composed of dedicated servers that handle data processing, analytics processing and other compute-intensive tasks. For example, data processing and data productions are both compute-intensive tasks.
  • Data tier, which includes data storage systems and storage management tools.
  • SQL tier, composed of SQL databases running on discrete servers.

I didn’t include disaster recovery (DR) and offsite backups in these tiers because most organizations handle DR as an offshoot of their data tier. In some instances, a DR site is simply a hot backup of the current data sets on storage systems in an eDiscovery ecosystem. Other times, DR is a cold copy of recent data sets that can be made operational within a defined time window, anywhere from a few hours to a few days. Underneath this ecosystem is a secure network that connects all of the devices and applications for seamless interoperability.

For at least two decades now, the assumption has been that all of these tiers need to be physically close to each other, either in the cloud or on-premise. But I’m not convinced of that at all.

Segment The Processing, Data And SQL Tiers From The Web Application Tier

What I’m about to suggest is primarily an architectural consideration, not a technology consideration. In other words, you may not need to buy a lot of new technology to make this approach work. Here are my ideas for how to segment these tiers:

  • Web server tier. This gets placed in a public cloud so users can access the application seamlessly and securely. Public cloud environments can be made highly secure, and this tier lends itself well to the burstable, on-demand resources that cloud providers excel at.
  • Processing tier. This could be placed in a public cloud, a private cloud or a secure on-premise location, depending on the application requirements. This is a mixed-use category: certain aspects of processing lend themselves well to the scalability and burstability of clouds, while analytics may quickly become cost prohibitive there.
  • Data tier. This gets placed either in a private cloud or in a secure on-premise location. The web server tier and the data tier generally should be within the same metro area to prevent response latency.
  • SQL tier. This gets placed in the same location as the data tier in most instances. This is the single greatest dependency for an all-around performant eDiscovery environment.

The goal here is to uncouple the web tier from the processing, data and SQL tiers. Those three tiers tend to operate best when they are tightly coupled. But they don’t need to be on the same premise as the web application tier. Why should you consider this?

In our experience, after having conducted audits on hundreds of eDiscovery environments, the root causes of IPE can be addressed simply by segmenting the web tier from the processing, data and SQL tiers and then joining them with a very high-speed, secure internet connection. This solution works. It allows for exceptionally fast read-write activity to the database while also allowing users to log in to the system from anywhere.
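One practical check when segmenting tiers is whether the link between the web tier and the data/SQL tiers is fast enough. Here is a minimal sketch of a latency probe, assuming you can open a TCP connection from a web-tier host to the SQL tier; the hostname and port in the example are placeholders, not real endpoints:

```python
import socket
import statistics
import time

def tcp_connect_latency_ms(host: str, port: int, samples: int = 5) -> float:
    """Median TCP connect time in milliseconds over several samples."""
    timings_ms = []
    for _ in range(samples):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=5):
            pass  # connection established; close immediately
        timings_ms.append((time.perf_counter() - start) * 1000)
    return statistics.median(timings_ms)

# Example (placeholder endpoint — substitute your SQL tier's host and port):
# print(tcp_connect_latency_ms("sql-tier.example.internal", 1433))
```

Same-metro links typically come back in single-digit milliseconds; consistently higher numbers suggest the web tier is too far from the data and SQL tiers for responsive review work.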

Run A Pilot Program

If you’d like to experiment with this approach at your organization, here are some suggestions for doing so. I recommend both quantitative and qualitative pilot programs.

Quantitative:

  • Design and deploy a new environment based on the tiers I’ve outlined above.
  • Pick a few active matters with various data set sizes: a large matter (200 gigs), a mid-sized matter (100 gigs) and a small matter (25 gigs).
  • Migrate the data to the new environment.
  • Run a comparison test on the data sets in your existing and new environments. Take a sample of performance metrics from both environments for these types of activities: searches, index builds, production speeds – any of the common functions and tasks.
  • Document your findings and compare the performance of the two environments.
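The last two steps above can be sketched as a simple metric comparison. The timings below are made up purely for illustration; substitute the samples you actually collect from each environment:

```python
import statistics

def median_timing(label: str, timings_s: list) -> float:
    """Summarize one environment's task timings (in seconds)."""
    median = statistics.median(timings_s)
    print(f"{label}: median {median:.1f}s over {len(timings_s)} runs")
    return median

# Hypothetical timings for the same search run in each environment
existing_env = [12.4, 15.1, 11.8, 14.0, 13.3]
segmented_env = [2.1, 1.8, 2.4, 2.0, 1.9]

old_median = median_timing("existing environment", existing_env)
new_median = median_timing("segmented environment", segmented_env)
print(f"speedup: {old_median / new_median:.1f}x")
```

Medians are used rather than averages so a single outlier run (a cache warm-up, a background job) does not skew the comparison. Repeat per task type: searches, index builds, productions.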

Qualitative:

  • Select a handful of reviewers – they should have experience with the older platform. We like to select “vocal” reviewers who have not been shy about giving feedback.
  • Provision licenses for them in the new environment and provide what little training is necessary.
  • Ask them to run a series of tasks that they normally would run during their workday. Limit this to about 2 hours of activity.
  • Interview the reviewers and get their qualitative feedback. Did they find the experience to be fast, efficient and intuitive? Did they run into any snags?

With these two pilot programs in place, you’re likely ready to roll out the new environment with a much higher degree of confidence, which has a direct impact on how ardently your business development people sell the solution. Why should you do this?

  • You will have proof, not assumptions, about the performance of the new environment. This will give you and your team confidence that the new approach is superior.
  • You don’t want to migrate all of your production data until you’re confident that the new environment works as you wish.
  • You don’t want to train and migrate users until you’re confident.
  • You don’t want to create a differentiation marketing program for your eDiscovery function until you are 100% confident that it will work the way you want it to.
  • You don’t want to set expectations with internal stakeholders about improved productivity and reduced costs until you can prove to yourself that the environment delivers those benefits.

Where To Go From Here

In this thought piece, I’ve put forward four ideas for how to address IPE — the number one obstacle to the adoption of eDiscovery cloud computing. My ideas are primarily about architecture, not new technology per se. By uncoupling the web tier from the processing, data and SQL tiers, your reviewers are likely to see a massive productivity boost on matters that used to cause them to groan. If you have questions about any of the ideas I’ve put forward here, please know that I’m open to the conversation.  

