Manual Testing Could Not Keep Pace with Release Cadence

The platform analyzes tissue slides for biomarkers that matter in cancer treatment. Algorithms score HER2, Trop2, CD8, and claudin (CLDN) proteins, along with H&E-stained tissue. A pathologist reviews the output, accepts or rejects the score, and the result feeds directly into therapy selection. The stakes are not abstract. Wrong scores mean wrong treatments.

QA was bottlenecked on image verification. Every algorithm update required visual comparison of slide outputs against reference images. Testers did this by hand, squinting at pixel differences, logging results in spreadsheets. A full regression pass took days. The team shipped in two-week sprints, so testing ate most of each cycle.

The platform also ran on both cloud and on-premises deployments. That meant running the same tests twice, in different environments, with different configuration quirks. The QA team was competent but outnumbered by the combinatorics.


Automation That Understood the Domain

Sequoia Applied Technologies is a Santa Clara software engineering firm with deep experience in life sciences and healthcare. The client brought us in because generic QA contractors kept stumbling on the domain. They did not know what HER2 was. They did not understand why algorithm output validation was finicky. They needed ramp time the schedule could not accommodate.

Our team took over the QA lifecycle: requirements gathering, test case creation, protocol execution, automation development, and release sign-off. We worked inside the client's agile process, attending sprint planning, delivering within program increments, and documenting everything to the standard their regulatory environment demanded.

The automation itself was not exotic. Python as the backbone. Playwright for browser automation on the slide viewer. Selenium for legacy UI paths that predated the newer stack. Pytest for orchestration. The interesting bit was OpenCV for image comparison. Instead of eyeballing slide outputs, we wrote routines that compared algorithm results programmatically, flagging discrepancies that exceeded tolerance thresholds.


What the Automation Actually Did

The platform has a slide viewer where pathologists inspect algorithm output overlaid on tissue images. Verifying that the overlay renders correctly, that zoom levels work, and that annotations persist was all manual before. Playwright scripts now drive the viewer through its paces, capturing screenshots at defined states.
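A condensed sketch of such a screenshot pass, written against Playwright's sync `Page` API (`goto`, `click`, `screenshot`). The viewer states, selectors, and URL here are illustrative, not the client's actual ones:

```python
# Drive the slide viewer through defined states and capture a screenshot
# at each one. Written against Playwright's sync Page API; the CSS
# selectors and state names below are placeholders, not the real viewer's.

VIEWER_STATES = [
    ("default", []),                        # initial render, overlay as shipped
    ("zoom_2x", ["#zoom-in", "#zoom-in"]),  # two zoom steps
    ("overlay_on", ["#toggle-overlay"]),    # algorithm overlay toggled
]

def capture_viewer_states(page, slide_url, out_dir="screenshots"):
    """Return the list of screenshot paths captured, one per defined state."""
    paths = []
    for name, clicks in VIEWER_STATES:
        page.goto(slide_url)                # fresh load so states don't leak
        for selector in clicks:
            page.click(selector)
        path = f"{out_dir}/{name}.png"
        page.screenshot(path=path, full_page=True)
        paths.append(path)
    return paths
```

In a real harness, `page` would come from a Pytest fixture that launches a browser via `sync_playwright`; taking it as a parameter keeps the capture logic testable without one.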

OpenCV handles the comparison. We feed it a reference image and a test image, compute the difference, and check whether the delta exceeds a threshold tuned per algorithm. HER2 scoring has tighter tolerances than H&E staining. The framework knows this.
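The core of that comparison fits in a few lines. This sketch uses NumPy for the pixel math (OpenCV's `cv2.absdiff` does the equivalent on real captures), and the tolerance values are invented for illustration, not the tuned production thresholds:

```python
import numpy as np

# Fraction of pixels allowed to drift, per algorithm. Values are
# illustrative; the real tolerances were tuned per biomarker.
TOLERANCES = {"HER2": 0.001, "Trop2": 0.002, "CD8": 0.002, "H&E": 0.01, "CLDN": 0.002}

def images_match(reference, test, biomarker, pixel_delta=10):
    """Flag a regression when too many pixels drift beyond pixel_delta.

    reference, test: equal-shape uint8 arrays (grayscale or RGB).
    """
    # Cast before subtracting so uint8 arithmetic can't wrap around.
    diff = np.abs(reference.astype(np.int16) - test.astype(np.int16))
    changed = np.count_nonzero(diff > pixel_delta)
    return changed / diff.size <= TOLERANCES[biomarker]
```

The per-biomarker lookup is what lets HER2 run with a tighter budget than H&E without branching inside the test code.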

Browser Automation

Playwright scripts exercise the digital slide viewer: zoom, pan, annotation placement, score acceptance workflow. Selenium covers older UI components not yet migrated to the modern frontend. Both feed results into a unified Pytest harness.

Image Comparison

OpenCV compares algorithm output images against reference baselines. Tolerances vary by biomarker type. The system flags regressions without requiring a human to stare at two nearly identical slides.

API Testing

REST API tests validate that backend services return correct payloads for slide metadata, algorithm status, and user workflow states. These run on every build before UI tests even start.
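Checks like these are easy to express as a pure validation step that the API tests run on each response body. The field names and types here are assumptions for illustration, not the client's actual API contract:

```python
# Validate a slide-metadata payload before UI tests rely on it.
# Field names and types are illustrative, not the real schema.
REQUIRED_FIELDS = {
    "slide_id": str,
    "stain": str,             # e.g. "HER2", "H&E"
    "algorithm_status": str,  # e.g. "complete", "pending"
}

def validate_slide_metadata(payload):
    """Return a list of problems; an empty list means the payload passes."""
    problems = []
    for field, expected_type in REQUIRED_FIELDS.items():
        if field not in payload:
            problems.append(f"missing field: {field}")
        elif not isinstance(payload[field], expected_type):
            problems.append(f"wrong type for {field}")
    return problems
```

In the suite, the payload would come from the live service (e.g. a `requests.get(...).json()` call), with the validator keeping the schema assertions in one place.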

Multi-Environment Execution

The same test suite runs against cloud and on-premises deployments. Configuration is externalized. CI pipelines trigger the appropriate environment matrix on each merge.
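A minimal sketch of what externalized environment selection can look like, assuming an environment variable picks the target; the variable name, hostnames, and settings are placeholders:

```python
import os

# Per-environment settings live outside the tests themselves.
# Hostnames and the TEST_ENV variable name are placeholders.
ENVIRONMENTS = {
    "cloud": {"base_url": "https://pathology.example.com", "verify_tls": True},
    "onprem": {"base_url": "https://pathology.internal:8443", "verify_tls": False},
}

def load_environment(name=None):
    """Resolve the target environment from an explicit name or TEST_ENV."""
    name = name or os.environ.get("TEST_ENV", "cloud")
    if name not in ENVIRONMENTS:
        raise ValueError(f"unknown environment: {name}")
    return ENVIRONMENTS[name]
```

A session-scoped Pytest fixture can expose `load_environment()` once, so the same suite runs unchanged against either deployment and the CI matrix only has to set one variable.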


From 4 Hours to 1 Hour

Work that previously consumed 4 hours of manual effort now runs in about an hour, mostly unattended. The roughly 70% reduction freed QA bandwidth for exploratory testing and edge-case investigation rather than rote regression. The defect escape rate dropped. Release confidence went up.

The client also gained reusable infrastructure. New algorithm types slot into the existing framework. When they added CLDN scoring, the automation scaffolding was already there; the team only had to write new tolerance configs and capture new reference images.
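To make "slot into the framework" concrete, a sketch of what such a tolerance config might look like; the keys, values, and paths are invented for illustration, not the client's actual schema:

```yaml
# Illustrative tolerance config; values and paths are invented.
biomarkers:
  HER2:
    pixel_delta: 10          # per-pixel intensity drift allowed
    max_diff_fraction: 0.001 # fraction of pixels permitted to exceed it
    baseline_dir: baselines/her2
  CLDN:                      # added later with no framework changes
    pixel_delta: 10
    max_diff_fraction: 0.002
    baseline_dir: baselines/cldn
```

Adding a biomarker then means adding an entry and a directory of reference images, nothing more.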

Sequoia continues to support the platform as it adds new biomarkers and expands deployment footprint. The automation investment compounds over time.


Common Questions About Digital Pathology Test Automation

What is digital pathology and how does it support oncology diagnostics?

Digital pathology converts glass tissue slides into high-resolution images that algorithms can analyze. In oncology, these algorithms detect and quantify biomarkers like HER2, Trop2, CD8, and claudin proteins that indicate specific cancer subtypes. A pathologist reviews the algorithm output, accepts or rejects the scoring, and the result feeds into treatment decisions. The platform Sequoia worked on functions as a companion diagnostic, meaning its output directly influences which therapies a patient receives.

What biomarkers does the platform analyze?

The platform runs algorithms for HER2, Trop2, CD8, H&E staining, and CLDN (claudin). Each stain type has its own algorithm tuned to identify specific proteins in tissue samples. HER2 status, for example, determines eligibility for targeted therapies in breast cancer. The algorithms produce a score that the pathologist can accept, reject, or escalate to another pathologist for a second opinion.

How did Sequoia reduce QA effort by 70%?

The team built automation using Python, Playwright for browser testing, Selenium for legacy workflows, and OpenCV for image comparison. Previously, verifying that algorithm outputs matched expected results required manual inspection of slide images. The automated framework compares images programmatically, validates REST API responses, and runs regression suites across cloud and on-premises deployments. Work that took 4 hours dropped to about an hour.

What does the QA lifecycle look like for a regulated diagnostics platform?

Sequoia owned QA from requirements through release sign-off. That included test case creation, protocol execution on cloud and on-premises environments, automation script development, and documentation for each program increment. The platform ships to clinical environments, so every release had to meet strict standards for performance, usability, and reliability before sign-off.

Why did the client choose Sequoia Applied Technologies?

Sequoia Applied Technologies has worked with life sciences and healthcare companies for years. That domain experience matters because diagnostics platforms have regulatory constraints, specialized terminology, and workflows that general-purpose QA teams do not encounter. The client needed engineers who could get productive quickly without a long ramp on the problem space.

What technologies did Sequoia use on this project?

The automation stack included Python as the primary language, Playwright for modern browser automation, Selenium for older UI workflows, Pytest for test orchestration, OpenCV for programmatic image comparison, and REST API testing via Postman and custom frameworks. The platform itself ran on both cloud and on-premises deployments, so tests had to work across both environments.