Another Kindly Ops Success Story:
Gritstone Oncology
A Personalized Approach to Cancer Treatment
Who is Gritstone Oncology?
Gritstone Oncology is advancing the field of immuno-oncology to fight cancer in patients with the most difficult-to-treat tumors. The company’s potent, next-generation, personalized immunotherapies harness the power of the patient’s own immune system to destroy tumor cells by recognizing tumor-specific neoantigens. To support its research and clinical operations, Gritstone needs a high-performance computing infrastructure that provides both the flexibility its scientists need for research and the control required for clinical use. These conflicting requirements have traditionally mandated separate infrastructures within a company’s own data centers.
Challenges and Solutions

Challenge: “None of the available third-party platforms for genomics analysis were optimal for us. All required adaptation for our protocols and committing to a proprietary architecture up front. We ultimately decided that, to have full control over our own destiny, we would need to build and quickly enable our own cloud environment.”
Solution: AWS Organizations and AWS Identity and Access Management (IAM) provide granular access control and enable the separation of development, test, research, and production environments. Confidence in the security posture is reinforced by AWS CloudTrail, whose logs are analyzed with security information and event management tools such as Sumo Logic and Datadog. (A cross-account access sketch appears after this list.)

Challenge: Build a GxP-compliant analysis system in the cloud, as immutable infrastructure.
Solution: Compute pipelines run on Amazon Elastic Compute Cloud (Amazon EC2) clusters, with storage provided by Amazon Elastic File System (Amazon EFS) and Amazon Simple Storage Service (Amazon S3). (A data-staging sketch appears below.)

Challenge: Provide a familiar environment for computational biologists: a clustered computing environment with batch-based job scheduling, low-latency, high-speed interconnects, and large shared storage volumes.
Solution: On-demand compute power is provided by Amazon EC2 instances of various sizes. A Nextflow-driven shared job queue dispatches jobs and handles data flow, providing a logical orchestration engine for the analysis pipelines. The optimal quantity and type of compute resources (e.g., CPU- or memory-optimized instances) are provisioned dynamically, based on the volume and specific resource requirements of the submitted pipeline jobs. (An instance-selection sketch appears below.)

Challenge: Provide the controls required for clinical operations, including the ability to account for all configuration changes.
Solution: AWS CloudFormation automates the provisioning of core infrastructure for each analysis environment. Jenkins pipelines that employ Packer and Chef Solo automate the building of Amazon Machine Images (AMIs) and Docker container images. “We establish our baseline server environment in AMIs,” explains Clark. “For toolkits that are particularly challenging to integrate, we capture those in Docker images.” (A stack-provisioning sketch appears below.)
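To illustrate the multi-account separation described in the first solution, here is a minimal Python sketch using boto3. The account ID, role name, and session name are hypothetical placeholders, not Gritstone’s actual configuration; the pattern simply shows how work in a research account is performed through an assumed IAM role, with the resulting API activity recorded by CloudTrail for downstream SIEM analysis.

```python
import boto3

# Hypothetical account ID and role name: assume a role in a separate
# research account managed under AWS Organizations.
sts = boto3.client("sts")
creds = sts.assume_role(
    RoleArn="arn:aws:iam::111122223333:role/ResearchAnalyst",
    RoleSessionName="pipeline-review",
)["Credentials"]

# Temporary credentials scope every action to that account, and each
# API call is recorded by CloudTrail under the assumed role.
research = boto3.session.Session(
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)

# Query CloudTrail for recent instance launches -- the same events a
# SIEM such as Sumo Logic or Datadog would ingest from the log stream.
trail = research.client("cloudtrail")
events = trail.lookup_events(
    LookupAttributes=[
        {"AttributeKey": "EventName", "AttributeValue": "RunInstances"}
    ],
    MaxResults=10,
)
for event in events["Events"]:
    print(event["EventTime"], event.get("Username", "unknown"))
```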
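The EC2/EFS/S3 storage pattern from the second solution can be sketched in a few lines: durable inputs and results live in S3, while an EFS volume mounted on every cluster node serves as shared working space. The bucket name, object keys, and mount path below are illustrative only.

```python
import boto3

s3 = boto3.client("s3")

# Hypothetical bucket and keys: raw sequencing inputs live in S3, while
# /mnt/efs is an Amazon EFS volume shared by all cluster nodes.
BUCKET = "example-gritstone-genomics"
s3.download_file(BUCKET, "runs/sample-001/reads.fastq.gz",
                 "/mnt/efs/work/sample-001/reads.fastq.gz")

# ... pipeline stages read and write the shared EFS working volume ...

# Durable results go back to S3, which acts as the system of record.
s3.upload_file("/mnt/efs/work/sample-001/variants.vcf",
               BUCKET, "results/sample-001/variants.vcf")
```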
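The third solution’s dynamic provisioning decision (CPU- versus memory-optimized instances based on job requirements) is made by Nextflow’s executor from each process’s declared resource directives; the Python sketch below only illustrates the underlying idea. The threshold, instance types, and AMI ID are assumptions for the example.

```python
import boto3

ec2 = boto3.client("ec2")

def launch_worker(cpus: int, memory_gib: int) -> str:
    """Pick a CPU- or memory-optimized instance type for a pipeline job.

    The 8 GiB-per-core threshold and the instance types are
    illustrative, not the actual pipeline configuration.
    """
    if memory_gib / max(cpus, 1) > 8:
        instance_type = "r5.4xlarge"   # memory-optimized
    else:
        instance_type = "c5.4xlarge"   # CPU-optimized

    resp = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",  # hypothetical baseline AMI
        InstanceType=instance_type,
        MinCount=1,
        MaxCount=1,
    )
    return resp["Instances"][0]["InstanceId"]

# Example: a memory-hungry alignment job lands on an r5 instance.
print(launch_worker(cpus=16, memory_gib=256))
```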
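Finally, the CloudFormation automation in the fourth solution supports configuration accountability because each environment is created from a version-controlled template, so its state is fully described by the stack and its parameters. A minimal sketch, assuming a hypothetical stack name, template URL, and parameter set:

```python
import boto3

cfn = boto3.client("cloudformation")

# Hypothetical template and parameters: the analysis environment's
# configuration is fully captured by the stack definition.
cfn.create_stack(
    StackName="analysis-env-prod",
    TemplateURL="https://example-bucket.s3.amazonaws.com/templates/analysis-env.yaml",
    Parameters=[
        {"ParameterKey": "Environment", "ParameterValue": "production"},
        {"ParameterKey": "BaseAmiId", "ParameterValue": "ami-0123456789abcdef0"},
    ],
    Capabilities=["CAPABILITY_IAM"],
)

# Block until provisioning finishes; a failure rolls the stack back,
# keeping the environment immutable rather than partially changed.
cfn.get_waiter("stack_create_complete").wait(StackName="analysis-env-prod")
```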