GE Healthcare: A next-gen architecture that scales across thousands of locations
The writeup that follows is an edited transcript of a session that GE Healthcare delivered at KubeCon Europe.
You can watch the recording of the full session here.
Meet Benjamin Beeman
“I’ve worked at GE Healthcare since 2011, on various teams including diagnostic cardiology, PET/MR, CT Recon, and ultimately the healthcare platform team. The platform team is responsible for building common solutions for all areas of GE. For example, I worked on a common viewer for DICOM medical images across CT, X-Ray and MR.
Now, I’m focused on Edison, our intelligence compute platform. I create Edison health services on our on-premises platform, which we call the Edison Health Link; these services can be deployed on the Health Link or in the Edison cloud. They cover areas such as CT, MR, patient monitoring and many more.”
Challenges of taking healthcare to the edge
Whenever we’re trying to transform technology in the healthcare world, it’s important to remember some of the limitations in this space.
Local regulatory requirements, privacy and uptime
Depending on where you are in the world, there are regional regulations and restrictions. That’s part of the reason why we have the requirement to be able to run without a direct connection to the cloud. We need to be able to deploy and fleet manage with a central source of truth in a disconnected world.
We’ve mentioned Kubernetes on the edge, and in Edison this is what we mean by edge: our platform, which includes both K8s and standalone VM deployments, runs on physical hardware or in data centers located at our client sites, which are medical facilities. I also refer to these as ‘on-premises’ deployments; all of these terms describe the same thing. A local edge location can become disconnected from services hosted at cloud providers, and it needs to be fully capable of running independently if disconnection occurs.
There are also other concerns when we’re monitoring systems and taking data off them. We have to pay close attention to PHI, which is protected health information about a patient; PII, which is personally identifiable information about the patient; and any information that can be correlated to produce PHI. Different regions around the world have unique regulations for medical devices and how you deploy in the area, so we have to deploy solutions that are flexible, capable, and configurable enough to meet the needs of a diverse world. The technologies we deploy into the hospital environment are life-critical, life-saving applications: when they’re up and running, they’re improving patient outcomes. Ensuring proper uptime is crucial to the success of the technology in the hospital world.
Getting close to the patients
Proximity to the patient might not seem relevant, but the closer your compute devices get to the patient, the more costly they become and the more regulatory scrutiny they face.
This proximity can be both to the patient and to the scanner devices themselves. If we’re looking at an MR machine or a CT machine, sometimes these solutions need to be deployed close to the scanner for performance reasons: we’re pulling so much data off the scanner as we monitor patients that, if we moved further away, the latency would become unacceptable for the treatment. As you get further from the patient there are latency issues, but also more opportunities for communication failure.
Long term support
In our highly regulated software environment, we need long-term support: typically two years of development support, then an additional three years of production support as we deploy the solution in the field.
The type of support we need is not simply taking the latest application release where some issues have been fixed; we need support for the specific version we’re using, because incorporating a new version can trigger a whole new development cycle. With the regulations and rigor we need to follow, that can make rolling out a bug fix or a patch for production software too costly.
We’re trying to bring some order to a real melting pot of innovative technology. It spans clouds and data centers, plus thousands of edge devices at hospitals with myriad diverse application deployments.
Key requirements for K8s fleet management
When we went looking for a fleet-management solution and came across Spectro Cloud, there were four basic buckets of challenges that we were trying to solve.
For the lifecycle of our products in the field, we needed a central place to manage system state, and a central source of truth for that state for all of these devices.
We also needed to support configuration for every level of our software stack, including commercial, open-source and internal software.
And for data center and on-premises edge solutions, we needed a simple, lightweight way to host a software repository locally: one that could store every type of software we were delivering, without requiring multiple solutions to get the feature set we needed.
And, lastly, we needed the ability to consistently deploy and manage across every environment, whether that be an appliance, a data center or a cloud, so I can deploy an application, add-on services, platform services and infrastructure, in a supported common way without reinventing the wheel every single time.
A single source of truth for every scenario
For the lifecycle management of this stack, we needed to manage installs, patches and upgrades from a central source of truth for over a thousand edge locations. Every system has a cloud configuration that becomes the central source of truth for what needs to be installed on that system, and which versions of those things are installed.
We have compute resources, we have software components and we have different types of connectivity per site. Some sites are intermittently connected, have poor connectivity, or even have no connectivity at all, and we still have to support that. We don’t want one-off solutions for every single scenario, we want a centralized solution that manages all of these things.
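The pattern described here, prefer the central source of truth but keep working from a locally cached copy when a site is poorly connected or offline, can be pictured with a short sketch. This is illustrative only; `fetch_remote` and the cache file are hypothetical names, not Palette APIs:

```python
import json

def load_desired_spec(fetch_remote, cache_file):
    """Return the desired system state, preferring the central source of
    truth and falling back to the last spec cached on the device.
    Illustrative sketch, not any vendor's actual implementation."""
    try:
        spec = fetch_remote()              # e.g. pull from the central service
        with open(cache_file, "w") as f:
            json.dump(spec, f)             # refresh the local copy while online
    except OSError:                        # disconnected site: use the cache
        with open(cache_file) as f:
            spec = json.load(f)
    return spec
```

The same code path serves every connectivity class: a well-connected site refreshes its cache on every check, an intermittently connected site refreshes when it can, and a fully disconnected site keeps operating from its last known-good specification.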
Lightweight local repos for the software stack
And with all of these system deployments, how do we fleet manage this, how do we ensure a system state? We need to do this across all the types of software in our stack: internal, commercial, and community open source. Our solution deploys containers, the OS, and VMs, so we need support for many different types of artifact.
And if we’re going to have all these different types of software, we’re going to need a single lightweight solution to store these things on premises, because if I don’t have sufficient connectivity I can’t necessarily download the software every time I want to do an install or launch a container pod. I have to store the software locally, but I can’t have that software repository take up a big part of the resources on the local system — remember, resources on a local on-prem device are the most costly resources. It’s prime real-estate!
So, again, we need a simple, single solution that’s lightweight and can provide these software artifacts, any type of artifact that I’m going to deploy with this system, and then manage them in a very lightweight way with the smallest possible footprint.
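The pull-through behavior described above, serve artifacts from local storage and reach upstream only when something is missing and connectivity exists, might look roughly like this. Harbor's proxy cache implements this pattern for real; the class and names here are invented purely for illustration:

```python
import os
import urllib.request

class LocalArtifactCache:
    """Hypothetical pull-through artifact cache: serve from local disk,
    fetching from an upstream repository only on a cache miss while online."""

    def __init__(self, cache_dir, upstream_url=None):
        self.cache_dir = cache_dir
        self.upstream_url = upstream_url   # None models a disconnected site
        os.makedirs(cache_dir, exist_ok=True)

    def get(self, artifact_name):
        local_path = os.path.join(self.cache_dir, artifact_name)
        if os.path.exists(local_path):     # cache hit: no network needed
            return local_path
        if self.upstream_url is None:      # disconnected and not cached
            raise FileNotFoundError(f"{artifact_name} unavailable offline")
        # Connected: fetch once from upstream, then serve locally thereafter
        urllib.request.urlretrieve(
            f"{self.upstream_url}/{artifact_name}", local_path)
        return local_path
```

The point of the sketch is the footprint: one small component answers every artifact request, and the only storage cost on the edge device is the artifacts it actually uses.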
A single pane of glass for all deployments
We’ve talked about many of the challenges of deploying applications in healthcare environments, but most of all when we went out looking for fleet-management solutions, we needed to wrap up the deployment, upgrade and system state in a single solution for ALL of our deployment targets.
I want to work through a single pane of glass to configure my components, whether Helm-based or VM-based, my infrastructure, my K8s layer, and my overall lifecycle management, in a common, standard way that we can share with our modality partners within GE Healthcare, so they can configure the dynamic applications they deliver on the Edison platform.
Palette meets Edison
Now let’s bring it all together. We’ve talked about the challenges we were facing and what we were trying to solve; now we bring in Spectro Cloud. Its Palette platform answered a lot of things for us in this fleet management area. With Spectro Cloud’s declarative model, we can package myriad software, containers and VMs, in a common packaging solution using Cluster Profiles.
Palette is built on a premise that allows for a disconnected system. I can be completely disconnected from the Spectro Cloud Palette SaaS: I configure my system through that SaaS, take that configuration, apply it to a completely disconnected system, and the management on board the system will perform my installs and keep the system state consistent with the specification I pull from the central, GitOps-driven fleet management solution.
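The reconciliation that on-board management performs can be pictured as a simple diff between what is installed and the desired specification. This is an illustrative sketch of the idea, not Palette's actual implementation:

```python
def reconcile(installed, desired_spec):
    """Compute the actions needed to drive a system from its installed
    state toward the desired spec (both are name -> version mappings).
    Illustrative only; real agents also handle ordering, health checks, etc."""
    actions = []
    for name, version in desired_spec.items():
        current = installed.get(name)
        if current is None:
            actions.append(("install", name, version))
        elif current != version:
            actions.append(("upgrade", name, version))
    for name, version in installed.items():
        if name not in desired_spec:       # present locally but no longer desired
            actions.append(("remove", name, version))
    return actions
```

Because the diff is computed entirely on the device against a locally held specification, the loop keeps converging the system to its last known desired state even while the site is cut off from the SaaS.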
For our on-premises software solutions and storage, Spectro Cloud has deployed open source Harbor, enhanced with a Spectro Cloud proxy, so that we can use all the wonderful capabilities of the lightweight Harbor solution to support a software repository for all of our types of software. And because this proxy integrates seamlessly, Spectro Cloud has a flexible model that can support different system configurations.
All of this comes together to help reduce cost and complexity. Reducing the cost of healthcare solutions can reduce the overall cost of healthcare, allow more solutions to be delivered and help more patients, ultimately providing better patient outcomes. As Spectro Cloud continues to become more flexible, on-premises and in the cloud, it becomes the glue that Edison uses for a common cloud-native solution in both environments.