The tactical edge… it ain’t no data center
Following up on our previous post (Unified computing is the key to the Tactical Edge), let’s take another look at what we mean by the “tactical edge” and why the right compute infrastructure is so important for the U.S. military.
The tactical edge is essentially the front line. It’s the forward-deployed area of operations (AO) where the military conducts combat operations, reconnaissance, or other specialized missions. This is where our troops are fighting, gathering intelligence, establishing command and control (C2), and making real-time decisions. We’re talking about remote, rugged environments with limited power, compute, and network connectivity.
This is the environment where traditional virtualization platforms like VMware start to break down. VMware was built for data centers with plenty of power, networking, and storage. It requires multiple components to function properly, consumes a ton of resources, and has become more expensive since the Broadcom acquisition. Simply put … it’s overkill for the edge.
At the same time, the military can’t just get rid of the legacy applications and systems that are mission-critical and still running on VMs. What’s the answer? We need to find a way to support VMs in a modern, scalable, and lightweight way at the edge.
That’s what we’ll cover in the rest of this post.
Why Virtual Machines still matter at the edge
The U.S. military continues to rely on legacy applications for operating some of its most critical infrastructure (heck, there are still mainframes running out there!).
These applications were never designed for containers and probably never will be: think of security scanning tools, logistics software, or C2 applications. Rewriting or containerizing them all would take years, if it’s even possible. But they still need to run in forward-deployed environments.
That’s where VMs come in, letting you package up the servers and applications exactly as they are and move them to wherever you need them, even on small servers on the back of a Humvee.
But what’s even better than running just VMs is running your legacy VM workloads side-by-side with cloud-native containerized applications, all on the same platform. That’s where a solution like VMO shines.
Introducing Spectro Cloud’s Virtual Machine Orchestrator (VMO)
So, how do we run VMs in a way that makes sense for the edge?
This is where VMO comes in. It gives you the ability to run VMs inside a Kubernetes (K8s) cluster, which is a big deal, especially when you’re trying to unify your infrastructure without giving up your existing tools and applications.
VMO is powered by KubeVirt, an open source project. Spectro Cloud takes KubeVirt and wraps it into our Palette platform, which simplifies lifecycle management, provides a user-friendly UI, and handles all the orchestration behind the scenes. The result is a clean, easy way to manage VMs alongside containers, using the same workflows and automation that K8s provides.
Importantly, VMO is built to meet the needs of edge. It is:
- Lightweight. You don’t need dozens of services running to make it work. It was designed to be deployed on smaller infrastructure, even without reliable connectivity, which is huge for disconnected, disrupted, intermittent, and low-bandwidth (DDIL) environments.
- Built for cloud-native workloads with K8s, which means it works with the broader cloud-native ecosystem of networking, observability, security, GitOps, and more. It’s an all-in-one platform.
- More cost-effective than traditional virtualization platforms like VMware or Nutanix. You’ll see a lower total cost of ownership, with no bloated licensing models or surprise costs. You get a modern orchestration capability without breaking the bank.
VMO Architecture Overview
Let’s look at the architecture of VMO so we can understand how all the components fit together. This is key to appreciating why VMO works so well at the edge.
At the core of VMO are Kubernetes, KubeVirt, and Spectro Cloud’s Palette.
- Kubernetes is the orchestrator that handles all the scheduling, networking, scaling, and high availability for workloads.
- KubeVirt is the engine that enables VMs to run as pods with containers in the same Kubernetes cluster.
- Spectro Cloud Palette is the platform that ties it all together, automating both cluster provisioning and lifecycle management and offering a user-friendly UI with GitOps-powered control.
Beyond those three components, VMO is supported by an ecosystem of components that can be customized for specific requirements, but typically includes:
- Canonical MAAS for bare metal OS and Kubernetes provisioning
- Portworx Enterprise and Pure Storage FlashArray for high-performance, resilient storage
- Cilium for high-performance networking using eBPF
- Multus to attach VMs to VLANs (just like a VMware vSwitch)
- MetalLB for assigning IPs to services on bare metal
- Nginx for ingress routing
- Prometheus + Grafana for observability and real-time monitoring

The design allows you to run traditional VMs with direct VLAN access, or hybrid workloads where VMs and containers share the same overlay network. That kind of flexibility is a game-changer at the edge.
One really smart feature is live migration of VMs. Thanks to Portworx and KubeVirt, VMs can be moved between nodes without downtime — no special storage hardware required. And if you’ve worked with VMware DRS, you’ll appreciate that Kubernetes can now do similar balancing using the Descheduler, which automatically moves workloads to maintain cluster health.
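As a rough illustration (assuming live migration is enabled in your KubeVirt configuration, and using a hypothetical VM name), kicking off a migration by hand is just another Kubernetes resource:
apiVersion: kubevirt.io/v1
kind: VirtualMachineInstanceMigration
metadata:
  name: migrate-example-vm
spec:
  # Name of the running VirtualMachineInstance to move to another node
  vmiName: example-vm
Applying this with kubectl moves the running VM to another node while it stays up; virtctl migrate does the same thing from the CLI.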
You also get:
- Snapshot support for VM disks via the CSI snapshot controller (see the sketch after this list)
- Hotplug NICs and Volumes
- Declarative templates for reusable VM configs
- Full GitOps integration for version control and rollback
- VM Migration Assistant to migrate disks and compute from pre-existing VM environments
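Here’s the snapshot sketch mentioned above: a minimal VirtualMachineSnapshot, assuming the CSI snapshot controller and a snapshot-capable StorageClass are installed (the API version varies by KubeVirt release, and the VM name is illustrative):
apiVersion: snapshot.kubevirt.io/v1beta1
kind: VirtualMachineSnapshot
metadata:
  name: snap-example-vm
spec:
  source:
    apiGroup: kubevirt.io
    kind: VirtualMachine
    # The VM whose disks get snapshotted via CSI volume snapshots
    name: example-vm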
It’s a tightly integrated stack that’s not only powerful, but also lean and manageable, even for edge environments where bandwidth is limited and automation is critical.
Operating with VMO
Once you’ve got VMO up and running, day-to-day operations are surprisingly easy, especially if you’re already familiar with Kubernetes concepts. VMs are treated just like any other containerized workload: they’re defined in YAML, managed declaratively, and orchestrated by K8s.
Understanding the VMO resources
DataVolumes
A DataVolume is what manages VM disk images. Think of it as a template or ISO that bootstraps the VM by providing its underlying disk image. Behind the scenes, VMO uses the Containerized Data Importer (CDI) to handle volume creation. You can upload an image using virtctl, and CDI will convert it into a persistent volume that your VMs can boot from. No more manual volume management.
Example YAML configuration of a DataVolume:
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: example-iso
spec:
  source:
    http:
      url: https://example.com/files/super-cool.iso
  storage:
    resources:
      requests:
        storage: 5Gi
This DataVolume gets picked up by the CDI, which creates an ephemeral pod to download the specified source and then writes it to a local Persistent Volume, ensuring other pods and VMs have access to the image.
VirtualMachines
A VirtualMachine is a custom Kubernetes resource that represents a full VM instance, and it’s treated much like any containerized workload. You define things like CPU, memory, disk attachments, and networking in the YAML spec, and VMO handles the rest. KubeVirt picks up the defined resource and runs it using the Kernel-based Virtual Machine (KVM) hypervisor, all within the Kubernetes cluster, giving you a complete cloud-native management experience.
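To make that concrete, here’s a minimal sketch of a VirtualMachine definition that boots from the example-iso DataVolume created earlier; the name and sizing are illustrative:
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: example-vm
spec:
  running: true    # start the VM as soon as it's created
  template:
    spec:
      domain:
        cpu:
          cores: 2
        resources:
          requests:
            memory: 2Gi
        devices:
          disks:
            - name: rootdisk
              disk:
                bus: virtio
      volumes:
        - name: rootdisk
          dataVolume:    # boots from the DataVolume defined above
            name: example-iso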
VMs can run side-by-side with containers and use the same networks, monitoring stacks, and storage without the need for a separate virtualization stack.
Now that we’ve covered the basics of VMO, its architecture, and the resource definitions used to run VMs in K8s, let’s see what this looks like with a real-world example.
Deploying ACAS at the Edge with VMO
One of the most common cybersecurity tools we see in DoD environments is the Assured Compliance Assessment Solution (ACAS), used for vulnerability scanning, configuration assessment, and compliance reporting. It helps commands understand and manage security risks to vital mission-critical systems, and it plays a major role in an environment’s overall security posture.
Typically you’ll see ACAS deployed on a full-blown VM in a datacenter, but with VMO you can deploy ACAS right at the edge, close to where your systems live.
Here’s how that works:
- Upload the ACAS ISO to your VMO cluster using virtctl and the CDI.
virtctl image-upload \
  --image-path=~/CM-296052_acas-rhel-8.7-23.03_x86-64.iso \
  --pvc-name=iso-acas-8.7 \
  --access-mode=ReadOnlyMany \
  --pvc-size=5G \
  --uploadproxy-url=https://x.x.x.x:443 \
  --insecure \
  --wait-secs=240
- Create a VM that references the ISO as a bootable volume.
- Attach the volume just as you would a USB or CD-ROM. You can define this by either specifying the volume and device within the VirtualMachineInstance custom resource, or after creating a blank VM within the VMO Dashboard you can attach the pre-existing volume under Configuration > Storage > Add Disk.
apiVersion: kubevirt.io/v1
kind: VirtualMachineInstance
metadata:
  labels:
    special: vmi-acas
  name: vmi-acas
spec:
  domain:
    cpu:
      cores: 4
    devices:
      disks:
        ...
        - cdrom:
            bus: sata
          name: acas
  volumes:
    ...
    - name: acas
      persistentVolumeClaim:
        claimName: iso-acas-8.7

Once deployed, ACAS can scan local systems, generate compliance reports, and operate completely offline, which is ideal for DDIL environments where mission systems still need to maintain their cybersecurity posture.
Hybrid workloads at the edge
So why run VM workloads in Kubernetes in the first place?
It’s a question we get a lot at Spectro Cloud, and it all comes down to flexibility, efficiency, and security. At the tactical edge, every bit of compute and every watt of power matters. If you’re running both legacy applications and modern containerized services, why would you want to maintain two separate stacks? Having a unified platform to run both is far more efficient.
Traditional virtualization platforms, like VMware, were never designed for the edge, where resources are limited. They’re heavy, complex, and difficult to integrate with modern DevSecOps workflows. Worse, they make it hard to modernize over time because they treat VMs and containers as separate worlds.
But with VMO, you can bring those two worlds together. You can run legacy VMs and cloud-native containerized applications side-by-side, managed by the same Kubernetes control plane. This gives you the ability to support mission-critical legacy systems without giving up the benefits of a cloud-native environment (speed, automation, resiliency, scale). Your legacy VMs will benefit from the same zero trust posture, GitOps pipelines, and networking and observability tools that all the other containerized applications get.
And at the edge, that’s a big deal: your teams spend their time focused on the mission instead of learning and managing multiple tools, building and delivering the software that supports the warfighter.
Benefits of cloud-native networking and observability
The beauty of running VMs inside Kubernetes isn’t just running them side by side; it’s fully integrating them. When VMs are orchestrated by Kubernetes, they inherit all of the same cloud-native benefits that “normal” containerized workloads already enjoy.
Simplified networking
With VMO, VMs connect to your existing container network interface (CNI) just like any other pod. You can assign VLANs using Multus, route traffic with NGINX, and load-balance services with MetalLB. There’s no need for dedicated virtual switches or separate network appliances.
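As a rough sketch (the interface name and VLAN ID are illustrative, and macvlan is just one option), a Multus NetworkAttachmentDefinition describes the VLAN-backed network, and a VM simply references it by name:
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: vlan-100
spec:
  # macvlan on a VLAN subinterface of the node's physical NIC
  config: |
    {
      "cniVersion": "0.3.1",
      "type": "macvlan",
      "master": "eth0.100",
      "mode": "bridge",
      "ipam": { "type": "dhcp" }
    }
A VM attaches to it by adding a Multus network named vlan-100 to its spec, much like picking a port group on a vSwitch.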
Unified observability
VMs can be monitored alongside your existing containers using the same tools like Prometheus and Grafana, which gives you a single dashboard to view everything running in the cluster. You don’t need a separate monitoring stack for VMs.
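For instance, because KubeVirt exports VM metrics in Prometheus format, you can alert on VMs with the same machinery as containers. A minimal sketch, assuming the Prometheus Operator is installed and using KubeVirt’s kubevirt_vmi_memory_resident_bytes metric (verify the metric name against your KubeVirt version):
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: vm-memory-alerts
spec:
  groups:
    - name: vmo-vms
      rules:
        - alert: VMHighResidentMemory
          # fires when any VM's resident memory exceeds ~4 GiB for 10 minutes
          expr: kubevirt_vmi_memory_resident_bytes > 4.3e9
          for: 10m
          labels:
            severity: warning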
Zero Trust from the ground up
Since VMs in VMO live in the same security context as the rest of the K8s environment, you’re able to apply zero trust principles universally across the entire stack, including encrypted workloads, identity-based policies, namespace isolation, and more.
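Because KubeVirt propagates a VMI’s labels to the pod that runs it, standard Kubernetes NetworkPolicies apply to VMs too. Here’s a minimal sketch reusing the vmi-acas label from the ACAS example (the admin-console label is hypothetical):
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: acas-admin-only
spec:
  # Selects the virt-launcher pod running the ACAS VM
  podSelector:
    matchLabels:
      special: vmi-acas
  policyTypes:
    - Ingress
  ingress:
    # Only pods labeled as admin consoles may reach the ACAS VM
    - from:
        - podSelector:
            matchLabels:
              role: admin-console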
With VMO paired with Palette, you get a single unified platform where VMs and containers follow the same rules, which will lead to less complexity, better security, and a more seamless operational experience.
What are you waiting for?
Running Palette and VMO at the tactical edge represents a major step forward for virtualization and cloud-native applications alike.
It allows you to run traditional VM workloads in a lightweight way, in parallel with modern containerized applications, on the rugged, scalable, and cost-effective infrastructure typically found at the edge.
With VMO you get:
- A modern path off VMware that cuts costs and reduces complexity
- A lean infrastructure stack that’s easy to deploy and manage at the edge
- The flexibility to support legacy and next-gen workloads on the same platform
If you’re looking to get off VMware, modernize your infrastructure, or just need a better way to run VMs at the edge, Spectro Cloud has you covered.
So what’s your next step? Book a demo with one of our experts.