Eight capabilities. One operating model.

01

Define once, run across every node

2k8s exposes a Kubernetes API endpoint from the public cloud — a single point of orchestration that abstracts distributed compute nodes into one logical cluster, the same way managed Kubernetes works on public cloud, but extended across edge and on-premises locations. Workloads, configurations, and policies are defined once and reconciled continuously across the footprint through native Kubernetes mechanisms: state reconciliation, workload scheduling, self-healing, and configuration propagation.

At the same time, the platform provides granular control where it matters. Operators can target specific nodes or groups of nodes for workload placement, staged rollouts, and per-location configuration — all within the same API surface. The model is intent-driven at the platform level and precise at the operational level.
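To make the placement model concrete, here is a minimal sketch in plain Python of label-based targeting — the same idea behind Kubernetes label selectors, which let one API surface address the whole fleet or a specific group. The node names and labels are illustrative, not a 2k8s schema:

```python
# Illustrative sketch: label-based placement targeting. An empty selector
# addresses the entire footprint; a narrower one stages a per-location rollout.
# Node names and labels are hypothetical, not part of any 2k8s API.

def select_nodes(nodes, selector):
    """Return nodes whose labels satisfy every key/value in the selector."""
    return [n for n in nodes
            if all(n["labels"].get(k) == v for k, v in selector.items())]

fleet = [
    {"name": "edge-paris-1",  "labels": {"region": "eu", "tier": "edge"}},
    {"name": "edge-lyon-1",   "labels": {"region": "eu", "tier": "edge"}},
    {"name": "dc-virginia-1", "labels": {"region": "us", "tier": "core"}},
]

everywhere = select_nodes(fleet, {})                            # whole footprint
eu_edge    = select_nodes(fleet, {"region": "eu", "tier": "edge"})  # staged rollout
```

The same selector mechanism serves both extremes: intent applied fleet-wide, or precision targeting for a single site.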

02

Built for the edge, not adapted for it

Edge environments differ from centralized datacenters in fundamental ways — constrained hardware, intermittent connectivity, and shared infrastructure are the norm rather than the exception. 2k8s addresses these conditions through an abstraction layer purpose-built for distributed edge deployment.

Metadata, workload specifications, and configuration synchronize from the central cluster toward the edge in a manner tolerant of intermittent communication. Once the required state is present locally, workloads continue running without an active connection — with status and updates buffered until connectivity is restored. The platform’s footprint on edge nodes is deliberately kept small, allowing it to operate on minimal hardware and coexist with other software stacks on the same device.
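The buffering behavior can be sketched in a few lines of Python — a simplified model of an edge agent, not 2k8s internals: workloads keep running from locally cached state, and status reports queue up until the uplink returns.

```python
# Illustrative sketch: an edge agent tolerant of intermittent connectivity.
# Status updates are buffered locally while the uplink is down and flushed
# to the central cluster once it is restored. A simplified model only.

class EdgeAgent:
    def __init__(self):
        self.buffer = []     # status updates awaiting delivery
        self.delivered = []  # what the central cluster has received

    def report(self, status, connected):
        self.buffer.append(status)
        if connected:
            self.flush()

    def flush(self):
        self.delivered.extend(self.buffer)
        self.buffer.clear()

agent = EdgeAgent()
agent.report("pod-ready",   connected=False)  # uplink down: buffered locally
agent.report("pod-healthy", connected=False)  # still running, still buffering
agent.report("pod-healthy", connected=True)   # reconnect: backlog flushes
```

The key property is that a lost connection degrades reporting, not the workload itself.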

03

Observability that scales with the footprint

2k8s bundles open-source observability tooling into the platform and includes a managed cloud backend for storing and serving the data — highly available, tuned for performance, and built to scale with both the footprint and data volume. There is no separate observability stack to deploy or manage, and no backend to size and maintain.

The observability inventory — sites, nodes, and their topology — synchronizes with the orchestration layer, so the monitoring surface tracks the compute footprint without manual reconciliation. Lightweight agents on each node collect metrics and logs, which are aggregated, normalized, and stored in the managed cloud backend. Platform tenants access their own metrics and logs through standard API endpoints. The entire stack is configured through Kubernetes CRDs alongside the workloads it monitors, including alerting rules with notifications delivered by the platform.
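In the spirit of alerting rules declared next to the workloads they watch, here is a minimal Python sketch of rule evaluation. The rule and sample shapes are hypothetical, not a 2k8s schema:

```python
# Illustrative sketch: evaluating a declarative alerting rule against
# collected samples. The field names are hypothetical, not a 2k8s schema.

samples = [
    {"node": "edge-1", "metric": "cpu", "value": 0.42},
    {"node": "edge-2", "metric": "cpu", "value": 0.58},
    {"node": "edge-3", "metric": "cpu", "value": 0.91},
]

def evaluate(rule, samples):
    """Return (alert, node) pairs for every sample breaching the threshold."""
    return [
        (rule["alert"], s["node"])
        for s in samples
        if s["metric"] == rule["metric"] and s["value"] > rule["threshold"]
    ]

high_cpu = {"alert": "HighCPU", "metric": "cpu", "threshold": 0.8}
firing = evaluate(high_cpu, samples)  # only edge-3 breaches the threshold
```

Because the rule is data rather than code, it can be declared once and applied uniformly as the monitored footprint grows.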

04

Architected for internet scale

The cloud backend behind 2k8s is built to stay available and perform as distributed environments grow. Redundant load balancers keep API and management endpoints continuously available. Clustered services and scalable storage allow core platform components to fail over cleanly and continue scaling as node counts and telemetry volumes increase.

That same architecture supports both small deployments and globally distributed footprints spanning hundreds or thousands of locations, without requiring a different platform configuration or management model at each stage.

05

Native Kubernetes — and the ecosystem that comes with it

2k8s implements the standard Kubernetes API. Workloads packaged for enterprise or public cloud Kubernetes can be deployed across distributed locations with minimal change, and the operational processes around them — deployment pipelines, testing, and promotion workflows — can carry over naturally. So can team expertise: developers and DevOps teams already familiar with Kubernetes can be productive on 2k8s without retraining.

At a more fundamental level, Kubernetes’ declarative model — where operators define the desired state and the system continuously reconciles toward it — is particularly well suited to distributed environments. Configuration can be applied once and propagated across the footprint, with each node converging independently toward the declared intent, recovering automatically from failures without per-site intervention.
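Reconciliation is easy to picture in miniature. This Python sketch (a conceptual model, not Kubernetes or 2k8s code) shows the essential loop: each node compares its actual state to the declared intent and converges independently, and once converged, further passes are no-ops.

```python
# Illustrative sketch of declarative reconciliation: a node moves its actual
# state toward the declared intent and reports whether anything changed.
# Workload names and replica counts are hypothetical.

desired = {"web": 3, "agent": 1}  # declared once, centrally

def reconcile(actual, desired):
    """One reconciliation pass against the declared intent."""
    changed = False
    for workload, want in desired.items():
        if actual.get(workload, 0) != want:
            actual[workload] = want  # scale toward the intent
            changed = True
    return changed

node_a = {"web": 1}                      # degraded, e.g. after a local failure
converging = reconcile(node_a, desired)  # repairs the drift
steady     = reconcile(node_a, desired)  # already converged: nothing to do
```

Idempotence is the point: the same declaration can be re-applied everywhere, forever, and only drift produces action.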

A standard API surface also means the broader Kubernetes ecosystem — commercial and open-source software built for Kubernetes — is available as an integration option rather than a custom development effort. Separately, Kubernetes’ own extensibility mechanisms — CRDs, admission webhooks, operators, and configuration propagation — become tools for building advanced distributed functionality on top of the platform, not just for managing it.

06

Adopt incrementally — no forklift required

A distributed platform is only practical if it can be introduced without replacing everything already in place. 2k8s deploys alongside existing infrastructure — bare metal, VMs, or other software stacks — adding a container-based application layer that integrates with current solutions rather than displacing them. New workloads can start on 2k8s while existing ones migrate at whatever pace makes sense.

The platform makes few assumptions about the underlying environment. The edge node can be bare metal — from ARM-based consumer electronics and IoT gateways to dedicated rack-mounted servers — a VM, or a public cloud instance. Where hardware is constrained, 2k8s can share compute resources with other software stacks on the same node. It can also integrate with third-party CRI-compliant container runtimes where required. The platform fits the infrastructure — not the other way around.

07

Routing that follows the workload

2k8s includes a programmable request routing service that directs end-user requests to the most appropriate workload instance across the footprint. Workload sets defined in Kubernetes are exposed through policy-driven service descriptors, giving clients a unified access point while abstracting away the underlying infrastructure.

Because the routing service is Kubernetes-native, it integrates directly with deployed workloads and node topology, continuously discovering nodes, the workloads running on them, and their health through platform metadata. Its programmable interface allows routing logic to be tailored to specific requirements — whether optimizing for proximity, capacity, policy, or custom business rules — creating a dynamic, adaptive layer for managing traffic across distributed environments.
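A pluggable routing policy can be sketched in a few lines of Python. The instance data and policy functions here are illustrative only — the point is that unhealthy instances are filtered first, and the remaining choice is a swappable policy:

```python
# Illustrative sketch: policy-driven request routing. Health gates the
# candidate set; the policy then picks among healthy instances.
# Instance records and policies are hypothetical, not a 2k8s API.

instances = [
    {"node": "edge-paris",  "healthy": True,  "distance_km": 15,  "load": 0.7},
    {"node": "edge-lyon",   "healthy": True,  "distance_km": 390, "load": 0.2},
    {"node": "edge-berlin", "healthy": False, "distance_km": 25,  "load": 0.1},
]

def route(instances, policy):
    """Send the request to the best healthy instance under the given policy."""
    candidates = [i for i in instances if i["healthy"]]
    return min(candidates, key=policy) if candidates else None

nearest      = route(instances, policy=lambda i: i["distance_km"])  # proximity
least_loaded = route(instances, policy=lambda i: i["load"])         # capacity
```

Swapping the policy function — proximity, capacity, or a custom business rule — changes routing behavior without touching discovery or health tracking.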

08

Isolation and access control across every layer

2k8s enforces security and isolation through a unified identity and access model that spans the entire platform. A common IAM framework governs access across platform services — from the Kubernetes API and orchestration controls, through observability and operational interfaces, down to the workloads themselves.

At the orchestration level, role-based access control governs permissions across Kubernetes namespaces, giving administrators precise control over who can deploy, manage, or observe resources. At the workload level, tenants and applications are isolated across compute and network boundaries. The same identity and policy model extends to observability services, ensuring that metrics, logs, and alerts are scoped to each tenant’s own workloads. The result is a consistent security framework: one set of policies and identity standards applied across every layer of the platform.
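The value of one identity model is that the same bindings answer questions at different layers. This Python sketch (roles, namespaces, and log records are all hypothetical) shows a single policy table serving both an orchestration check and an observability scope:

```python
# Illustrative sketch: one set of identity bindings governing both who may
# deploy into a namespace and which logs a tenant may read.
# Identities, namespaces, and verbs are hypothetical examples.

bindings = {
    ("deployer@acme", "ns-acme"): {"deploy", "observe"},
    ("viewer@acme",   "ns-acme"): {"observe"},
}

def allowed(identity, namespace, verb):
    """RBAC-style check: is this verb granted to this identity here?"""
    return verb in bindings.get((identity, namespace), set())

logs = [
    {"namespace": "ns-acme",   "line": "request served"},
    {"namespace": "ns-globex", "line": "cache miss"},
]

def visible_logs(identity, logs):
    """Observability scoped by the same bindings: tenants see only their own."""
    return [l for l in logs if allowed(identity, l["namespace"], "observe")]
```

Because both layers consult the same bindings, there is no second policy system to drift out of sync.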

Ready to see what 2k8s can do for your infrastructure?

Tell us what you’re working on. We’ll show you how the platform fits.