MicroK8s
The smallest, fastest, fully upstream Kubernetes for edge, IoT, and developer workstations.

The industry-standard container orchestration platform for automating deployment, scaling, and management of containerized applications.
Kubernetes, often abbreviated as K8s, is a production-grade open-source container orchestration system. Originally developed by Google and now maintained by the Cloud Native Computing Foundation (CNCF), it has become the de facto operating system of the cloud. Its architecture is built around a declarative model: users define the desired state of their infrastructure, and the Kubernetes control plane works continuously to maintain that state. By 2026, Kubernetes has evolved to support heterogeneous workloads, including GPU partitioning for AI/ML workloads and sovereign cloud requirements. Its core components (the API server, etcd, the scheduler, and the controller manager) provide a highly resilient environment capable of self-healing, horizontal scaling, and zero-downtime rollouts. As organizations shift toward platform engineering, Kubernetes serves as the foundational layer for internal developer platforms (IDPs), enabling granular resource allocation and multi-tenant isolation across hybrid and multi-cloud environments.
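To make the declarative model concrete, here is a minimal sketch using the official Kubernetes Python client (the kubernetes package): the desired state is written as an object and handed to the API server, and the control plane converges the cluster toward it. The namespace, names, image, and replica count are illustrative assumptions, and a reachable cluster with a kubeconfig is assumed.

```python
from kubernetes import client, config

# Load credentials from ~/.kube/config (use load_incluster_config() inside a pod).
config.load_kube_config()

# Desired state: three replicas of a stateless web server.
deployment = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "web"},  # illustrative name
    "spec": {
        "replicas": 3,
        "selector": {"matchLabels": {"app": "web"}},
        "template": {
            "metadata": {"labels": {"app": "web"}},
            "spec": {
                "containers": [
                    {"name": "web", "image": "nginx:1.27"}  # illustrative image
                ]
            },
        },
    },
}

# The API server records the object in etcd; the scheduler places the pods and
# the controller manager keeps the observed state converging on the spec.
client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```

If a pod is deleted or a node disappears, the Deployment controller recreates replicas until the observed state again matches the declared three.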
Automatically restarts containers that fail, replaces and reschedules containers when nodes die, and kills containers that don't respond to user-defined health checks.
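As a sketch of a user-defined health check, the example below attaches an HTTP liveness probe to a Pod so the kubelet restarts the container when the probe fails repeatedly. The probe path, port, timings, and names are illustrative assumptions.

```python
from kubernetes import client, config

config.load_kube_config()

# Liveness probe: if the endpoint fails three consecutive checks, the kubelet
# kills and restarts the container (node-level self-healing).
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "web-probe-demo"},  # illustrative name
    "spec": {
        "containers": [{
            "name": "web",
            "image": "nginx:1.27",  # illustrative image; real apps often expose a dedicated /healthz
            "livenessProbe": {
                "httpGet": {"path": "/", "port": 80},
                "initialDelaySeconds": 5,
                "periodSeconds": 10,
                "failureThreshold": 3,
            },
        }],
    },
}

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```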
Adjusts the number of pod replicas based on observed CPU utilization or custom metrics such as requests per second.
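A sketch of that behaviour with a HorizontalPodAutoscaler targeting 70% average CPU across 2 to 10 replicas. The target Deployment name and thresholds are assumptions, and a client release that exposes the autoscaling/v2 API (AutoscalingV2Api) is assumed.

```python
from kubernetes import client, config

config.load_kube_config()

# Scale the "web" Deployment between 2 and 10 replicas so that average CPU
# utilization stays around 70%.
hpa = {
    "apiVersion": "autoscaling/v2",
    "kind": "HorizontalPodAutoscaler",
    "metadata": {"name": "web-hpa"},  # illustrative name
    "spec": {
        "scaleTargetRef": {"apiVersion": "apps/v1", "kind": "Deployment", "name": "web"},
        "minReplicas": 2,
        "maxReplicas": 10,
        "metrics": [{
            "type": "Resource",
            "resource": {
                "name": "cpu",
                "target": {"type": "Utilization", "averageUtilization": 70},
            },
        }],
    },
}

client.AutoscalingV2Api().create_namespaced_horizontal_pod_autoscaler(
    namespace="default", body=hpa
)
```

Custom metrics such as requests per second follow the same pattern with a Pods or Object metric type, provided a metrics adapter exposes them to the cluster.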
Stores and manages sensitive information (passwords, OAuth tokens, SSH keys) separately from container images.
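A hedged sketch of creating a Secret and referencing it from a container at runtime, so the credential never lives in the image. The secret name, key, and placeholder value are illustrative assumptions.

```python
from kubernetes import client, config

config.load_kube_config()

# Store a credential in a Secret instead of baking it into the container image.
secret = {
    "apiVersion": "v1",
    "kind": "Secret",
    "metadata": {"name": "db-credentials"},        # illustrative name
    "type": "Opaque",
    "stringData": {"password": "s3cr3t-example"},  # placeholder value
}
client.CoreV1Api().create_namespaced_secret(namespace="default", body=secret)

# A container then references the Secret at runtime, for example as an env var:
env_ref = {
    "name": "DB_PASSWORD",
    "valueFrom": {"secretKeyRef": {"name": "db-credentials", "key": "password"}},
}
```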
Progressively rolls out changes to an application or its configuration while monitoring health, with the ability to revert to a previous revision if health checks fail.
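A sketch of a rolling update, assuming the "web" Deployment from the earlier example: a strategy that limits disruption, then an image change that triggers the progressive rollout. The percentages and image tag are illustrative assumptions.

```python
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

# Constrain the rollout: at most 25% of replicas unavailable, at most 25% extra.
strategy_patch = {
    "spec": {
        "strategy": {
            "type": "RollingUpdate",
            "rollingUpdate": {"maxUnavailable": "25%", "maxSurge": "25%"},
        }
    }
}
apps.patch_namespaced_deployment(name="web", namespace="default", body=strategy_patch)

# Changing the image creates a new revision, rolled out pod by pod while
# readiness checks gate traffic to the new pods.
image_patch = {
    "spec": {
        "template": {
            "spec": {"containers": [{"name": "web", "image": "nginx:1.28"}]}
        }
    }
}
apps.patch_namespaced_deployment(name="web", namespace="default", body=image_patch)

# If the new revision misbehaves, `kubectl rollout undo deployment/web`
# reverts to the previous ReplicaSet revision.
```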
Automatically mounts a storage system of your choice, such as local storage, public cloud disks (AWS EBS, Azure Disk), or network storage (NFS, Ceph).
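A sketch of requesting storage declaratively with a PersistentVolumeClaim and mounting it into a pod; the claim name, size, and mount path are illustrative assumptions, and the cluster's default StorageClass decides what actually backs the volume.

```python
from kubernetes import client, config

config.load_kube_config()

# Ask for storage declaratively; the default StorageClass decides whether this
# becomes a local path, an EBS/Azure Disk volume, NFS, Ceph, and so on.
pvc = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "data"},  # illustrative name
    "spec": {
        "accessModes": ["ReadWriteOnce"],
        "resources": {"requests": {"storage": "1Gi"}},
        # "storageClassName": "microk8s-hostpath",  # e.g. with the MicroK8s hostpath-storage addon
    },
}
client.CoreV1Api().create_namespaced_persistent_volume_claim(namespace="default", body=pvc)

# A pod spec then mounts the claim by name:
pod_spec_snippet = {
    "volumes": [{"name": "data", "persistentVolumeClaim": {"claimName": "data"}}],
    "containers": [{
        "name": "web",
        "image": "nginx:1.27",
        "volumeMounts": [{"name": "data", "mountPath": "/data"}],
    }],
}
```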
Allocation of both IPv4 and IPv6 addresses to Pods and Services.
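A sketch of requesting both address families on a Service; the names are illustrative assumptions, and the cluster must have dual-stack networking enabled for the IPv6 ClusterIP to be allocated.

```python
from kubernetes import client, config

config.load_kube_config()

# A Service that asks for both IPv4 and IPv6 ClusterIPs.
service = {
    "apiVersion": "v1",
    "kind": "Service",
    "metadata": {"name": "web"},  # illustrative name
    "spec": {
        "selector": {"app": "web"},
        "ports": [{"port": 80, "targetPort": 80}],
        "ipFamilyPolicy": "PreferDualStack",
        "ipFamilies": ["IPv4", "IPv6"],
    },
}
client.CoreV1Api().create_namespaced_service(namespace="default", body=service)
```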
Advanced scheduling logic that allows users to constrain which nodes a pod can be scheduled on based on labels or workload requirements.
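A sketch of constraining placement with node affinity: only nodes carrying an assumed accelerator=nvidia-gpu label are eligible for this pod. The label key, values, and names are illustrative assumptions.

```python
from kubernetes import client, config

config.load_kube_config()

# Require nodes labelled accelerator=nvidia-gpu for this pod.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "gpu-job"},  # illustrative name
    "spec": {
        "affinity": {
            "nodeAffinity": {
                "requiredDuringSchedulingIgnoredDuringExecution": {
                    "nodeSelectorTerms": [{
                        "matchExpressions": [{
                            "key": "accelerator",      # assumed node label
                            "operator": "In",
                            "values": ["nvidia-gpu"],
                        }]
                    }]
                }
            }
        },
        "containers": [{
            "name": "job",
            "image": "python:3.12-slim",  # illustrative image
            "command": ["python", "-c", "print('scheduled on a GPU node')"],
        }],
    },
}
client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```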
Monolithic applications are difficult to scale and update.
Large Language Models require massive burstable compute for inference.
Applications must be run differently on-prem than in the cloud, creating operational inconsistency.