Knative development without Istio
Knative Serving's API allows for swapping Istio for another, possibly leaner, ingress implementation. Gloo and Contour are two alternatives among others.
When I entered the magical world of Knative last year, I gulped after discovering the wall of YAML composing the project’s release. One kubectl apply later, my helpless MacBook Air was on its knees, unable to handle the army of services I was throwing at it, until this fatal scheduler event eventually occurred: “No nodes are available that match all of the following predicates: Insufficient memory.”
It turns out Knative itself is relatively lean. The culprit, against my expectations, was Istio and its greedy Pilot component. If you clicked that last link, you will have noticed the gigabyte-range memory requirement per Pilot instance which, ironically, applies to “small” setups. Throw a second instance into the mix, as configured in the default Istio setup, and you end up with half of your average laptop’s memory allocated to a software component whose sole purpose is to back the actual platform you were trying to deploy. Ouch.
Why replace Istio at all? A fair question to ask. I may have given Istio a bad rap in my introduction but, truth be told, Istio is currently the most versatile service mesh for Kubernetes. The way Istio delivers a secure and homogeneous service mesh for containerized workloads, together with powerful telemetry features, on top of almost any protocol is quite magical. Knative was designed from the ground up to run side by side with Istio, and using Istio provides a guarantee that all the bleeding-edge Knative features will work in your environment.
But for development purposes, cost efficiency, speed and operability generally outweigh the benefits provided by a full-fledged service mesh, especially when the focus is not network interoperability or security. At the edge, an API gateway should be sufficient and, within the cluster, Kubernetes’ internal network should have us covered. Let’s find out what Knative truly requires in order to serve north-south traffic.
The runtime contract for Knative Serving defines prerequisites that ingress controllers have to meet in order to route inbound traffic to Knative Services. Essentially, an ingress gateway needs to be able to manipulate a few critical HTTP headers and to provide protocol enhancements such as HTTPS termination and HTTP/2 transport with prior knowledge.
But most importantly, Knative ingress controllers need to satisfy Knative’s own Ingress API, which is not to be confused with Kubernetes’ Ingress resource. Each Knative Service has an associated Route that defines how traffic should be distributed to the different revisions of this Service. That Route is used as the basis for generating a unique Ingress configuration per Service, which ingress controllers in turn consume and translate to their own primitives.
To enumerate a few features from the Ingress spec linked above, implementations are expected to provide name-based routing and traffic splitting over multiple backends (Service revisions, e.g. for canary deployments), to support injecting arbitrary headers into incoming requests (e.g. Knative-Serving-Revision, to let the autoscaler know which revision is to be woken up), and to provide different types of visibility (e.g. restricting traffic to clients located inside the cluster).
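To illustrate the traffic-splitting feature, here is a sketch of how a Knative Service can pin shares of traffic to named revisions at the Route level; the revision names below are hypothetical:

```yaml
apiVersion: serving.knative.dev/v1alpha1
kind: Service
metadata:
  name: helloworld-go
spec:
  template:
    spec:
      containers:
      - image: gcr.io/knative-samples/helloworld-go
  traffic:
  # 90% of requests keep hitting the stable revision,
  # 10% are routed to a canary (revision names are hypothetical)
  - revisionName: helloworld-go-zf59x
    percent: 90
  - revisionName: helloworld-go-canary
    percent: 10
```

The ingress controller ultimately materializes this split as weighted backends, as we will see in the generated Gloo configuration later in this post.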
It is worth mentioning that determining the protocol selected for a Knative Service is currently a bit more subtle than reading an attribute from the Ingress configuration, since that information comes directly from the Pod template declared inside the Service itself. This somewhat inconsistent API should improve in the near future with the adoption of Kubernetes proposals such as Adding AppProtocol to Services but, for the time being, gateway implementations must support that little trick as well.
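Concretely, the protocol hint lives in the name of the container port declared in the Service’s Pod template: naming it h2c selects HTTP/2 with prior knowledge, while http1 (the default) selects HTTP/1.1. A minimal sketch, with a hypothetical image and Service name:

```yaml
apiVersion: serving.knative.dev/v1alpha1
kind: Service
metadata:
  name: grpc-server             # hypothetical name
spec:
  template:
    spec:
      containers:
      - image: example.com/my-grpc-server   # hypothetical image
        ports:
        # the port *name* acts as the protocol selector:
        # "h2c" for HTTP/2 with prior knowledge, "http1" for HTTP/1.1
        - name: h2c
          containerPort: 8080
```

This is the attribute a gateway implementation has to fish out of the Pod template rather than out of the Ingress object itself.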
A solid candidate: Gloo
Now back to the initial statement, and my quest for an alternative to Istio in my local development environment. After spending a few hours familiarizing myself with the Knative Serving ecosystem, I stumbled upon a pretty legitimate project called Gloo that satisfies the conditions described above. Gloo is, to quote its authors, a “Kubernetes-native ingress controller, and next-generation API gateway”. In a nutshell, Gloo provides advanced API gateway features similar to Istio, without the service mesh overhead. Music to my ears.
The Gloo project is actually endorsed by Knative. It not only has its own installation instructions featured in the Knative documentation, but is also part of Knative’s end-to-end test suite to ensure things don’t break along the way.
Another aspect that convinced me to give the project a try is its simple monolithic architecture and ease of deployment (oh, and also, I promised). Gloo has its own command-line utility, glooctl (pronounced “gloo-cuddle”, wink wink), which can install the solution in a single command within seconds. There are no complicated parameters to dig out of the documentation or long Helm charts to navigate; it’s all self-contained. WIN.
$ glooctl install knative --install-knative=false
Creating namespace gloo-system... Done.
Starting Gloo installation...
Gloo was successfully installed!
The command above deploys the Gloo control plane together with two proxies for Knative, one internal and one external, to satisfy both public and private Knative Services. I set --install-knative=false because I want to deploy Knative Serving myself from a development branch, instead of from the latest stable release.
The gloo controller is the only component that defines resource requests. I would have appreciated seeing some default values set on the proxies as well, but I have no intention of stressing my system for the time being anyway. Still, I find the small number of deployed Pods very satisfying after a few months of using Istio in my local environment.
$ kubectl describe node microk8s
...
Non-terminated Pods:  (6 in total)
  Namespace    Name                    CPU Requests  CPU Limits  Memory Requests  Memory Limits
  ---------    ----                    ------------  ----------  ---------------  -------------
  gloo-system  discovery               0 (0%)        0 (0%)      0 (0%)           0 (0%)
  gloo-system  gloo                    500m (8%)     0 (0%)      256Mi (2%)       0 (0%)
  gloo-system  ingress                 0 (0%)        0 (0%)      0 (0%)           0 (0%)
  gloo-system  knative-external-proxy  0 (0%)        0 (0%)      0 (0%)           0 (0%)
  gloo-system  knative-internal-proxy  0 (0%)        0 (0%)      0 (0%)           0 (0%)
  kube-system  coredns                 100m (1%)     0 (0%)      70Mi (0%)        170Mi (1%)
Compared to Istio, the number of custom resources deployed by Gloo is insignificant due to the absence of service mesh features.
$ kubectl get crd
NAME
authconfigs.enterprise.gloo.solo.io
gateways.gateway.solo.io
proxies.gloo.solo.io
routetables.gateway.solo.io
settings.gloo.solo.io
upstreamgroups.gloo.solo.io
upstreams.gloo.solo.io
virtualservices.gateway.solo.io
There are other reasons to find Gloo attractive in my opinion. Unlike the Istio integration in Knative, which relies on a separate controller to bridge the gaps between the gateway and the platform, the Gloo control plane embeds a translator that transparently converts Knative Ingress configurations to Gloo Proxy objects. Gloo proxies are easy to reason about in a Knative setup: there is one Proxy for external traffic (Services exposed to the outside world), and one for private traffic (Services exposed only to cluster-local clients), very much like Istio’s ingress gateways. Each proxy is backed by a distinct Kubernetes Deployment under the hood, and each proxy can serve ingress traffic for any number of Knative Services, from zero to many.
$ kubectl get proxies.gloo.solo.io -A
NAMESPACE     NAME
gloo-system   knative-external-proxy
gloo-system   knative-internal-proxy
As a demonstration, you will find below a YAML representation of an
Ingress API object corresponding to a simple
“Hello World” Knative Service, and the corresponding external Gloo
Proxy populated on the fly by Gloo. Notice the
different aspects of the
Ingress API discussed previously: HTTP host names, headers, traffic splitting and visibility.
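For context, an Ingress like the one below stems from a plain “Hello World” Service along these lines; this is a sketch based on the public Knative sample app, with the TARGET value assumed from the response we will see further down:

```yaml
apiVersion: serving.knative.dev/v1alpha1
kind: Service
metadata:
  name: helloworld-go
  namespace: default
spec:
  template:
    spec:
      containers:
      # public Knative sample image; responds with "Hello ${TARGET}!"
      - image: gcr.io/knative-samples/helloworld-go
        env:
        - name: TARGET
          value: "Go Sample v1"
```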
# kubectl get ingresses.networking.internal.knative.dev helloworld-go -o yaml
---
apiVersion: networking.internal.knative.dev/v1alpha1
kind: Ingress
metadata:
  name: helloworld-go
  namespace: default
  labels:
    serving.knative.dev/route: helloworld-go
    serving.knative.dev/routeNamespace: default
    serving.knative.dev/service: helloworld-go
  ownerReferences:
  - apiVersion: serving.knative.dev/v1alpha1
    kind: Route
    name: helloworld-go
spec:
  rules:
  - hosts:
    - helloworld-go.default.svc.cluster.local
    - helloworld-go.default.example.com
    http:
      paths:
      - retries:
          attempts: 3
          perTryTimeout: 10m0s
        splits:
        - appendHeaders:
            Knative-Serving-Namespace: default
            Knative-Serving-Revision: helloworld-go-zf59x
          percent: 100
          serviceName: helloworld-go-zf59x
          serviceNamespace: default
          servicePort: 80
        timeout: 10m0s
    visibility: ExternalIP
  visibility: ExternalIP
status:
  loadBalancer:
    ingress:
    - domainInternal: knative-external-proxy.gloo-system.svc.cluster.local
  privateLoadBalancer:
    ingress:
    - domainInternal: knative-internal-proxy.gloo-system.svc.cluster.local
  publicLoadBalancer:
    ingress:
    - domainInternal: knative-external-proxy.gloo-system.svc.cluster.local
The same attributes can be found in the Gloo
Proxy, just in a different shape:
# kubectl -n gloo-system get proxies.gloo.solo.io knative-external-proxy -o yaml
---
apiVersion: gloo.solo.io/v1
kind: Proxy
metadata:
  name: knative-external-proxy
  namespace: gloo-system
  labels:
    created_by: gloo-knative-translator
spec:
  listeners:
  - name: http
    bindAddress: '::'
    bindPort: 80
    httpListener:
      virtualHosts:
      - name: default.helloworld-go-0
        domains:
        - helloworld-go.default.svc.cluster.local
        - helloworld-go.default.svc.cluster.local:80
        - helloworld-go.default.svc
        - helloworld-go.default.svc:80
        - helloworld-go.default
        - helloworld-go.default:80
        - helloworld-go.default.example.com
        - helloworld-go.default.example.com:80
        routes:
        - matchers:
          - regex: .*
          options:
            retries:
              numRetries: 3
              perTryTimeout: 600s
            timeout: 600s
          routeAction:
            multi:
              destinations:
              - destination:
                  kube:
                    port: 80
                    ref:
                      name: helloworld-go-zf59x
                      namespace: default
                options:
                  headerManipulation:
                    requestHeadersToAdd:
                    - header:
                        key: Knative-Serving-Namespace
                        value: default
                    - header:
                        key: Knative-Serving-Revision
                        value: helloworld-go-zf59x
                weight: 100
Since I am not running Istio in my local cluster, I was able to exclude the files pertaining only to Istio when I deployed Knative Serving from development manifests.
We can validate that our Knative Service gets activated (scales from zero) and receives traffic by sending an HTTP request to the Gloo proxy.
$ kubectl get services.serving.knative.dev
NAME            URL                                        READY
helloworld-go   http://helloworld-go.default.example.com   True
I did not explicitly request the visibility of this Service to be cluster-local, so ingress traffic is served by the external proxy:
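For the record, opting into cluster-local visibility is a matter of labeling the Knative Service, which moves its ingress traffic behind the internal proxy instead. A sketch, assuming the standard Knative visibility label:

```yaml
apiVersion: serving.knative.dev/v1alpha1
kind: Service
metadata:
  name: helloworld-go
  labels:
    # restricts the Service to clients inside the cluster; its
    # Ingress would then be served by knative-internal-proxy
    serving.knative.dev/visibility: cluster-local
spec:
  template:
    spec:
      containers:
      - image: gcr.io/knative-samples/helloworld-go
```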
$ kubectl -n gloo-system get svc knative-external-proxy
NAME                     TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)
knative-external-proxy   LoadBalancer   10.152.183.248   203.0.113.1   80:32646/TCP,443:31851/TCP
The request is handled by the proxy and forwarded to the activator, which dispatches it to the Hello World service as soon as a Pod becomes ready. Everything works the same way as with Istio from a developer’s perspective. Noice!
$ curl http://10.152.183.248 -D- -H 'Host: helloworld-go.default.example.com'
HTTP/1.1 200 OK
content-length: 20
content-type: text/plain; charset=utf-8
date: Fri, 24 Jan 2020 18:13:28 GMT
x-envoy-upstream-service-time: 2
server: envoy

Hello Go Sample v1!
As a developer who runs multiple platforms on top of Kubernetes, I value lean solutions which deliver just the feature set I need and let me focus on what matters, without diverting my attention to infrastructure-related challenges. I find the integration between Knative and Gloo to work particularly well in this context. A persona like mine will appreciate the choices offered by Knative’s swappable-battery approach to ingress management, where it is possible to opt out of the heavy but battle-tested service mesh whenever it makes sense, without sacrificing any essential core functionality.
I am not the first person to consider such an alternative. Matt Moore, one of the fathers of Knative and lead of the Serving layer, recently came up with a custom distribution called mink which delegates the Ingress implementation to Contour, a Kubernetes ingress controller powered by Envoy, the same service proxy that powers Gloo.
Simplicity is a key factor in driving the adoption of an open-source platform like Knative, and I hope a project like Gloo becomes the default ingress gateway for Knative in the future, leaving the complexity of Istio to users who have a real need for such an advanced network layer.
Did you experiment with an alternative ingress gateway for Knative yourself? Please share your impressions in the comments below.
Post picture by Johannes Plenio