The Service Mesh Mystery – Part 2

Introduction to Istio

Authored by:

Oren Penso (Twitter: @openso)
Roie Ben-haim (Twitter: @roie9876)

In our previous blog, The Service Mesh Mystery – Part 1, we covered the architectural shift of applications from monoliths to microservices, the concept of a service mesh, and the new challenges this shift raised. In this blog, we will focus on the open-source projects (Istio and Envoy) that overcome those challenges.

Istio project

Istio is an open platform that allows you to “Connect, secure, control, and observe micro-services”. You can read more on the project web page: https://istio.io/

Three companies founded the project in 2017: Google, IBM, and Lyft.

A quick look at the project’s GitHub page shows around 100+ contributors; the project gets a lot of traction.

Istio is based on Envoy as the networking data plane – you can read more on Envoy at https://www.envoyproxy.io/.

Networking Overview

A service mesh has very clear demarcation points for where traffic enters and exits the mesh. In the diagram below, traffic gets into the mesh via a component called the ingress gateway (which is an Envoy proxy); traffic that originates outside the service mesh and enters through this gateway returns via the same ingress gateway.

A service running inside the service mesh (for example, Service B) can originate traffic to external services (for example, YouTube). We can program the service mesh to control how this traffic leaves the mesh via the egress gateway.
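Programming that behavior starts by registering the external host with the mesh through a ServiceEntry, one of the Istio CRDs you will see in the install output later in this post. A minimal sketch, assuming we want to allow HTTPS traffic to YouTube:

apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: youtube-ext
spec:
  hosts:
  - www.youtube.com        # external hostname the mesh is allowed to reach
  location: MESH_EXTERNAL  # the host lives outside the mesh
  resolution: DNS
  ports:
  - number: 443
    name: https
    protocol: HTTPS

Routing that traffic through the egress gateway is then configured with a Gateway and VirtualService on top of this entry.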

Istio Resources

The Istio project runs inside Kubernetes as a set of Custom Resource Definitions – CRDs:

https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/

CRDs allow Istio to extend the native Kubernetes API and create new resource types inside Kubernetes.

Istio is basically defined by three resource types:

  • Gateway
  • Virtual Service
  • Destination Rule

(There are many more resource types; these three are just examples.)

Each resource type is a unique object in Kubernetes; they are related and connected to one another as shown in the diagram below.

The Gateway object is the first one to configure; it contains basic information such as which hosts the ingress gateway needs to listen on, which L4 ports to open, etc. The next resource is the VirtualService, which diverts the traffic to a specific Kubernetes service. The last resource in the chain is the DestinationRule, which determines L7 properties, for example routing for a specific user.
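To make this concrete, here is a minimal Gateway sketch; the object name and values follow the app-gateway we deploy later in this post:

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: app-gateway
spec:
  selector:
    istio: ingressgateway  # bind the config to the default Istio ingress gateway
  servers:
  - port:
      number: 80           # the L4 port to open
      name: http
      protocol: HTTP
    hosts:
    - "*"                  # listen for any hostname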

The next resource to configure in the chain is the VirtualService. First, we need to decide which traffic will be sent where inside the service mesh; the destination must be a Kubernetes service.

Each Kubernetes service can be unique inside the service mesh; for example, SVC-A runs an nginx web service and SVC-B runs a MongoDB database.

The ingress gateway has two options for sending the traffic: SVC-A or SVC-B. To configure which one receives it, we use the VirtualService CRD object.

The VirtualService is configured via match criteria. One of them is the URI of the request.

For example:

URI = app.corp.local/svc-a will go to SVC-A

URI = app.corp.local/svcb will go to SVC-B
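A sketch of this URI-based routing (the hostname and service names come from the example above):

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: app-routes
spec:
  hosts:
  - app.corp.local
  gateways:
  - app-gateway
  http:
  - match:
    - uri:
        prefix: /svc-a   # app.corp.local/svc-a
    route:
    - destination:
        host: svc-a      # the Kubernetes service name
  - match:
    - uri:
        prefix: /svcb    # app.corp.local/svcb
    route:
    - destination:
        host: svc-b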

Istio also has a concept of subsets for services, which means we can have multiple “copies” of the same service running in different versions.

Each version of the service is called a subset; for example, service SVC-A can run multiple times in different versions: v1, v2 and v3.
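Subsets are declared in a DestinationRule and are typically keyed on a version label carried by the PODs; a sketch for the SVC-A example:

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: svc-a
spec:
  host: svc-a        # the Kubernetes service
  subsets:
  - name: v1
    labels:
      version: v1    # PODs labeled version: v1
  - name: v2
    labels:
      version: v2
  - name: v3
    labels:
      version: v3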

Another optional criterion is the user logged in to the service mesh, which can determine the destination. For example:

If user codi@corp.local logs in, the traffic will go to SVC-A.

If user dev@corp.local logs in, the traffic will go to SVC-B.
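Istio matches on L7 attributes such as HTTP headers, so a sketch of this user-based routing could look like the following; we assume here that the login identity is carried in an end-user header, as in Istio's Bookinfo sample:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: user-routes
spec:
  hosts:
  - app.corp.local
  gateways:
  - app-gateway
  http:
  - match:
    - headers:
        end-user:
          exact: codi@corp.local  # codi goes to SVC-A
    route:
    - destination:
        host: svc-a
  - route:                        # everyone else, including dev@corp.local
    - destination:
        host: svc-b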

A Virtual Service must be bound to the gateway and must have one or more hosts that match the hosts specified in a server.

The match could be an exact match or a suffix match with the server’s hosts.

For example, if the server’s hosts specify *.example.com, a VirtualService with hosts dev.example.com or prod.example.com will match.

However, a VirtualService with host example.com or newexample.com will not match.

Destination Rule

At this point, we can manipulate the traffic with L7 policies before sending it to the final destination service. Examples of L7 parameters are session affinity, circuit breaking, load-balancing algorithm, max connections, connection timeout, etc.

A full list of the options can be found here:

https://istio.io/docs/reference/config/networking/v1alpha3/destination-rule/

In the figure shown below, we can see an example implementation of circuit breaking; the circuit breaker is implemented in the DestinationRule. SVC-A is built from two independent PODs, both part of the load-balancer pool. Let’s assume one of the PODs starts to return application errors (from the 5xx code range); once the error threshold is crossed we can trip the circuit breaker and stop sending traffic to the faulty POD.
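A sketch of such a circuit-breaker policy in a DestinationRule; the numbers below are illustrative, not from our lab:

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: svc-a
spec:
  host: svc-a
  trafficPolicy:
    connectionPool:
      tcp:
        maxConnections: 100    # cap concurrent connections per POD
    outlierDetection:
      consecutiveErrors: 500   # eject a POD after this many consecutive 5xx errors
      interval: 10s            # how often the pool is scanned
      baseEjectionTime: 30s    # how long a faulty POD stays out of the pool
      maxEjectionPercent: 100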

Now that we have covered the basic CRDs, let’s explore some examples.

Install Istio On PKS

We have PKS installed in our lab, and with that it’s easy to spin up a new k8s cluster to test Istio. The next step is to install Istio on k8s:

Download Istio:

https://istio.io/docs/setup/kubernetes/download/

# curl -L https://git.io/getLatestIstio | ISTIO_VERSION=1.1.1 sh -

Copy the istioctl command-line tool to /usr/local/bin:

# cd istio-1.1.1/bin/

# cp istioctl /usr/local/bin/istioctl

Check the version:

# istioctl version

version.BuildInfo{Version:"1.1.1", GitRevision:"2b1331886076df103179e3da5dc9077fed59c989", User:"root", Host:"7077232d-4c6c-11e9-813c-0a580a2c0506", GolangVersion:"go1.10.4", DockerHub:"docker.io/istio", BuildStatus:"Clean", GitTag:"1.1.0-17-g2b13318"}

—————————————————————————————————————-

The link to the Istio installation guide:

https://istio.io/docs/setup/kubernetes/install/kubernetes/

—————————————————————————————————————-

Enter the Istio installation directory:

# cd /home/localadmin/istio-1.1.1

# for i in install/kubernetes/helm/istio-init/files/crd*yaml; do kubectl apply -f $i; done

—————————————————————————————————————-

In the output of this command, you can see Istio creating the CRDs necessary to extend the native k8s API.

Among them are the three basic CRDs we explained in this blog: gateways, virtualservices and destinationrules.

customresourcedefinition.apiextensions.k8s.io "virtualservices.networking.istio.io" created

customresourcedefinition.apiextensions.k8s.io "destinationrules.networking.istio.io" created

customresourcedefinition.apiextensions.k8s.io "serviceentries.networking.istio.io" created

customresourcedefinition.apiextensions.k8s.io "gateways.networking.istio.io" created

customresourcedefinition.apiextensions.k8s.io "envoyfilters.networking.istio.io" created

customresourcedefinition.apiextensions.k8s.io "clusterrbacconfigs.rbac.istio.io" created

customresourcedefinition.apiextensions.k8s.io "policies.authentication.istio.io" configured

customresourcedefinition.apiextensions.k8s.io "meshpolicies.authentication.istio.io" configured

customresourcedefinition.apiextensions.k8s.io "httpapispecbindings.config.istio.io" created

customresourcedefinition.apiextensions.k8s.io "httpapispecs.config.istio.io" created

customresourcedefinition.apiextensions.k8s.io "quotaspecbindings.config.istio.io" created

customresourcedefinition.apiextensions.k8s.io "quotaspecs.config.istio.io" created

customresourcedefinition.apiextensions.k8s.io "rules.config.istio.io" created

customresourcedefinition.apiextensions.k8s.io "attributemanifests.config.istio.io" created

customresourcedefinition.apiextensions.k8s.io "bypasses.config.istio.io" created

customresourcedefinition.apiextensions.k8s.io "circonuses.config.istio.io" created

customresourcedefinition.apiextensions.k8s.io "deniers.config.istio.io" created

customresourcedefinition.apiextensions.k8s.io "fluentds.config.istio.io" created

customresourcedefinition.apiextensions.k8s.io "kubernetesenvs.config.istio.io" created

customresourcedefinition.apiextensions.k8s.io "listcheckers.config.istio.io" created

customresourcedefinition.apiextensions.k8s.io "memquotas.config.istio.io" created

customresourcedefinition.apiextensions.k8s.io "noops.config.istio.io" created

customresourcedefinition.apiextensions.k8s.io "opas.config.istio.io" created

customresourcedefinition.apiextensions.k8s.io "prometheuses.config.istio.io" created

customresourcedefinition.apiextensions.k8s.io "rbacs.config.istio.io" created

customresourcedefinition.apiextensions.k8s.io "redisquotas.config.istio.io" created

customresourcedefinition.apiextensions.k8s.io "signalfxs.config.istio.io" created

customresourcedefinition.apiextensions.k8s.io "solarwindses.config.istio.io" created

customresourcedefinition.apiextensions.k8s.io "stackdrivers.config.istio.io" created

customresourcedefinition.apiextensions.k8s.io "statsds.config.istio.io" created

customresourcedefinition.apiextensions.k8s.io "stdios.config.istio.io" created

customresourcedefinition.apiextensions.k8s.io "apikeys.config.istio.io" created

customresourcedefinition.apiextensions.k8s.io "authorizations.config.istio.io" created

customresourcedefinition.apiextensions.k8s.io "checknothings.config.istio.io" created

customresourcedefinition.apiextensions.k8s.io "kuberneteses.config.istio.io" created

customresourcedefinition.apiextensions.k8s.io "listentries.config.istio.io" created

customresourcedefinition.apiextensions.k8s.io "logentries.config.istio.io" created

customresourcedefinition.apiextensions.k8s.io "edges.config.istio.io" created

customresourcedefinition.apiextensions.k8s.io "metrics.config.istio.io" created

customresourcedefinition.apiextensions.k8s.io "quotas.config.istio.io" created

customresourcedefinition.apiextensions.k8s.io "reportnothings.config.istio.io" created

customresourcedefinition.apiextensions.k8s.io "tracespans.config.istio.io" created

customresourcedefinition.apiextensions.k8s.io "rbacconfigs.rbac.istio.io" created

customresourcedefinition.apiextensions.k8s.io "serviceroles.rbac.istio.io" created

customresourcedefinition.apiextensions.k8s.io "servicerolebindings.rbac.istio.io" created

customresourcedefinition.apiextensions.k8s.io "adapters.config.istio.io" created

customresourcedefinition.apiextensions.k8s.io "instances.config.istio.io" created

customresourcedefinition.apiextensions.k8s.io "templates.config.istio.io" created

customresourcedefinition.apiextensions.k8s.io "handlers.config.istio.io" created

customresourcedefinition.apiextensions.k8s.io "cloudwatches.config.istio.io" created

customresourcedefinition.apiextensions.k8s.io "dogstatsds.config.istio.io" created

customresourcedefinition.apiextensions.k8s.io "sidecars.networking.istio.io" created

customresourcedefinition.apiextensions.k8s.io "zipkins.config.istio.io" created

customresourcedefinition.apiextensions.k8s.io "clusterissuers.certmanager.k8s.io" created

customresourcedefinition.apiextensions.k8s.io "issuers.certmanager.k8s.io" created

customresourcedefinition.apiextensions.k8s.io "certificates.certmanager.k8s.io" created

customresourcedefinition.apiextensions.k8s.io "orders.certmanager.k8s.io" created

customresourcedefinition.apiextensions.k8s.io "challenges.certmanager.k8s.io" created

As you can see from the output of this command, the Istio project creates a lot of CRDs.
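You can count them at any time with a quick sanity check (not part of the original install flow):

# kubectl get crd | grep istio.io | wc -l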

Up to this point, we have not installed any Istio components in our k8s cluster; we still see only the four default namespaces:

# kubectl get ns

NAME           STATUS    AGE

default        Active    102d

kube-public    Active    102d

kube-system    Active    102d

pks-system     Active    102d

Install Istio with permissive mutual TLS mode:

With permissive mode, all services accept both plain text and mutual TLS traffic.
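The istio-demo.yaml profile we apply next expresses this with a mesh-wide authentication policy; a sketch of what such a permissive MeshPolicy looks like (MeshPolicy is one of the authentication CRDs created earlier):

apiVersion: authentication.istio.io/v1alpha1
kind: MeshPolicy
metadata:
  name: default
spec:
  peers:
  - mtls:
      mode: PERMISSIVE   # accept both plain text and mutual TLS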

# kubectl apply -f install/kubernetes/istio-demo.yaml

# kubectl get ns

NAME           STATUS    AGE

default        Active    102d

istio-system   Active    27s

kube-public    Active    102d

kube-system    Active    102d

pks-system     Active    102d

Istio creates a new namespace called istio-system.

The Istio control-plane PODs run in this namespace:

localadmin@ubuntu:~/istio-1.1.1$ kubectl get pod -n istio-system

Istio Demo

In this section, we will demo a basic HTML page application from the following repository: https://github.com/acuto/kubernetes-istio

You can use our forked version at:

https://github.com/roie9876/kubernetes-istio

In this demo, we demonstrate a canary-upgrade process; the application is just a simple web page built from one POD running an nginx application.

—————————————————————————————————————-

# kubectl create ns myapp

namespace “myapp” created

—————————————————————————————————————-

To deploy the myapp application in the mesh, we need the Istio sidecar injector to auto-inject Envoy containers into the myapp pods. We control which namespaces get the auto-injection functionality with the label istio-injection=enabled:

—————————————————————————————————————-

# kubectl label namespace myapp istio-injection=enabled

namespace “myapp” labelled

—————————————————————————————————————-
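To double-check which namespaces have injection enabled, the -L flag prints the label as an extra column:

# kubectl get namespace -L istio-injection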

Let’s parse myapp.yaml for details.

The YAML contains one k8s service named myapp.

There are two k8s deployments, myapp-v1 and myapp-v2:

apiVersion: v1
kind: Service
metadata:
  name: myapp
  labels:
    app: myapp
spec:
  type: ClusterIP
  ports:
  - port: 80
    name: http
  selector:
    app: myapp
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: myapp-v1
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: myapp
        version: v1
    spec:
      containers:
      - name: myapp
        image: roie9876/myapp:v1
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80

---

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: myapp-v2
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: myapp
        version: v2
    spec:
      containers:
      - name: myapp
        image: roie9876/myapp:v2
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80

The myapp-v1 POD will deploy with the k8s label version: v1.

The myapp-v2 POD will deploy with the k8s label version: v2.

The app and version labels add contextual information to the metrics and telemetry that Istio collects.

Even though labels are just free text in Kubernetes, these particular labels are meaningful to Istio:

Each deployment specification should have a distinct app label with a meaningful value.

The app label is also used to add contextual information in distributed tracing.

https://istio.io/docs/setup/kubernetes/prepare/requirements/

Deploy myapp.yaml:

# kubectl create -f myapp.yaml -n myapp

service “myapp” created

deployment.extensions “myapp-v1” created

deployment.extensions “myapp-v2” created

From the Kubernetes perspective, we have the following topology:

# kubectl get pod -n myapp

So far, we have not deployed any Istio CRD objects. But as you can see in the output of the command above, both deployments show 2/2 containers in the READY state. Since we deployed the PODs into an Istio-enabled namespace, a sidecar container is running inside each POD.

So a more accurate picture of our application looks like this:

As we can see, the myapp-v1 and myapp-v2 PODs each contain an Envoy sidecar proxy.
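To confirm the injection, you can list the container names inside one of the PODs; the POD name below is a placeholder, take a real one from the kubectl get pod output above. The result should show both the myapp container and the injected istio-proxy sidecar:

# kubectl get pod <myapp-v1-pod-name> -n myapp -o jsonpath='{.spec.containers[*].name}'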

The next step is to deploy the Istio CRD objects:

Deploy Istio config files

Deploy the app-gateway.yaml file:

# kubectl create -f app-gateway.yaml -n myapp

gateway.networking.istio.io “app-gateway” created

destinationrule.networking.istio.io “myapp” created

virtualservice.networking.istio.io “myapp” created

—————————————————————————————————————-

Verification

# kubectl get gateway -n myapp

NAME          AGE

app-gateway   1h

# kubectl describe gateway app-gateway -n myapp

Name:         app-gateway
Namespace:    myapp
Labels:       <none>
Annotations:  <none>
API Version:  networking.istio.io/v1alpha3
Kind:         Gateway
Metadata:
  Creation Timestamp:  2019-03-31T06:21:18Z
  Generation:          1
  Resource Version:    16162999
  Self Link:           /apis/networking.istio.io/v1alpha3/namespaces/myapp/gateways/app-gateway
  UID:                 32216c98-537d-11e9-a9d2-005056b6ce6b
Spec:
  Selector:
    Istio:  ingressgateway
  Servers:
    Hosts:
      *
    Port:
      Name:      http
      Number:    80
      Protocol:  HTTP
Events:          <none>


# kubectl describe virtualservice myapp -n myapp

Name:         myapp
Namespace:    myapp
Labels:       <none>
Annotations:  <none>
API Version:  networking.istio.io/v1alpha3
Kind:         VirtualService
Metadata:
  Creation Timestamp:  2019-03-31T06:21:18Z
  Generation:          1
  Resource Version:    16163001
  Self Link:           /apis/networking.istio.io/v1alpha3/namespaces/myapp/virtualservices/myapp
  UID:                 322b0209-537d-11e9-a9d2-005056b6ce6b
Spec:
  Gateways:
    app-gateway
  Hosts:
    *
  Http:
    Route:
      Destination:
        Host:    myapp
        Subset:  v1
      Weight:    50
      Destination:
        Host:    myapp
        Subset:  v2
      Weight:    50
Events:          <none>

# kubectl get destinationrule  -n myapp

NAME      AGE

myapp     1h

# kubectl describe destinationrule  myapp -n myapp

Name:         myapp
Namespace:    myapp
Labels:       <none>
Annotations:  <none>
API Version:  networking.istio.io/v1alpha3
Kind:         DestinationRule
Metadata:
  Creation Timestamp:  2019-03-31T06:21:18Z
  Generation:          1
  Resource Version:    16163000
  Self Link:           /apis/networking.istio.io/v1alpha3/namespaces/myapp/destinationrules/myapp
  UID:                 32251ab7-537d-11e9-a9d2-005056b6ce6b
Spec:
  Host:  myapp
  Subsets:
    Labels:
      Version:  v1
    Name:       v1
    Labels:
      Version:  v2
    Name:       v2
Events:          <none>

Get the Istio ingress gateway IP address

# kubectl get svc istio-ingressgateway -n istio-system

NAME                   TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)       AGE
istio-ingressgateway   LoadBalancer   10.100.200.60   10.172.1.5,10...   80:31380/TCP,443:31390/TCP,31400:31400/TCP,15029:30709/TCP,15030:31715/TCP,15031:32684/TCP,15032:30524/TCP,15443:31358/TCP,15020:31471/TCP   5h

—————————————————————————————————————-

Open the browser to 10.172.1.5 – we hit the green POD (v2).

Refresh the browser and we get the blue POD (v1).

Generate traffic using the balance.sh script. We need to edit the script and add the IP address of the Istio ingress gateway (10.172.1.5 in our lab); the HTTP port is 80.

Adjust the script for your environment:

export INGRESS_HOST=10.172.1.5
export INGRESS_PORT=80
while :; do \
    export GREP_COLOR='1;33'; \
    curl -s $INGRESS_HOST:$INGRESS_PORT | grep --color=always "v1"; \
    export GREP_COLOR='1;36'; \
    curl -s $INGRESS_HOST:$INGRESS_PORT | grep --color=always "v2"; \
    sleep 1; \
done

Add execute permission to the script:

# chmod +x balance.sh

As you can see from the script results, roughly 50% of the traffic goes to v1 and 50% goes to v2.

We can change the traffic split by editing the app-gateway.yaml file, setting the VirtualService weights to 0 for v1 and 100 for v2:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: myapp
spec:
  hosts:
  - "*"
  gateways:
  - app-gateway
  http:
    - route:
      - destination:
          host: myapp
          subset: v1
        weight: 0
      - destination:
          host: myapp
          subset: v2
        weight: 100
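After editing the file, reapply it so the new weights take effect; we assume the same app-gateway.yaml deployed earlier:

# kubectl apply -f app-gateway.yaml -n myapp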

Run the script again:

We can see all the traffic now goes to the v2 POD.

Summary

The best way to manage the network and infrastructure aspects of the application layer is to create a service mesh based on technologies that allow you to leverage the existing architecture while minimizing risk and challenges. Istio and Envoy can help overcome most of the challenges that L7 microservices networking and infrastructure raise.

With Istio, rolling upgrades of application versions are easy and controlled, so you can decide how much traffic to route to each version and even split the traffic based on L7 parameters like user name, browser, OS, etc.

Istio will also secure the traffic between the microservices with mTLS and help monitor and trace events with the service graph and Jaeger.

The basic components of Istio are similar to the constructs of traditional networking, which allows you to be flexible with application infrastructure management and leverage the architecture.

