
Extend Kubernetes 1.7 with Custom Resources

Jul 26th, 2017 8:00am by Yaron Haviv
Title image of the modularly constructed Seventh Street Bridge in Ft. Worth, Texas courtesy Max Pixel, licensed under Creative Commons Zero.

Yaron Haviv
As the CTO and founder of Iguazio, Yaron is a serial entrepreneur with deep technological experience in the fields of big data, cloud, storage and networking. Prior to Iguazio, Haviv was the Vice President of Datacenter Solutions at Mellanox, where he led technology innovation, software development and solution integrations, and was the key driver of open source initiatives and new solutions with leading database and storage vendors, enterprise organizations, and cloud and Web 2.0 customers. Before Mellanox, Haviv was the CTO and Vice President of R&D at Voltaire, a high-performance computing, IO and networking company. Haviv often speaks at big data and cloud technology events. He tweets as @yaronhaviv.

Let’s assume you want to build a clustered application or a software-as-a-service offering. Before you write a single line of application code, you must address an overflowing array of architectural issues, including security, multitenancy, API gateways, CLI, configuration management, and logging.

What if you could just leverage all that infrastructure from Kubernetes, save yourself a few person-years of development, and focus on implementing your unique service?

Kubernetes 1.7, the latest release, adds an important feature called CustomResourceDefinitions (CRDs), which lets you plug your own managed objects and applications into Kubernetes as if they were native components. This way, you can leverage the Kubernetes CLI, API services, security and cluster management frameworks without modifying Kubernetes or knowing its internals. We use it here at Iguazio to seamlessly integrate Kubernetes with our new real-time “serverless” project and data platform objects.

Part of what I love about Kubernetes is its extensibility. Now you can build your custom application resources, and use Kubernetes RBAC and authentication mechanisms to provide security, access control, authentication, and multitenancy. These custom resources will be stored in the integrated etcd repository with replication and proper lifecycle management. They will also leverage all the built-in cluster management features that come standard with Kubernetes.
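To make this concrete, here is a sketch of an RBAC Role that would grant read/write access to the custom resources defined later in this article. The group (myorg.io) and resource name (examples) match the code below, but the Role name and namespace are our own illustrative choices:

# Hypothetical Role granting access to our custom "examples" resources
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  namespace: default
  name: example-editor
rules:
- apiGroups: ["myorg.io"]
  resources: ["examples"]
  verbs: ["get", "list", "watch", "create", "update", "delete"]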

Understanding the Kubernetes Management Model

Diagram for The New Stack by Gabriel HD.

Kubernetes has API services that accept calls from CLI or API clients. These calls are authenticated, authorized and validated, and resources are stored in or fetched from a repository (etcd). Various controllers watch for etcd updates and seek to reconcile the desired state or configuration (the resource spec) with the actual state. This control-loop approach is very robust and can handle a variety of failure scenarios as well as elastic scaling.
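To illustrate the pattern, here is a stripped-down, runnable sketch of such a control loop. All of the types and helper functions are hypothetical stand-ins, not Kubernetes APIs:

package main

import (
	"fmt"
	"time"
)

// Hypothetical state: the desired spec vs. what is actually running.
type state struct{ replicas int }

func fetchSpec() state      { return state{replicas: 3} } // desired state (from the repository)
func observeCluster() state { return state{replicas: 2} } // actual state

func converge(desired, actual state) {
	fmt.Printf("reconciling: %d -> %d replicas\n", actual.replicas, desired.replicas)
}

func main() {
	// A real controller watches for events and loops forever;
	// we poll a few times for the sake of a runnable sketch.
	for i := 0; i < 3; i++ {
		desired, actual := fetchSpec(), observeCluster()
		if desired != actual {
			converge(desired, actual)
		}
		time.Sleep(100 * time.Millisecond)
	}
}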

Kubernetes has many built-in resources, some of them hierarchical: deployments, replica sets, pods, and containers. They have schemas, are stored in the repository, and have unique API endpoints. Some resources have controllers; for example, a deployment controller accepts the pod spec and desired number of replicas, automatically deploys pods based on that spec, and makes sure the number of active replicas matches it.

For any new resource, you follow the same methodology:

  1. Define the resource schema;
  2. Register the resource with the API service and provide proper APIs;
  3. Implement a controller which will watch for resource spec changes and make sure your application complies with the desired state.

CRD Overview

Kubernetes’ CRD is an evolution of an older feature called Third Party Resources (TPR). TPR suffered from a number of limitations that CRD addresses starting with version 1.7, most notably its inability to validate new or updated resources (which were stored as-is in etcd, even if they were malformed). There are other early efforts to extend Kubernetes via custom API services (with or without CRD), but they too are still a work in progress.

CRD’s full source code is available here. Since some of its dependent libraries (like the k8s.io API, apiextensions and client-go) are still evolving, we “vendored” all of them (under the /vendor directory on GitHub), enabling you to start from a working example. The code you’re about to see was tested with Kubernetes 1.7.0 and is based on the apiextensions-apiserver example.

The code is divided into these parts:

  • crd — CRD class and initialization logic.
  • client — custom client library to access our objects (get, set, del, etc.).
  • kube-crd — main logic for connecting to Kubernetes, initializing, and using CRD.

We will start by defining our CRD class and registering it. Once that’s done, we can address it through the Kubernetes CLI (kubectl). To do that, we create an extended client interface that is aware of the new schema.
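For brevity, the snippets below omit their import blocks. Assuming the libraries vendored with the example (client-go and apiextensions-apiserver, as of Kubernetes 1.7), the aliases used in the code would be declared roughly like this; the exact grouping is our own:

import (
	"fmt"
	"reflect"
	"time"

	apiextv1beta1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1beta1"
	apiextcs "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	meta_v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/fields"
	"k8s.io/apimachinery/pkg/runtime"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/apimachinery/pkg/runtime/serializer"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/cache"
)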

Creating a CRD

We will define a new object type called Example, along with ExampleList, the type for a list of its instances. Note that we must embed some base Kubernetes metadata classes. To conform to the standard Kubernetes style, you’ll see in the code below that we declare the object structure with these components:

  • Metadata (e.g., TypeMeta, ObjectMeta) — standard Kubernetes properties like name, namespace, labels, etc.
  • Spec — the desired resource configuration
  • Status — usually filled by the controller in response to Spec updates
// Definition of our CRD Example class
type Example struct {
	meta_v1.TypeMeta   `json:",inline"`
	meta_v1.ObjectMeta `json:"metadata"`
	Spec               ExampleSpec   `json:"spec"`
	Status             ExampleStatus `json:"status,omitempty"`
}

type ExampleSpec struct {
	Foo string `json:"foo"`
	Bar bool   `json:"bar"`
	Baz int    `json:"baz,omitempty"`
}

type ExampleStatus struct {
	State   string `json:"state,omitempty"`
	Message string `json:"message,omitempty"`
}

type ExampleList struct {
	meta_v1.TypeMeta `json:",inline"`
	meta_v1.ListMeta `json:"metadata"`
	Items            []Example `json:"items"`
}

Now that we’ve created a CRD object, we can write a function that registers the new resource type. The CRD name (FullCRDName) and Plural define where it fits in the hierarchy and how it will be referenced in the CLI or API; Group and Version define the API endpoints. We also added logic to tolerate the error returned when the resource already exists:

const (
      CRDPlural      string = "examples"
      CRDGroup       string = "myorg.io"
      CRDVersion     string = "v1"
      FullCRDName    string = CRDPlural + "." + CRDGroup
)

// Create the CRD resource; ignore the error if it already exists
func CreateCRD(clientset apiextcs.Interface) error {
	crd := &apiextv1beta1.CustomResourceDefinition{
		ObjectMeta: meta_v1.ObjectMeta{Name: FullCRDName},
		Spec: apiextv1beta1.CustomResourceDefinitionSpec{
			Group:   CRDGroup,
			Version: CRDVersion,
			Scope:   apiextv1beta1.NamespaceScoped,
			Names: apiextv1beta1.CustomResourceDefinitionNames{
				Plural: CRDPlural,
				Kind:   reflect.TypeOf(Example{}).Name(),
			},
		},
	}

	_, err := clientset.ApiextensionsV1beta1().CustomResourceDefinitions().Create(crd)
	if err != nil && apierrors.IsAlreadyExists(err) {
		return nil
	}
	return err

	// Note: the original apiextensions-apiserver example adds logic to
	// wait for creation and to handle errors more thoroughly
}

In the final step, we write a function that creates a custom REST client aware of our new resource schema. We will use this function later, in the main section.

// Create a REST client with the new CRD schema
var SchemeGroupVersion = schema.GroupVersion{Group: CRDGroup, Version: CRDVersion}

func addKnownTypes(scheme *runtime.Scheme) error {
      scheme.AddKnownTypes(SchemeGroupVersion,
             &Example{},
             &ExampleList{},
      )
      meta_v1.AddToGroupVersion(scheme, SchemeGroupVersion)
      return nil
}

func NewClient(cfg *rest.Config) (*rest.RESTClient, *runtime.Scheme, error) {
      scheme := runtime.NewScheme()
      SchemeBuilder := runtime.NewSchemeBuilder(addKnownTypes)
      if err := SchemeBuilder.AddToScheme(scheme); err != nil {
             return nil, nil, err
      }
      config := *cfg
      config.GroupVersion = &SchemeGroupVersion
      config.APIPath = "/apis"
      config.ContentType = runtime.ContentTypeJSON
      config.NegotiatedSerializer = serializer.DirectCodecFactory{
             CodecFactory: serializer.NewCodecFactory(scheme)}

      client, err := rest.RESTClientFor(&config)
      if err != nil {
             return nil, nil, err
      }
      return client, scheme, nil
}

Building a Custom Client Library

Once we’ve created the CRD, we can just access it from the CLI. To access it from the Go API — in order to build controllers or custom functionality — we need to create the set of CRUD functions for accessing our objects. For this object, these functions are implemented as Create, Update, Delete, Get, and List. All five functions use the REST client, build the relevant request, and deserialize response(s).

// This file implements the CRUD client methods we need to access our CRD objects

func CrdClient(cl *rest.RESTClient, namespace string) *crdclient {
      return &crdclient{cl: cl, ns: namespace, plural: crd.CRDPlural}
}

type crdclient struct {
      cl     *rest.RESTClient
      ns     string
      plural string
}

func (f *crdclient) Create(obj *crd.Example) (*crd.Example, error) {
      var result crd.Example
      err := f.cl.Post().
             Namespace(f.ns).Resource(f.plural).
             Body(obj).Do().Into(&result)
      return &result, err
}

func (f *crdclient) Update(obj *crd.Example) (*crd.Example, error) {
      var result crd.Example
      err := f.cl.Put().
             Namespace(f.ns).Resource(f.plural).
             Body(obj).Do().Into(&result)
      return &result, err
}

func (f *crdclient) Delete(name string, options *meta_v1.DeleteOptions) error {
      return f.cl.Delete().
             Namespace(f.ns).Resource(f.plural).
             Name(name).Body(options).Do().
             Error()
}

func (f *crdclient) Get(name string) (*crd.Example, error) {
      var result crd.Example
      err := f.cl.Get().
             Namespace(f.ns).Resource(f.plural).
             Name(name).Do().Into(&result)
      return &result, err
}

func (f *crdclient) List() (*crd.ExampleList, error) {
      var result crd.ExampleList
      err := f.cl.Get().
             Namespace(f.ns).Resource(f.plural).
             Do().Into(&result)
      return &result, err
}

// Create a new ListWatch for our CRD
func (f *crdclient) NewListWatch() *cache.ListWatch {
      return cache.NewListWatchFromClient(f.cl, f.plural, f.ns, fields.Everything())
}

Note that the last function, NewListWatch(), defines a listener (watch) that we will use when building an event-driven controller, as in the example below.

Using Our CRD (main)

Now that everything’s set for us to use our new CRD, we will implement a client which:

  1. Connects to the Kubernetes cluster.
  2. Creates the new CRD if it doesn’t exist.
  3. Creates a new custom client.
  4. Creates a new Example object using the client library we created.
  5. Creates a controller that listens to events associated with new resources.

The first part uses the kubeconfig file, which holds information about our cluster (e.g., its IP address and credentials). It can usually be found at /etc/kubernetes/admin.conf. The full code also shows an optional in-cluster config, for when your code runs in a pod.
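The GetClientConfig helper called below isn’t shown in the article’s snippets; a minimal version, assuming the standard client-go clientcmd package, might look like this:

import (
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/clientcmd"
)

// GetClientConfig builds a rest.Config from a kubeconfig path and
// falls back to the in-cluster config when no path is given.
func GetClientConfig(kubeconfig string) (*rest.Config, error) {
	if kubeconfig != "" {
		return clientcmd.BuildConfigFromFlags("", kubeconfig)
	}
	return rest.InClusterConfig()
}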

This next component creates an apiextensions clientset from the config, which lets us register the CRD itself. We will later create another, custom clientset for working with our Example resources.

kubeconf := "admin.conf" // Full path to Kube config
config, err := GetClientConfig(kubeconf)
if err != nil {
      panic(err.Error())
}

// Create the clientset and our CRD; this only needs to run once
clientset, err := apiextcs.NewForConfig(config)
if err != nil {
      panic(err.Error())
}

// Note: if the CRD already exists, our CreateCRD function returns without an error
err = crd.CreateCRD(clientset)
if err != nil {
      panic(err)
}

Now let’s try out the CLI to see if our CRD was created correctly.

$ kubectl get crd
NAME                KIND
examples.myorg.io   CustomResourceDefinition.v1beta1.apiextensions.k8s.io
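With the CRD registered, kubectl can also create instances directly from a manifest. A minimal one for our resource might look like this (the file name and field values here are purely illustrative):

# example.yaml -- a hypothetical instance of our custom resource
apiVersion: myorg.io/v1
kind: Example
metadata:
  name: example-from-cli
spec:
  foo: hello
  bar: true

Applying it is then a one-liner:

$ kubectl create -f example.yaml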

Next, we will create a custom clientset and use it to create an Example resource:

// Create a new clientset which includes our CRD schema
crdcs, _, err := crd.NewClient(config)
if err != nil {
      panic(err)
}

// Create a CRD client interface
crdclient := client.CrdClient(crdcs, "default")

// Create a new Example object and write to k8s
example := &crd.Example{
      ObjectMeta: meta_v1.ObjectMeta{
             Name:   "example123",
             Labels: map[string]string{"mylabel": "test"},
      },
      Spec: crd.ExampleSpec{
             Foo: "example-text",
             Bar: true,
      },
      Status: crd.ExampleStatus{
             State:   "created",
             Message: "Created, not processed yet",
      },
}

result, err := crdclient.Create(example)
if err == nil {
      fmt.Printf("CREATED: %#v\n", result)
} else if apierrors.IsAlreadyExists(err) {
      fmt.Printf("ALREADY EXISTS: %#v\n", result)
} else {
      panic(err)
}

Now we can see whether there are Example resources in kubectl. Note how the name, group, and version were used to build the API endpoints:

$ kubectl get examples -o yaml
apiVersion: v1
items:
- apiVersion: myorg.io/v1
  kind: Example
  metadata:
    clusterName: ""
    creationTimestamp: 2017-07-09T17:59:22Z
    deletionGracePeriodSeconds: null
    deletionTimestamp: null
    labels:
      mylabel: test
    name: example123
    namespace: default
    resourceVersion: "364838"
    selfLink: /apis/myorg.io/v1/namespaces/default/examples/example123
    uid: 56415ab7-64d0-11e7-a07f-0e764b57bad0
  spec:
    bar: true
    foo: example-text
  status:
    message: Created, not processed yet
    state: created
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""

Now let’s build a small controller. We create a cache informer, which will invoke our Add, Delete and Update callbacks every time there is a change in our resources. Just replace the Printf calls with your own custom logic:

// Example Controller
// Watch for changes in Example objects and fire Add, Delete, Update callbacks
_, controller := cache.NewInformer(
      crdclient.NewListWatch(),
      &crd.Example{},
      time.Minute*10,
      cache.ResourceEventHandlerFuncs{
             AddFunc: func(obj interface{}) {
                    fmt.Printf("add: %s \n", obj)
             },
             DeleteFunc: func(obj interface{}) {
                    fmt.Printf("delete: %s \n", obj)
             },
             UpdateFunc: func(oldObj, newObj interface{}) {
                    fmt.Printf("Update old: %s \n      New: %s\n", oldObj, newObj)
             },
      },
)

stop := make(chan struct{})
go controller.Run(stop)

// Wait forever
select {}

GitHub has a page that shows you how to build a Kubernetes controller.

And that’s it! This was a step-by-step walkthrough of how you can extend Kubernetes and handle your own resources under the same Kubernetes database, API and authentication framework. While these features may not yet be fully baked, they do highlight the great power of Kubernetes as an open platform with proper emphasis on layering and integration.
