Kubernetes Deployment

This article covers the requirements and administration procedures for deploying eSign on Kubernetes.

This type of deployment offers several advantages over bare-metal deployments: excluding the initial Docker and Kubernetes setup, the deployment, scaling, administration and tear-down processes are vastly improved and streamlined. On the other hand, the extra layer between services and hardware introduces a slight performance overhead.

Requirements

Hardware Requirements

Minimum hardware requirements to satisfy at least a High Availability (HA) deployment are as follows:

  • 2 host nodes with 4 cores each and at least 16 GB of RAM;

  • At least 500 GB of SSD storage per host node;

  • Node interconnect should be at least Gigabit, preferably 10 Gigabit;

  • Swap file or partition disabled (as per Docker requirement).

These requirements can be backed either by physical machines (e.g. an on-premises Docker Enterprise deployment) or by virtual machines (e.g. managed Kubernetes instances on Azure, AWS, OpenShift, etc.).

Environment Requirements

This section describes the environment setup, i.e. the required operating system and software version requirements.

To set up the environment the following requirements must be fulfilled:

  • Kubernetes (commonly stylized as k8s) version 1.18+ (validated against 1.18.x and 1.20.x).

  • Helm (version corresponding to the Kubernetes version), validated against versions 2.16.x and 3.5.x;

  • Docker (version corresponding to the Kubernetes version);

  • Linux as the host operating system; Windows as a host is explicitly not supported. The distribution must support, as a minimum, the Docker version required by Kubernetes:

    • Red Hat Enterprise Linux version 7.0+ based distributions (i.e. RHEL 7+, CentOS 7+, Oracle Linux 7+);

    • SUSE Linux Enterprise based distributions version 12.2+/15+;

    • Debian Linux based distributions version 9+ (Stretch);

    • Ubuntu Server based distributions, LTS versions starting with 16.04 (e.g. 18.04, 20.04, etc.).

If necessary, see the corresponding upstream project documentation.

The use of non-server variants of Linux should allow the solution to work as expected but is not officially supported.
Having SELinux enabled may require additional configuration, which is out of the scope of the support provided by the solution. Please check the corresponding Linux distribution documentation.

Sizing considerations

This section describes component scaling considerations, i.e. what should scale, when, and the expected loads for sizing.

In the context of a Kubernetes cluster deployment there are two kinds of scaling: host scaling and component scaling. Host scaling refers to adjusting the number of host nodes (or virtual machines) backing the cluster, while component scaling refers to adjusting the number of instances of each component (or container).

Host scaling should be executed when the backing nodes start experiencing CPU, memory or I/O pressure, in which case a scale-up operation should be considered. Conversely, when the nodes are using very few of the cluster's resources, a scale-down operation may be considered.

Component scaling should be executed, or configured to execute automatically, for scale-up operations when there is enough available host capacity and an increase in demand is expected. On the other hand, in low-demand situations a scale-down operation can be executed to free up resources and allow a host scale-down.

The components that should scale are usually those in the application layer. Scaling other components, like the relational database, should be considered only according to the specifications of the use case.

Installation

Install eSign using Helm charts

Helm is a command-line interface (CLI) package manager. Helm uses a packaging format called charts, where a chart is a collection of YAML files that describe a related set of Kubernetes resources.

Helm installation using a repository

helm repo add esign https://url_to_your_repository

# For helm versions 2.X
helm install --name CUSTOM_NAME --namespace CUSTOM_NAMESPACE esign/VERSION

# For helm versions 3.X
helm install CUSTOM_NAME --namespace CUSTOM_NAMESPACE esign/VERSION

Helm installation using helm chart file

# For helm versions 2.X
helm install --name CUSTOM_NAME --namespace CUSTOM_NAMESPACE esign_version.tar.gz

# For helm versions 3.X
helm install CUSTOM_NAME --namespace CUSTOM_NAMESPACE esign_version.tar.gz

Helm upgrade

If an eSign Helm chart is already deployed in the K8s cluster, you can either remove the existing eSign application or use the native capabilities of the Helm CLI to upgrade its associated resources.

Helm upgrade command
# For helm versions 2.X
helm upgrade CUSTOM_NAME esign_version.tar.gz

# For helm versions 3.X
helm upgrade CUSTOM_NAME --namespace CUSTOM_NAMESPACE esign_version.tar.gz

Custom configurations

Since Helm charts are used to install eSign, most of the chart definitions are available for customization. This customization can be achieved using a user-supplied values file that overrides the default values, or using the --set parameters available in the Helm CLI.

Let’s take as an example the simple configuration of changing the number of replicas of the eSign Core deployment to 2.

Using the --set parameters functionality, the helm install command would be as listed below:

Customization using set parameters (Helm 3.X)
helm install CUSTOM_NAME --namespace CUSTOM_NAMESPACE esign_version.tar.gz --set esigncore.app.stateless.replicaCount=2

If the user-supplied values strategy is chosen, the following should be used:

Customization using user-supplied values
helm install CUSTOM_NAME --namespace CUSTOM_NAMESPACE esign_version.tar.gz --values custom_values.yaml

Where the custom_values.yaml file would contain the following:

esigncore:
  app:
    stateless:
      replicaCount: 2
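
After the installation, the effective overrides can be verified with the Helm CLI (shown for Helm 3.X; helm get values also exists in Helm 2.X):

helm get values CUSTOM_NAME --namespace CUSTOM_NAMESPACE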

Custom Private Registry

eSign Docker images are stored in a private registry; in order to download these Docker images in the Kubernetes cluster, an image pull secret is created with the respective configuration and the respective user/password.

However, for security reasons some clients may choose to use their own private registry. If so, in order to configure eSign to use the custom private registry, it is necessary to update the following chart structure:

esigncore:
  image:
    name: CUSTOM_ESIGN_CORE_IMAGE_NAME
    repository: CUSTOM_ESIGN_CORE_IMAGE_REPOSITORY
    tag: CUSTOM_ESIGN_CORE_IMAGE_VERSION
    pullPolicy: CUSTOM_ESIGN_CORE_PULL_POLICY

Although it is possible to use your own custom registry to hold eSign Docker images, these registries must be compliant with the eSign license.

Pull Secrets

It is also possible to change the pull secret information; if you decide to do so, the respective secret information must be provided:

esigncore:
  secret:
    container:
      name: CUSTOM_ESIGN_REGISTRY_NAME
      user: CUSTOM_ESIGN_REGISTRY_USER
      password: CUSTOM_ESIGN_REGISTRY_PASSWORD
      email: CUSTOM_ESIGN_REGISTRY_EMAIL
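
For reference, the pull secret the chart manages corresponds to a standard Kubernetes docker-registry secret; a manually created equivalent might look like the following (a sketch, reusing the same placeholder values):

kubectl create secret docker-registry CUSTOM_ESIGN_REGISTRY_NAME \
  --docker-server=CUSTOM_ESIGN_CORE_IMAGE_REPOSITORY \
  --docker-username=CUSTOM_ESIGN_REGISTRY_USER \
  --docker-password=CUSTOM_ESIGN_REGISTRY_PASSWORD \
  --docker-email=CUSTOM_ESIGN_REGISTRY_EMAIL \
  --namespace CUSTOM_NAMESPACE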

If the registry is only accessible from within the cluster and does not use/require any credentials, or if eSign is supposed to use an existing secret, it is necessary to either disable the image pull secret functionality or indicate to the chart that it should use the existing secret; otherwise, authentication errors may occur. This is achieved by adding the following properties to the custom_values file:

esigncore:
  imagePullSecrets:
    name: CUSTOM_ESIGN_SECRET # name of the secret to be used
    create: false # set to false to disable the default secret creation
    enabled: true # set to false to completely disable the usage of pull secrets

Custom database

The eSign Helm chart installs an embedded H2 database by default.
This must not be used in real environments; you must use another database of your choice, especially for production environments.

In order to connect to a custom database, it is necessary to provide eSign with the necessary Hibernate configuration to connect to this database (refer to the Hibernate configuration section).

Below is an example of this user-supplied values.yaml file.
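
A minimal sketch follows; the exact property keys depend on the chart version and on the Hibernate configuration section, so treat the keys under esigncore as illustrative placeholders rather than the chart's definitive structure:

esigncore:
  app:
    hibernate: # hypothetical key names, shown for a PostgreSQL instance
      connection:
        url: jdbc:postgresql://CUSTOM_DATABASE_HOST:5432/esign
        driver_class: org.postgresql.Driver
        username: CUSTOM_DATABASE_USER
        password: CUSTOM_DATABASE_PASSWORD
      dialect: org.hibernate.dialect.PostgreSQLDialect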

Since you are using a custom database instance, it is necessary to create the respective database and owner in that instance. Please use the latest database scripts to create/migrate these structures.

JDBC Drivers

Please check the JDBC drivers supported by eSign.

Init container

Kubernetes init containers run in the same Pod as the main application container, though with a separate life cycle. The pattern is often used to initialize a state or configuration for the application running in the main container.

The same pattern is applied in this solution: the eSign init container has the purpose of verifying that a custom database is running and accepting connections.

Since, as seen above, it is possible to change repository images and database configurations, the init container is also configurable to follow the applied customizations.

Activate Init container

When running the eSign database as a service or in any other pod in the cluster, it might be useful to validate that the database eSign is going to use is deployed, running and accepting connections. For this scenario it is possible to enable the init container:

esigncore:
  init:
    wait:
      enabled: true

Configure init container eSign database

If a custom database is defined and the service used to connect to it is not the one defined by default, it is necessary to provide the Helm chart with the host and port of the exposed TCP connection to the database:

esigncore:
  init:
    wait:
      enabled: true
      database:
        host: CUSTOM_DATABASE_HOST_URL
        port: CUSTOM_DATABASE_HOST_PORT

Configure init container repository

If custom repositories are enabled, the same can be applied to the init container:

esigncore:
  init:
    wait:
      enabled: true # To activate init container, default is false
      image:
        name: CUSTOM_INIT_CONTAINER_IMAGE_NAME
        repository: CUSTOM_INIT_CONTAINER_IMAGE_REPOSITORY
        tag: CUSTOM_INIT_CONTAINER_IMAGE_VERSION

The image pull secret used for the init container is the same as the one for the main container, so when updating the main image pull secret (see the Pull Secrets section) it will also be used for this repository. In other words, both images must be in the same repository.

Security Context

A Security Context defines privilege and access control settings for a Pod or Container.

On Kubernetes hosts where SELinux or AppArmor is installed, it may be required to define, for each container, what permissions and users are needed.

In order to configure each container's security context, the following fields may be added to the user-supplied values file. The Helm template will take all key-value pairs below securityContext and podSecurityContext and copy them to the respective deployment spec.

esigncore:
  securityContext: {}
  podSecurityContext: {}

All key-value pairs will be copied as-is, meaning the securityContext key-value pairs must be valid and recognizable by the Kubernetes API.
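
As an illustration, a values file enforcing a non-root user could look like the following (a sketch using standard Kubernetes SecurityContext fields; adjust the values to your own policies):

esigncore:
  podSecurityContext:
    runAsNonRoot: true # refuse to start containers running as root
    fsGroup: 2000 # group ownership applied to mounted volumes
  securityContext:
    runAsUser: 1000
    runAsGroup: 3000
    allowPrivilegeEscalation: false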

Node Selector

A node selector specifies a map of key-value pairs. For the Pod to be eligible to run on a node, the node must have each of the indicated key-value pairs as labels.

In order to do this for each eSign application, the nodeSelector must be enabled, and the respective key-value pairs should be added:

esigncore:
  nodeSelector: {}
  tolerations: {}
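
For example, to restrict eSign pods to nodes labeled with disktype=ssd (an illustrative label, assuming the nodes were labeled beforehand):

esigncore:
  nodeSelector:
    disktype: ssd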

Container resources

When Kubernetes schedules a Pod, containers require enough resources (CPU and RAM) to run. If no resources are defined, Pods may consume all the resources available on the node and interfere with other running applications.

Resource requests and limits are used to control the resources used by the Pod. Resource requests are what the container is guaranteed to get in order to execute; if the requested resources are not available, the Pod is not scheduled. Resource limits are applied in order to guarantee that a Pod does not cross a certain threshold.

esigncore:
  resources:
    enabled: true
    requests:
      memory: 2Gi
      cpu: 1000m
    limits:
      memory: 4Gi
      cpu: 2000m

Probes

eSign implements two container probes, the liveness and readiness probes.

The LivenessProbe indicates whether the Container is still running. If the liveness probe fails, the kubelet kills the Container, and the Container is subjected to its restart policy. If a Container does not provide a liveness probe, the default state is always Success.

The ReadinessProbe indicates whether the Container is ready to service requests. If the readiness probe fails, the endpoints controller removes the Pod’s IP address from the endpoints of all Services that match the Pod. The default state of readiness before the initial delay is Failure. If a Container does not provide a readiness probe, the default state is Success.

The default configuration for the probes can be edited within the following Helm structure:

esigncore:
  probes:
    authorization:
      enabled: true # Enables authentication on the probe requests
      header: Authorization # Header name; if a custom authorization module is defined this can be updated as well
      token: "Basic YWRtaW46YWRtaW4=" # Update the token with user:password information in base64
    liveness:
      failureThreshold: 5
      initialDelaySeconds: 120
      path: eSignServer/rest/v1/monitoring/health # Default health endpoint
      periodSeconds: 30
      port: 8080 # Default application port
      successThreshold: 1
      timeoutSeconds: 25
    readiness:
      failureThreshold: 5
      initialDelaySeconds: 30
      path: eSignServer/rest/v1/monitoring/health # Default health endpoint
      periodSeconds: 30
      port: 8080 # Default application port
      successThreshold: 1
      timeoutSeconds: 15

Configure eSign application

The next set of configurations are application-based and exclusive to the eSign Core chart. The following configurations can be achieved by manipulating the default configmap or by creating a custom configmap with a custom configuration. It is, however, mandatory that these configmaps exist on the cluster and are correctly associated with the eSign Core chart in order to guarantee the proper functioning of the application.

Logging level

eSign Core uses a log4j configuration to define the logging level for the application. If, in order to triage an issue with the application, it is necessary to change the logging level, this can be achieved by editing the esign-cm-configs configmap.

In the data property, you can define a log4j2.xml file that will replace the default log4j2 configuration. For log4j2.xml configuration examples please check the official documentation.
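
For example, the configmap can be opened for editing with kubectl, and a log4j2.xml entry added under data (a minimal sketch; tune the loggers to your triage needs):

kubectl edit configmap esign-cm-configs --namespace CUSTOM_NAMESPACE

data:
  log4j2.xml: |-
    <?xml version="1.0" encoding="UTF-8"?>
    <Configuration>
      <Appenders>
        <Console name="Console" target="SYSTEM_OUT"/>
      </Appenders>
      <Loggers>
        <Root level="debug">
          <AppenderRef ref="Console"/>
        </Root>
      </Loggers>
    </Configuration>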

Additionally, if it is necessary to make these configurations permanent, it is possible to disable the default configmap creation and make the eSign Core chart use a custom-defined configmap:

esigncore:
  configmaps:
      configs:
        name: CUSTOM_CONFIGMAP
        create: false

eSign config

It is also possible to configure the esign.config file in Kubernetes by editing the esign-cm-configs configmap.

For more information about the contents and structure of esign.config, see here.

Additionally, if it is necessary to make these configurations permanent, it is possible to disable the default configmap creation and make the eSign Core chart use a custom-defined configmap:

esigncore:
  configmaps:
      configs:
        name: CUSTOM_CONFIGMAP
        create: false

License

eSign ships with a license-free demo mode for evaluation and early development purposes. Though ideal for a quick start and exploration, the demo mode limits some features and cannot be used in production environments.

For Kubernetes deployments there are two possibilities to configure the user's license. The first option is to enable the default license secret and pass your license content in its value, as such:

esigncore:
  secret:
      license:
        enabled: true
        create: true
        data:
          license: <<base64 string with license content >>

The second possibility is to create a custom secret and provide its reference in the eSign Core Helm chart:

esigncore:
  secret:
      license:
        name: CUSTOM_LICENSE_SECRET_NAME
        enabled: true
        create: false

The content of this secret must contain a data object with a property named license.lic.

Example of license secret structure
data:
   license.lic: <<base64 string with license content >>
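
Such a secret can be created directly from the license file with kubectl (an example; the file path and namespace are placeholders):

kubectl create secret generic CUSTOM_LICENSE_SECRET_NAME \
  --from-file=license.lic=./license.lic \
  --namespace CUSTOM_NAMESPACE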

Signing certificate

It is vital that documents and signatures carry your institution’s digital certificate, as this is the one guarantee that the document was signed within your organization. By default, eSign signs documents with a demo signing certificate; below are the required steps to configure eSign to use your institution’s certificate.

Just as defined for the licenses, this signing certificate can be enabled by having a custom secret on the cluster and associating it with the eSign Core Helm chart:

esigncore:
  secret:
      license:
        name: CUSTOM_CERTIFICATE_SECRET_NAME
        enabled: true
        create: false

The content of this secret must contain a data object with a property named esign.integrity.p12.

Example of certificate secret structure
data:
   esign.integrity.p12: <<base64 string with p12 file content >>
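
As with the license, this secret can be created from the certificate file with kubectl (an example; the file path and namespace are placeholders):

kubectl create secret generic CUSTOM_CERTIFICATE_SECRET_NAME \
  --from-file=esign.integrity.p12=./esign.integrity.p12 \
  --namespace CUSTOM_NAMESPACE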

Alternatively, the certificate content can be passed directly in the value of the secret:

esigncore:
  secret:
      license:
        enabled: true
        create: false
        data:
          certificate: <<base64 string with p12 file content >>

Languages

To configure language packages it is necessary to configure the esign-cm-langs configmap, or to provide the eSign Core Helm chart with the name of a custom language configmap.

To configure the languages it is necessary to follow a file name pattern, where the first part describes whether the language configuration is to be set on the client or the server, followed by an "_" and the name of the file, e.g. "l10n.client_esign.en-US.properties".

Example of Data structure for esign-cm-langs configmap
data:
   l10n.client_esign.en-US.properties: |-
      ...
   l10n.server_language_en.properties: |-
      ...

To use a custom language configmap, provide its name to the chart:

esigncore:
  configmaps:
      lang:
        enabled: true
        name: CUSTOM_LANGUAGE_CONFIGMAP

Ingress

The eSign Helm charts do not install any ingress out-of-the-box. This means that, in case eSign should be exposed to the internet, a route (OpenShift) or ingress rules must be created for each environment/cluster.

These ingresses/routes should point to the client-esigncore service on port 8080.
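
A minimal ingress rule could look like the following (a sketch; the hostname is a placeholder, and on Kubernetes 1.18 the networking.k8s.io/v1beta1 API must be used instead):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: esign-ingress
  namespace: CUSTOM_NAMESPACE
spec:
  rules:
    - host: esign.example.com # hypothetical hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: client-esigncore # service installed by the chart
                port:
                  number: 8080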

Scaling

Scaling the application can be achieved by updating the number of running replicas, either by updating the default value of the number of replicas:

esigncore:
  app:
    stateless:
      replicaCount: N

Or by manually updating the number of running instances:

# scaling stateless replicas
$ kubectl scale deploy esigncore --replicas=N

# scaling stateful replicas
$ kubectl scale sts esigncore --replicas=N

Sticky sessions

In order for a multi-instance eSign deployment to function correctly, the configured load balancer needs to have sticky sessions (or session affinity) enabled. This is needed in order to keep a given session on the same instance; otherwise, eSign functionality may be corrupted.
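
For example, when an NGINX ingress controller fronts the service, cookie-based affinity can be enabled through annotations on the ingress (a sketch assuming the NGINX ingress controller; other load balancers have equivalent settings):

metadata:
  annotations:
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "esign-affinity"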

Auto Scaling

Horizontal Pod Autoscaler (HPA) is the Kubernetes capability to automatically scale the number of Pods of a StatefulSet or Deployment.

This automation can be configured taking into consideration the metrics of the resources being used by the Pods. By default, eSign installs only one replica of the eSign Core application and Postgres.

Follow the official Kubernetes documentation to apply HPA configurations.
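
As a starting point, a CPU-based autoscaler can be created imperatively (an example; the thresholds and replica bounds are placeholders). Note that CPU-based autoscaling requires the resource requests described in the Container resources section to be defined.

# scale the esigncore deployment between 2 and 5 replicas, targeting 80% average CPU
kubectl autoscale deployment esigncore --min=2 --max=5 --cpu-percent=80 --namespace CUSTOM_NAMESPACE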