Kubernetes Deployment
To deploy the solution using Kubernetes, you will require a compatible platform, for example:
- Amazon Web Services (AWS) Elastic Kubernetes Service (EKS)
- Microsoft Azure Kubernetes Service (AKS)
- Red Hat OpenShift
- Minikube
For deployment you may require permissions to create Cluster Roles. Support for this is out of the scope of the product. |
Support for setting up Ingress providers (e.g. NGINX ingress) and HTTPS certificates (e.g. cert-manager/kcert to provide Let's Encrypt certificates) is out of the scope of the product. |
Before we start
Remember that eSign Server requires a database. Ensure that a database instance is already available before starting this guide. (eSign Server will be able to deploy the required database tables, and other resources, on the first run) |
To be able to use Helm you will need to install at least Helm 3.8.0 and you may need to enable the OCI support flag.
export HELM_EXPERIMENTAL_OCI=1
You will also need to log in with your registry credentials (i.e. the same ones you use to connect to the Docker registry):
helm registry login -u <USERNAME> -p <PASSWORD> esigncelfocus.azurecr.io/helm
Deployment
Retrieve helm chart
To install the eSign Server chart just issue the following helm command:
helm upgrade --install esignserver oci://esigncelfocus.azurecr.io/helm/esignserver -n <namespace> (1)
1 | If you are not installing to a specific namespace, either omit the "-n <namespace>" part or replace <namespace> with "default" (without quotes) |
This will install the latest version of the eSign Server container using internal H2 databases and, with the default values, is not suitable for anything other than local development.
By default both deployments and statefulsets will be scaled down to 0 replicas. You will need to scale up either one or the other to the required number of replicas. |
Configure your deployment
This chart supports changing values using the normal helm procedures:
helm upgrade --install esignserver oci://esigncelfocus.azurecr.io/helm/esignserver --set name=value (1)
1 | You can set multiple key/value pairs by using multiple --set arguments; for a given key, the right-most value will be used. |
or
helm upgrade --install esignserver oci://esigncelfocus.azurecr.io/helm/esignserver -f overrides.yaml (1)
1 | You can pass multiple override files by using multiple -f/--values arguments; when the same key is set in several files, the value from the right-most file will be used. |
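For illustration, an overrides.yaml file might raise the replica count and set resource limits. This is a sketch only; the keys below mirror chart values shown later in this guide (replicaCount, resources), and the numbers are examples, not recommendations.

```yaml
# overrides.yaml -- illustrative user-supplied values file
esigncore:
  app:
    stateless:
      replicaCount: 2
  resources:
    enabled: true
    requests:
      memory: 2Gi
      cpu: 1000m
    limits:
      memory: 4Gi
      cpu: 2000m
```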
Custom configurations
Since Helm charts are used to install eSign, most of the chart definition is available for customization.
Custom Environment Variables
The eSign helm chart allows you to add custom environment variables.
These custom environment variables can help you create configurations in the eSign config file
that vary between environments. See: Environment Configuration.
Custom environment variables can be kept in plain text or within kubernetes secrets.
Plain text variables
Below is an example of how to add two custom environment variables:
esigncore:
  env:
    custom:
      - name: MY_VAR_1
        value: my_value_1
      - name: MY_VAR_2
        value: my_value_2
Secret variables
This helm chart creates a secret called esign-server-credentials
where you can add your secret entries.
If the values to pass as environment variables are sensitive (for instance a password), you can use a special configuration of this helm chart that will automatically add a new entry to the credentials secret and refer to it.
Below is an example of how to add custom environment variables stored in a secret:
esigncore:
  env:
    customFromCredentialSecret:
      - name: MY_SECRET_VAR_1 (1)
        key: mySecretKey (2)
        value: mySecretValue (3)
1 | The name of the environment variable |
2 | The name of the key added to the esign-server-credentials secret |
3 | The value stored in the secret with the key mentioned above |
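Kubernetes stores secret data Base64-encoded, so the value above ends up under the mySecretKey entry of the esign-server-credentials secret in encoded form. A quick local sketch of that encoding, using the example value from above:

```shell
# Base64-encode the example value exactly as it will appear in the
# data section of the esign-server-credentials secret.
encoded=$(printf '%s' 'mySecretValue' | base64)
echo "$encoded"   # bXlTZWNyZXRWYWx1ZQ==

# Decoding recovers the original value:
printf '%s' "$encoded" | base64 -d   # mySecretValue
```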
Custom Private Registry
eSign Docker images are stored in a private registry. In order to download these images in the Kubernetes cluster, an image pull secret is created with the respective configuration and user/password.
However, for security reasons some clients may choose to use their own private registry. If so, in order to configure eSign to use the custom private registry, it is necessary to update the following chart structure.
esigncore:
  image:
    name: CUSTOM_ESIGN_CORE_IMAGE_NAME
    repository: CUSTOM_ESIGN_CORE_IMAGE_REPOSITORY
    tag: CUSTOM_ESIGN_CORE_IMAGE_VERSION
    pullPolicy: CUSTOM_ESIGN_CORE_PULL_POLICY
Although it is possible to use your own custom registry to hold the eSign Docker images, these registries must be compliant with the eSign license. |
Pull Secrets
If you decide to change the pull secret information, this is also possible; the respective secret information must be provided:
esigncore:
  secret:
    container:
      name: CUSTOM_ESIGN_REGISTRY_NAME
      user: CUSTOM_ESIGN_REGISTRY_USER
      password: CUSTOM_ESIGN_REGISTRY_PASSWORD
      email: CUSTOM_ESIGN_REGISTRY_EMAIL
In case the registry is only accessible to the cluster and does not require any credentials, or eSign is supposed to use an existing secret, it is necessary to either disable the image pull secret functionality or point the chart at the existing secret; otherwise authentication errors may occur. This is achievable by adding the following properties to the custom values file.
esigncore:
  imagePullSecrets:
    name: CUSTOM_ESIGN_SECRET # name of the secret to be used
    create: false # set to false to disable default secret creation
    enabled: true # true if a pull secret must be used; false to completely disable the usage of pull secrets
Custom database
The eSign helm chart installs an embedded H2 database by default.
This must not be used in real environments, so you must use another database of your choice, especially for production environments.
In order to connect to a custom database, it is necessary to provide eSign with the database configurations needed to connect to it (refer to the Database configuration section).
Below is an example of this user-supplied values.yaml file.
esigncore:
  env:
    custom:
      - name: ESIGN_SERVER_DB_DIALECT
        value: org.hibernate.dialect.PostgreSQLDialect
      - name: ESIGN_SERVER_DB_DRIVER
        value: org.postgresql.Driver
      - name: ESIGN_SERVER_DB_URL
        value: jdbc:postgresql://my_database_host:5432/esign
    customFromCredentialSecret: (1)
      - name: ESIGN_SERVER_DB_USER
        key: esignServerDBUser
        value: myDatabaseUser
      - name: ESIGN_SERVER_DB_PASSWORD
        key: esignServerDBPassword
        value: myDatabasePassword
1 | All entry values in customFromCredentialSecret are automatically stored in the secret created with this helm chart |
Since you are using a custom database instance, it is necessary to create the respective database and its owner on that instance beforehand. |
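Assuming PostgreSQL as in the values above, creating the owner and the database might look like the sketch below. The role, password and database names are the hypothetical ones from the example; adjust them to your environment.

```sql
-- Illustrative only: create the database owner and the database
-- referenced by the example values above.
CREATE ROLE "myDatabaseUser" WITH LOGIN PASSWORD 'myDatabasePassword';
CREATE DATABASE esign OWNER "myDatabaseUser";
```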
Init container
Kubernetes init containers run in the same Pod as the main application container, though with a separate life cycle. The pattern is often used to initialize state or configuration for the application running in the main container.
The same applies in this solution: the eSign init container has the purpose of verifying that a custom database is running and accepting connections.
Since, as seen above, it is possible to change repository images and database configurations, the init container is also configurable to follow the applied customizations.
Activate Init container
Whether the eSign database runs as a service or on another pod in the cluster, it may be useful to validate that the database eSign is going to use is deployed, running and accepting connections. For this scenario it is possible to enable the init container.
esigncore:
  init:
    wait:
      enabled: true
Configure init container eSign database
If a custom database is defined and the service used to connect to it is not the default one, it is necessary to provide the helm chart with the host and port of the exposed TCP connection to the database.
esigncore:
  init:
    wait:
      enabled: true
      database:
        host: CUSTOM_DATABASE_HOST_URL
        port: CUSTOM_DATABASE_HOST_PORT
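Conceptually, the wait performed by the init container amounts to a small TCP poll loop like the sketch below. This is illustrative only; the actual init container image ships its own logic, and the host/port correspond to the database values configured above.

```shell
# Sketch of a database wait loop: poll a TCP endpoint until it accepts
# connections or the retry budget is exhausted.
wait_for_db() {
  host="$1"; port="$2"; retries="$3"
  i=0
  while [ "$i" -lt "$retries" ]; do
    if nc -z "$host" "$port" 2>/dev/null; then
      echo "database is accepting connections"
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  echo "database not reachable after $retries attempts" >&2
  return 1
}

# Example (hypothetical host/port):
# wait_for_db my_database_host 5432 30
```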
Configure init container repository
If custom repositories are enabled, the same can be applied to the init container.
esigncore:
  init:
    wait:
      enabled: true # to activate the init container; default is false
    image:
      name: CUSTOM_INIT_CONTAINER_IMAGE_NAME
      repository: CUSTOM_INIT_CONTAINER_IMAGE_REPOSITORY
      tag: CUSTOM_INIT_CONTAINER_IMAGE_VERSION
The image pull secret used for the init container is the same as for the main container, so updating the main image pull secret, as defined in the previous section, will also apply here. In other words, both images must be in the same repository. |
Security Context
A Security Context defines privilege and access control settings for a Pod or Container.
In Kubernetes hosts where SELinux or AppArmor is installed, it may be required to define, for each container, the permissions and users needed.
In order to configure each container's security context, the following fields may be added to the user-supplied values file. The Helm template will copy all key-value pairs below securityContext and podSecurityContext to the respective deployment spec.
esigncore:
  securityContext: {}
  podSecurityContext: {}
All key-value pairs will be copied as-is, meaning the securityContext key-value pairs must be valid and recognizable by the Kubernetes API. |
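As an illustration, a restricted configuration using standard Kubernetes security context fields might look like the following. The specific values are examples, not chart defaults.

```yaml
esigncore:
  podSecurityContext:
    runAsNonRoot: true
    fsGroup: 2000
  securityContext:
    runAsUser: 1000
    allowPrivilegeEscalation: false
    capabilities:
      drop:
        - ALL
```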
Node Selector
A node selector specifies a map of key-value pairs. For the pod to be eligible to run on a node, the pod must have the indicated key-value pairs as the label on the node.
In order to do this for each application, the nodeSelector must be enabled and the respective key-value pairs should be added.
esigncore:
  nodeSelector: {}
  tolerations: {}
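For example, to schedule eSign only on nodes labeled disktype=ssd and to tolerate a dedicated=esign taint, the values might look like this. Both the label and the taint are hypothetical; tolerations are expressed as a list of standard Kubernetes toleration objects.

```yaml
esigncore:
  nodeSelector:
    disktype: ssd
  tolerations:
    - key: dedicated
      operator: Equal
      value: esign
      effect: NoSchedule
```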
Container resources
When Kubernetes schedules a Pod, its containers require enough resources (CPU and RAM) to run. If no resources are defined, Pods will consume all resources available on the node and may interfere with other running applications.
Resource requests and limits are used to control the resources used by the pod. Resource requests are what the container is guaranteed to get in order to execute; if the requested resources are not available, the pod is not scheduled. Resource limits are applied in order to guarantee that a pod does not cross a certain threshold.
esigncore:
  resources:
    enabled: true
    requests:
      memory: 2Gi
      cpu: 1000m
    limits:
      memory: 4Gi
      cpu: 2000m
Probes
eSign implements two container probes, the liveness and readiness probes.
The LivenessProbe indicates whether the Container is still running. If the liveness probe fails, the kubelet kills the Container, and the Container is subjected to its restart policy. If a Container does not provide a liveness probe, the default state is always Success.
The ReadinessProbe indicates whether the Container is ready to service requests. If the readiness probe fails, the endpoints controller removes the Pod’s IP address from the endpoints of all Services that match the Pod. The default state of readiness before the initial delay is Failure. If a Container does not provide a readiness probe, the default state is Success.
The default configuration for the probes can be edited within the following helm structure:
esigncore:
  probes:
    authorization:
      enabled: true # Enables authentication on probe requests
      header: Authorization # Header name; if a custom authorization module is defined this can be updated as well
      token: "Basic YWRtaW46YWRtaW4" # Update the token with user:password information in Base64
    liveness:
      failureThreshold: 5
      initialDelaySeconds: 120
      path: eSignServer/rest/v1/monitoring/health # Default health endpoint
      periodSeconds: 30
      port: 8080 # Default application port
      successThreshold: 1
      timeoutSeconds: 25
    readiness:
      failureThreshold: 5
      initialDelaySeconds: 30
      path: eSignServer/rest/v1/monitoring/health # Default health endpoint
      periodSeconds: 30
      port: 8080 # Default application port
      successThreshold: 1
      timeoutSeconds: 15
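The authorization token is the Base64 encoding of user:password. For the default admin:admin pair it can be generated locally (note that standard Base64 output includes trailing = padding):

```shell
# Build the Basic auth token for the probe authorization header.
printf '%s' 'admin:admin' | base64   # prints YWRtaW46YWRtaW4=
```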
Configure eSign application
The next set of configurations is application-based and exclusive to the eSign Server chart. The following configurations can be achieved by manipulating the default configmap or by creating a custom configmap with custom configuration. It is, however, mandatory that these configmaps exist on the cluster and are correctly associated with the eSign Server chart in order to guarantee the proper functioning of the application.
Logging level
eSign Server uses a log4j configuration to define the logging level of the application. If, to triage an issue with the application, it is necessary to change the logging level, this can be achieved by editing the esign-cm-configs configmap.
In the data property you can define a log4j2.xml file that will replace the default log4j2 configuration. For log4j2.xml configuration examples please check the official documentation.
Additionally, if it is necessary to make these configurations permanent, it is possible to disable the default configmap creation and make the eSign Server chart use a custom-defined configmap.
esigncore:
  configmaps:
    configs:
      name: CUSTOM_CONFIGMAP
      create: false
eSign config
It is also possible to configure the esign.config file in Kubernetes by editing the esign-cm-configs configmap.
For more information about the contents and structure of esign.config, see here. |
Additionally, if it is necessary to make these configurations permanent, it is possible to disable the default configmap creation and make the eSign Server chart use a custom-defined configmap.
esigncore:
  configmaps:
    configs:
      name: CUSTOM_CONFIGMAP
      create: false
License
eSign ships with a license-free demo mode for evaluation and early development purposes. Though ideal for a quick start and exploration, the demo mode limits some features and cannot be used in production environments.
For Kubernetes deployments there are two possibilities to configure the user's license. The first option is to enable the default license secret and pass your license content in its value, as such:
esigncore:
  secret:
    license:
      enabled: true
      create: true
      data:
        license: <<base64 string with license content>>
The second possibility is to create a custom secret, and provide its reference in the eSign Server helm chart.
esigncore:
  secret:
    license:
      name: CUSTOM_LICENSE_SECRET_NAME
      enabled: true
      create: false
The content of this secret must contain a data object with a property named license.lic:
data:
  license.lic: <<base64 string with license content>>
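A custom secret of that shape could be declared with a manifest like the one below. This is a sketch: the secret name matches the placeholder used above, and the license value is the Base64-encoded content of your license file.

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: CUSTOM_LICENSE_SECRET_NAME
type: Opaque
data:
  license.lic: <<base64 string with license content>>
```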
Signing certificate
It is vital that documents and signatures carry your institution’s digital certificate, as this is the guarantee that the document was signed within your organization. By default eSign signs the documents with a demo signing certificate; below are the required steps to configure eSign to use your institution’s certificate.
Just as for the licenses, this signing certificate can be enabled by having a custom secret on the cluster and associating it with the eSign Server helm chart.
esigncore:
  secret:
    license:
      name: CUSTOM_CERTIFICATE_SECRET_NAME
      enabled: true
      create: false
The content of this secret must contain a data object with a property named esign.integrity.p12:
data:
  esign.integrity.p12: <<base64 string with p12 file content>>
Alternatively, you can pass the certificate content directly in the value of the secret.
esigncore:
  secret:
    license:
      enabled: true
      create: false
      data:
        certificate: <<base64 string with p12 file content>>
Languages
To configure language packages it is necessary to edit the esign-cm-langs configmap, or to provide the eSign Server helm chart with the name of a custom language configmap.
Language files must follow a file name pattern: the first part describes whether the language configuration is for the client or the server, followed by an "_" and the name of the file, e.g. "l10n.client_esign.en-US.properties".
data:
  l10n.client_esign.en-US.properties: |-
    ...
  l10n.server_language_en.properties: |-
    ...
esigncore:
  configmaps:
    lang:
      enabled: true
      name: CUSTOM_LANGUAGE_CONFIGMAP
Ingress
The eSign helm charts do not install any ingress out-of-the-box. This means that, in case eSign should be exposed to the internet, a route (OpenShift) or ingress rules must be created for each environment/cluster.
This ingress/route should point to the client-esigncore service on port 8080.
Scaling
Scaling the application can be achieved by updating the number of running replicas, either by changing the default replica count in the chart values:
esigncore:
  app:
    stateless:
      replicaCount: N
Or by manually updating the number of running instances:
# scaling stateless replicas
$ kubectl scale deploy esigncore --replicas=N
# scaling stateful replicas
$ kubectl scale sts esigncore --replicas=N
Auto Scaling
Horizontal Pod Autoscaler (HPA) is the Kubernetes capability to automatically scale the number of Pods of a StatefulSet or Deployment.
This automation can be configured taking into consideration the metrics of the resources being used by the Pods.
Follow the official documentation to apply HPA configurations.
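As a starting point, a minimal HPA manifest for the stateless deployment might look like the sketch below. The target name esigncore is an assumption based on the scaling commands above, and CPU-based autoscaling requires that resource requests are defined as shown in the Container resources section.

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: esigncore
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: esigncore
  minReplicas: 1
  maxReplicas: 5
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 80
```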