Hardware Requirements
The following sections describe the hardware sizing guidelines to support the solution.
Use the requirements described in this chapter only as a guideline. They do not take into account all of the factors that affect scalability and performance.
Bare Metal Sizing Guidelines
Minimum hardware requirements:
- 2 server instances
- 2.3 GHz CPU (4 cores)
- 4 GB of RAM dedicated to the JVM
- 4 GB of SSD storage (configuration and rotating logs)

Adjust RAM to match your latency requirements, average PDF size, and the number of concurrent sessions.
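As an illustrative configuration sketch only, a start script that pins the JVM heap at the recommended 4 GB could look like the following; the jar name (`pdf-service.jar`) is a placeholder and not a documented part of the solution:

```shell
#!/bin/sh
# Hypothetical start script; the jar name is a placeholder.
# -Xms/-Xmx fix the heap at the 4 GB sizing guideline; raise -Xmx for
# larger average PDF sizes or more concurrent sessions.
exec java -Xms4g -Xmx4g -jar pdf-service.jar
```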
Kubernetes Requirements
Minimum hardware requirements to satisfy at least a High Availability (HA) deployment:
- 2 host nodes with 4 cores each and at least 16 GB of RAM;
- at least 500 GB of SSD storage per host node;
- node interconnect of at least 1 Gigabit, preferably 10 Gigabit;
- swap file or partition disabled (as per Docker requirements).
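Swap can be disabled on each host node with standard Linux commands, for example (a sketch; adapt the `/etc/fstab` edit to your distribution's conventions):

```shell
# Turn swap off immediately on this node.
sudo swapoff -a

# Comment out swap entries in /etc/fstab so swap stays disabled
# after a reboot (a .bak backup of the file is kept).
sudo sed -i.bak '/\sswap\s/ s/^/#/' /etc/fstab
```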
These requirements can be backed either by physical machines (e.g. an on-premises Docker Enterprise deployment) or by virtual machines (e.g. managed Kubernetes instances on Azure, AWS, OpenShift, etc.).
The above configurations are reference values for a minimal High Availability deployment of a single environment, without any kind of Geo-Replication or Disaster Recovery. Actual values depend on the components actually deployed and on KPI targets.
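As a quick sanity check, the aggregate capacity of the minimal two-node HA reference layout above works out as follows:

```shell
# Aggregate capacity of the minimal two-node HA reference layout.
NODES=2
CORES_PER_NODE=4
RAM_GB_PER_NODE=16
echo "total cores: $((NODES * CORES_PER_NODE))"     # total cores: 8
echo "total RAM (GB): $((NODES * RAM_GB_PER_NODE))" # total RAM (GB): 32
```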
Environment Requirements
This section describes the environment setup, i.e. the required operating system and software versions.
To set up the environment, the following requirements must be fulfilled:
- Kubernetes (commonly stylized as k8s) version 1.18+ (validated against 1.18.x and 1.20.x);
- Helm (version matching the k8s version), validated against versions 2.16.x and 3.5.x;
- Docker (version matching the k8s version);
- Linux as the host operating system (Windows as a host is explicitly not supported); it must support, at a minimum, the Docker version required by k8s. Supported distributions:
  - Red Hat Enterprise Linux 7.0+ based distributions (i.e. RHEL 7+, CentOS 7+, Oracle Linux 7+);
  - SUSE Linux Enterprise based distributions, version 12.2+/15+;
  - Debian based distributions, version 9+ (Stretch);
  - Ubuntu Server LTS based distributions starting with 16.04 (i.e. 18.04, 20.04, etc.);
- If necessary, see the corresponding upstream project documentation.
Non-server variants of Linux should allow the solution to work as expected but are not officially supported.
Having SELinux enabled may require additional configuration that is out of the scope of the support provided by the solution. Please check the corresponding Linux distribution documentation.
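The version requirements above can be verified on an existing host with each tool's own commands, for example:

```shell
# Check installed versions against the validated ranges.
kubectl version --short                       # expect v1.18.x or v1.20.x
helm version --short                          # expect v2.16.x or v3.5.x
docker version --format '{{.Server.Version}}'

# Confirm the host runs a supported server distribution.
cat /etc/os-release
```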
Sizing Considerations
This section describes component scaling considerations, i.e. which components should scale, when they should scale, and the expected loads for sizing.
In the context of a Kubernetes cluster deployment there are two kinds of scaling: host scaling and component scaling. Host scaling refers to the number of host nodes (or virtual machines) backing the cluster, while component scaling refers to scaling individual components (or containers).
Host scaling should be performed when the backing nodes start experiencing CPU, memory, or I/O pressure, in which case a scale-up operation should be considered. Conversely, when the nodes are using very few of the cluster's resources, a scale-down operation may be considered.
Component scaling should be performed, or configured to run automatically, as a scale-up when there is enough available host capacity and an increase in demand is expected. Conversely, in low-demand situations a scale-down operation can free up resources and allow a host scale-down.
The components that should scale are usually in the Application Layer. Scaling other components, such as the relational database, should be considered only according to the specifics of the use case.
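As a sketch of component scaling in Kubernetes (the deployment name `app-frontend` is a placeholder, not part of the solution):

```shell
# Manually scale an application-layer deployment up to 4 replicas.
kubectl scale deployment app-frontend --replicas=4

# Or let Kubernetes scale it automatically between 2 and 6 replicas
# when average CPU utilization exceeds 70%.
kubectl autoscale deployment app-frontend --min=2 --max=6 --cpu-percent=70
```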