Merge branch 'pegasystems:master' into srs-oauth-secret
maracle6 authored Dec 12, 2023
2 parents 68526fb + f453712 commit 3dcdc57
Showing 26 changed files with 638 additions and 125 deletions.
2 changes: 2 additions & 0 deletions README.md
Original file line number Diff line number Diff line change
@@ -146,6 +146,8 @@ Digest: <encryption verification>
Status: Downloaded pega-docker.downloads.pega.com/platform/pega:<version>
```

All Docker images for Pega Platform releases that are in Standard Support undergo a nightly rebuild that applies the latest available updates and patches to all third-party components. To take advantage of these updates, you must redeploy your Pega Platform with the latest available images. Pega does not guarantee nightly rebuilds for Pega Platform releases in Extended Support and stops rebuilding images for Pega Platform releases that are out of Extended Support.

For details about downloading and then pushing Docker images to your repository for your deployment, see [Using Pega-provided Docker images](https://docs.pega.com/bundle/platform-88/page/platform/deployment/client-managed-cloud/pega-docker-images-manage.html).

From Helm chart versions `2.2.0` and above, update your Pega Platform version to the latest patch version.
12 changes: 12 additions & 0 deletions charts/backingservices/Makefile
@@ -26,3 +26,15 @@ purge-es-secrets:

external-es-secrets:
kubectl create secret generic srs-certificates --from-file=$(PATH_TO_CERTIFICATE) --namespace=$(NAMESPACE)

purge-srs-secrets:
kubectl delete secrets srs-certificates --namespace=$(NAMESPACE) || true

purge-secrets: purge-es-secrets
make purge-srs-secrets

update-secrets: purge-secrets
make es-prerequisite

update-external-es-secrets: purge-srs-secrets
make external-es-secrets
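
Assuming the `NAMESPACE` and `PATH_TO_CERTIFICATE` variables the existing targets already expect, the new targets might be invoked as in the following sketch (placeholder values, not a verified command line, and it requires a configured kubectl context):

```shell
# Rotate only the SRS certificate secret: delete it (a missing secret is
# tolerated via the `|| true` in purge-srs-secrets), then recreate it.
make update-external-es-secrets \
    NAMESPACE=mypega-backingservices \
    PATH_TO_CERTIFICATE=./certs/elastic-certificates.p12
```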
36 changes: 30 additions & 6 deletions charts/backingservices/charts/srs/README.md

Large diffs are not rendered by default.

@@ -77,6 +77,8 @@ spec:
key: password
- name: PATH_TO_TRUSTSTORE
value: "/usr/share/{{ .Values.srsStorage.certificateName | default "elastic-certificates.p12"}}"
- name: PATH_TO_KEYSTORE
value: "{{ .Values.srsStorage.certificatePassword | default ""}}"
{{- end}}
- name: APPLICATION_HOST
value: "0.0.0.0"
3 changes: 2 additions & 1 deletion charts/backingservices/requirements.yaml
@@ -3,9 +3,10 @@
# NOTE: For kubernetes version >=1.25 or Elasticsearch version 7.17.9,
# use 7.17.3 for the elasticsearch 'version' parameter below (for Elasticsearch version 7.17.9, you will still use 7.17.9 in the backingservices values.yaml).
# To disable deploying Elasticsearch in SRS, set the 'srs.srsStorage.provisionInternalESCluster' parameter in backingservices values.yaml to false.
# The dependencies.version parameter refers to the Elasticsearch Helm chart version, not the Elasticsearch server version.
dependencies:
- name: elasticsearch
version: "7.10.2"
version: "7.17.3"
repository: https://helm.elastic.co/
condition: srs.srsStorage.provisionInternalESCluster
- name: constellation
11 changes: 7 additions & 4 deletions charts/backingservices/values.yaml
@@ -53,8 +53,10 @@ srs:
tls:
enabled: false
# To specify a certificate used to authenticate an external Elasticsearch service (with tls.enabled: true and srsStorage.provisionInternalESCluster: false),
# uncomment the following line to specify the TLS certificate name for your Elasticsearch service.
# uncomment the following lines to specify the TLS certificate name and password for your Elasticsearch service.
# If the certificate does not use a password, leave certificatePassword empty (the default).
# certificateName: "Certificate_Name"
# certificatePassword: "password"
# Set srs.srsStorage.basicAuthentication.enabled: true to enable the use of basic authentication to your Elasticsearch service
# whether it is running as an internalized or externalized service in your SRS cluster.
basicAuthentication:
@@ -80,9 +82,10 @@ constellation:
# based on helm charts defined at https://github.com/elastic/helm-charts/tree/master/elasticsearch and may be modified
# as per runtime and storage requirements.
elasticsearch:
# for internally provisioned elasticsearch version is set to 7.10.2. Use this imageTag configuration to update it to 7.16.3 or
# 7.17.9 if required. However, we strongly recommend to use version 7.17.9.
imageTag: 7.10.2
# For an internally provisioned Elasticsearch server, the imageTag parameter defaults to 7.17.9, which is the recommended Elasticsearch server version
# for k8s version >= 1.25.
# Use this parameter to change it to 7.10.2 or 7.16.3 for k8s versions < 1.25, and make sure to also update the Elasticsearch Helm chart version in requirements.yaml.
imageTag: 7.17.9
# Permit co-located instances for solitary minikube virtual machines.
antiAffinity: "soft"
# Shrink default JVM heap.
1 change: 0 additions & 1 deletion charts/pega/Ephemeral-web-tier-values.yaml
@@ -94,7 +94,6 @@ global:
<env name="database/databases/PegaRULES/dataSource" value="java:comp/env/jdbc/PegaRULES"/>
<env name="database/databases/PegaDATA/dataSource" value="java:comp/env/jdbc/PegaRULES"/>
<env name="security/urlaccesslog" value="NORMAL" />
<env name="security/urlaccessmode" value="WARN" />
<!-- Most nodes have a 'default' classification and for these nodes, no additional changes need to be made to this file. However,
if this node has a non-general purpose, for example: 'Agent', then the node classification setting should be added to this file. -->
<!--env name="initialization/nodeclassification" value="Agent" / -->
21 changes: 19 additions & 2 deletions charts/pega/README.md
@@ -270,7 +270,7 @@ Node classification is the process of separating nodes by purpose, predefining t

Specify the list of Pega node types for this deployment. For more information about valid node types, see the Pega Community article on [Node Classification].

[Node types for client-managed cloud environments](https://community.pega.com/knowledgebase/articles/performance/node-classification)
[Node types for VM-based and containerized deployments](https://docs.pega.com/bundle/platform-88/page/platform/system-administration/node-types-on-premises.html)

Example:

@@ -451,7 +451,7 @@ Parameter | Description | Defau
`cpuLimit` | CPU limit for pods in the current tier. | `4`
`memRequest` | Initial memory request for pods in the current tier. | `12Gi`
`memLimit` | Memory limit for pods in the current tier. | `12Gi`
`initialHeap` | Specify the initial heap size of the JVM. | `4096m`
`initialHeap` | Specify the initial heap size of the JVM. | `8192m`
`maxHeap` | Specify the maximum heap size of the JVM. | `8192m`

### JVM Arguments
@@ -652,6 +652,23 @@ tier:
webXML: |-
...
```
### Pega compressed configuration files

To use [Pega configuration files](https://github.com/pegasystems/pega-helm-charts/blob/master/charts/pega/README.md#pega-configuration-files) in compressed format when deploying Pega Platform, replace each file's content with its compressed equivalent by completing the following steps:

1) Compress each configuration file using the following command in your local terminal:
```
cat "<path_to_actual_uncompressed_file_in_local>" | gzip -c | base64
```
Example for a prconfig.xml file:
```
cat "pega-helm-charts/charts/pega/config/deploy/prconfig.xml" | gzip -c | base64
```
2) For each configuration file, replace the file content with the output of the corresponding command.
3) Set the `compressedConfigurations` in values.yaml to `true`, as in the following example:
```yaml
compressedConfigurations: true
```
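
To confirm the pipeline is lossless before committing the output to your values, you can round-trip a sample line: compress one way (`gzip -c | base64`), then reverse it (`base64 -d | gunzip`). This sketch assumes GNU coreutils `base64` (macOS spells the decode flag `-D`):

```shell
# Round-trip the compression pipeline on a sample prconfig line.
sample='<env name="security/urlaccesslog" value="NORMAL" />'
encoded=$(printf '%s' "$sample" | gzip -c | base64)
decoded=$(printf '%s' "$encoded" | base64 -d | gunzip)
# The decoded text must match the original exactly.
[ "$decoded" = "$sample" ] && echo "round-trip OK"
```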

### Pega diagnostic user

4 changes: 2 additions & 2 deletions charts/pega/charts/hazelcast/values.yaml
@@ -38,8 +38,8 @@ client:
clusterName: "PRPC"
# Server side settings for Hazelcast
server:
java_opts: "-Xms820m -Xmx820m -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/opt/hazelcast/logs/heapdump.hprof
-XX:+UseParallelGC -Xlog:gc*,gc+phases=debug:file=/opt/hazelcast/logs/gc.log:time,pid,tags:filecount=5,filesize=3m"
java_opts: "-XX:MaxRAMPercentage=80.0 -XX:InitialRAMPercentage=80.0 -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/opt/hazelcast/logs/heapdump.hprof
-XX:+UseParallelGC -Xlog:gc*,gc+phases=debug:file=/opt/hazelcast/logs/gc.log:time,pid,tags:filecount=5,filesize=3m -XshowSettings:vm"
jmx_enabled: "true"
health_monitoring_level: "OFF"
operation_generic_thread_count: ""
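
The switch from a fixed `-Xms820m -Xmx820m` to `-XX:MaxRAMPercentage=80.0 -XX:InitialRAMPercentage=80.0` ties the Hazelcast heap to the container's memory limit instead of a hard-coded value. The arithmetic works out as follows (an illustrative sketch, not part of the chart):

```shell
# Approximate heap (in MiB) chosen by -XX:MaxRAMPercentage=80
# for a given container memory limit in bytes.
heap_mib() { echo $(( $1 * 80 / 100 / 1024 / 1024 )); }

# A 1 GiB pod gets ~819 MiB of heap -- close to the old fixed 820m --
# while larger pods now scale automatically with their limits.
heap_mib $(( 1 * 1024 * 1024 * 1024 ))   # prints 819
heap_mib $(( 4 * 1024 * 1024 * 1024 ))   # prints 3276
```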
1 change: 0 additions & 1 deletion charts/pega/config/deploy/prconfig.xml
@@ -6,7 +6,6 @@
<env name="database/databases/PegaRULES/dataSource" value="java:comp/env/jdbc/PegaRULES"/>
<env name="database/databases/PegaDATA/dataSource" value="java:comp/env/jdbc/PegaRULES"/>
<env name="security/urlaccesslog" value="NORMAL" />
<env name="security/urlaccessmode" value="WARN" />
<!-- Most nodes have a 'default' classification and for these nodes, no additional changes need to be made to this file. However,
if this node has a non-general purpose, for example: 'Agent', then the node classification setting should be added to this file. -->
<!--env name="initialization/nodeclassification" value="Agent" / -->
2 changes: 1 addition & 1 deletion charts/pega/config/deploy/prlog4j2.xml
@@ -160,7 +160,7 @@
<AppenderRef ref="DATAFLOW"/>
</Logger>
<!-- Added for Usage Metrics -->
<AsyncLogger name="com.pega.pegarules.session.internal.usagemetrics" additivity="true" level="USAGE">
<AsyncLogger name="com.pega.pegarules.session.internal.usagemetrics" additivity="true" level="info">
<AppenderRef ref="USAGEMETRICS"/>
</AsyncLogger>
</Loggers>
2 changes: 1 addition & 1 deletion charts/pega/templates/_helpers.tpl
@@ -241,7 +241,7 @@ until cqlsh -u {{ $cassandraUser | quote }} -p {{ $cassandraPassword | quote }}
{{- if .node.initialHeap }}
value: "{{ .node.initialHeap }}"
{{- else }}
value: "4096m"
value: "8192m"
{{- end }}
# Maximum JVM heap size, equivalent to -Xmx
- name: MAX_HEAP
4 changes: 4 additions & 0 deletions charts/pega/templates/_pega-pdb.tpl
@@ -10,6 +10,10 @@ kind: PodDisruptionBudget
metadata:
name: {{ .name }}-pdb
namespace: {{ .root.Release.Namespace }}
{{- if .pdb.labels }}
labels:
{{ toYaml .pdb.labels | indent 4 }}
{{- end }}
spec:
{{- if .pdb.minAvailable }}
minAvailable: {{ .pdb.minAvailable }}
4 changes: 4 additions & 0 deletions charts/pega/templates/_pega_hpa.tpl
@@ -10,6 +10,10 @@ kind: HorizontalPodAutoscaler
metadata:
name: {{ .name | quote}}
namespace: {{ .root.Release.Namespace }}
{{- if .hpa.labels }}
labels:
{{ toYaml .hpa.labels | indent 4 }}
{{- end }}
spec:
scaleTargetRef:
apiVersion: apps/v1
6 changes: 6 additions & 0 deletions charts/pega/templates/pega-environment-config.yaml
@@ -43,6 +43,12 @@ data:
JDBC_TIMEOUT_PROPERTIES_RO: {{ .Values.global.jdbc.readerConnectionTimeoutProperties }}
{{- else }}
JDBC_TIMEOUT_PROPERTIES_RO: ""
{{- end }}
# Compression flag indicating that the Pega configuration files must be decompressed during installation.
{{- if .Values.global.compressedConfigurations }}
IS_PEGA_CONFIG_COMPRESSED: "{{ .Values.global.compressedConfigurations }}"
{{- else }}
IS_PEGA_CONFIG_COMPRESSED: "false"
{{- end }}
# Rules schema of the Pega installation
{{ if (eq (include "performUpgradeAndDeployment" .) "true") }}
4 changes: 4 additions & 0 deletions charts/pega/values.yaml
@@ -121,6 +121,10 @@ global:
serviceHost: "API_SERVICE_ADDRESS"
httpsServicePort: "SERVICE_PORT_HTTPS"

# Set the `compressedConfigurations` parameter to `true` when the configuration files under charts/pega/config/deploy are in compressed format.
# For more information, see the “Pega compressed configuration files” section in the Pega Helm chart documentation.
compressedConfigurations: false

# Specify the Pega tiers to deploy
tier:
- name: "web"
4 changes: 2 additions & 2 deletions docs/upgrading-pega-deployment-zero-downtime.md
@@ -113,7 +113,7 @@ To complete an upgrade with zero downtime, configure the following settings in
- In the installer section of the Helm chart, update the following:

- Specify `installer.installerMountVolumeClaimName` persistent Volume Claim name. This is a client-managed PVC for mounting upgrade artifacts.
- Specify `installer.upgradeType: "Zero-downtime"` to use the zero-downtime upgrade process.
- Specify `installer.upgradeType: "zero-downtime"` to use the zero-downtime upgrade process.
- Specify `installer.targetRulesSchema: "<target-rules-schema-name>"` and `installer.targetDataSchema: "<target-data-schema-name>"` for the new target and data schema name that the process creates in your existing database for the upgrade process.
   - Specify `installer.upgrade.automaticResumeEnabled` to support resuming the upgrade from the point of failure.

@@ -202,4 +202,4 @@ In this document, you specify that the Helm chart always “deploys” by using
- `action.execute: upgrade-deploy`
- `installer.upgrade.upgradeType: custom`
- `installer.upgrade.upgradeSteps: disable_cluster_upgrade` to run disable_cluster_upgrade
- Resume the upgrade process by using the `helm upgrade release --namespace mypega` command. For more information, see [Upgrading your Pega Platform deployment using the command line](https://github.com/pegasystems/pega-helm-charts/blob/master/docs/upgrading-pega-deployment-zero-downtime.md#upgrading-your-pega-platform-deployment-using-the-command-line).
3 changes: 3 additions & 0 deletions terratest/src/test/backingservices/srs-deployment_test.go
@@ -206,6 +206,9 @@ func VerifyDeployment(t *testing.T, pod *k8score.PodSpec, expectedSpec srsDeploy
require.Equal(t, "PATH_TO_TRUSTSTORE", pod.Containers[0].Env[envIndex].Name)
require.Equal(t, "/usr/share/elastic-certificates.p12", pod.Containers[0].Env[envIndex].Value)
envIndex++
require.Equal(t, "PATH_TO_KEYSTORE", pod.Containers[0].Env[envIndex].Name)
require.Equal(t, "", pod.Containers[0].Env[envIndex].Value)
envIndex++
}
require.Equal(t, "APPLICATION_HOST", pod.Containers[0].Env[envIndex].Name)
require.Equal(t, "0.0.0.0", pod.Containers[0].Env[envIndex].Value)
@@ -49,7 +49,7 @@ func VerifyClusteringServiceEnvironmentConfig(t *testing.T, yamlContent string,
UnmarshalK8SYaml(t, statefulInfo, &clusteringServiceEnvConfigMap)
clusteringServiceEnvConfigData := clusteringServiceEnvConfigMap.Data
require.Equal(t, clusteringServiceEnvConfigData["NAMESPACE"], "default")
require.Equal(t, clusteringServiceEnvConfigData["JAVA_OPTS"], "-Xms820m -Xmx820m -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/opt/hazelcast/logs/heapdump.hprof -XX:+UseParallelGC -Xlog:gc*,gc+phases=debug:file=/opt/hazelcast/logs/gc.log:time,pid,tags:filecount=5,filesize=3m")
require.Equal(t, clusteringServiceEnvConfigData["JAVA_OPTS"], "-XX:MaxRAMPercentage=80.0 -XX:InitialRAMPercentage=80.0 -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/opt/hazelcast/logs/heapdump.hprof -XX:+UseParallelGC -Xlog:gc*,gc+phases=debug:file=/opt/hazelcast/logs/gc.log:time,pid,tags:filecount=5,filesize=3m -XshowSettings:vm")
require.Equal(t, clusteringServiceEnvConfigData["SERVICE_NAME"], "clusteringservice-service")
require.Equal(t, clusteringServiceEnvConfigData["MIN_CLUSTER_SIZE"], "3")
require.Equal(t, clusteringServiceEnvConfigData["JMX_ENABLED"], "true")
@@ -160,7 +160,7 @@
<AppenderRef ref="DATAFLOW"/>
</Logger>
<!-- Added for Usage Metrics -->
<AsyncLogger name="com.pega.pegarules.session.internal.usagemetrics" additivity="true" level="USAGE">
<AsyncLogger name="com.pega.pegarules.session.internal.usagemetrics" additivity="true" level="info">
<AppenderRef ref="USAGEMETRICS"/>
</AsyncLogger>
</Loggers>
@@ -6,7 +6,6 @@
<env name="database/databases/PegaRULES/dataSource" value="java:comp/env/jdbc/PegaRULES"/>
<env name="database/databases/PegaDATA/dataSource" value="java:comp/env/jdbc/PegaRULES"/>
<env name="security/urlaccesslog" value="NORMAL" />
<env name="security/urlaccessmode" value="WARN" />
<!-- Most nodes have a 'default' classification and for these nodes, no additional changes need to be made to this file. However,
if this node has a non-general purpose, for example: 'Agent', then the node classification setting should be added to this file. -->
<!--env name="initialization/nodeclassification" value="Agent" / -->
15 changes: 15 additions & 0 deletions terratest/src/test/pega/data/values_hpa_custom_label.yaml
@@ -0,0 +1,15 @@
---
global:
tier:
- name: "web"
hpa:
enabled: true
labels:
web-label: "somevalue"
web-other-label: "someothervalue"
- name: "batch"
hpa:
enabled: true
labels:
batch-label: "batchlabel"
batch-other-label: "anothervalue"
