Overview
These notes record the steps for a containerized Pega deployment, kept for later reference and research; they are intended for learning purposes only.
Background
Starting with Pega 8.7, containerized deployment is the recommended way to set up an environment, so these notes investigate how to deploy Pega Infinity 23 in containers.
Prerequisites
Make sure the following software is installed; versions can be adjusted to your needs (a quick version check is sketched after the list)
- helm: v3.15.4 (installation notes)
- kubernetes: v1.29.3 (installation notes)
- containerd: v1.7.14 (installation notes)
- postgresql: 11.3 (Linux installation notes, containerized deployment notes)
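To confirm the installed versions before starting, a minimal check (adjust for your own environment; psql is only relevant if the PostgreSQL client is installed on this host):
helm version --short
kubectl version
containerd --version
psql --version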
Deployment Steps
Container image access request
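If you prefer not to keep the Pega registry credentials directly in the values files, one option is to create a Kubernetes image pull secret and reference it through the imagePullSecretNames parameters instead. A sketch, assuming the pega23-install namespace already exists and using the placeholder credentials from these notes (the secret name pega-registry-secret is arbitrary):
kubectl create secret docker-registry pega-registry-secret \
  --namespace pega23-install \
  --docker-server=pega-docker.downloads.pega.com \
  --docker-username=pega_provide_UserID \
  --docker-password=pega_provide_APIKey
# then set imagePullSecretNames: ["pega-registry-secret"] in pega.yaml and backingservices.yaml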
Download the pega-helm-charts GitHub repository
Clone pega-helm-charts with git
root@k8s-master:/vm-server/dev/pega23/helm# git clone https://github.com/pegasystems/pega-helm-charts.git
Add the Pega Helm repository
Add the Pega Helm repository with the helm repo add command
root@k8s-master:/vm-server/dev/pega23/helm# helm repo add pega https://pegasystems.github.io/pega-helm-charts
"pega" has been added to your repositories
Verify the repository list
root@k8s-master:/vm-server/dev/pega23/helm# helm search repo pega
NAME CHART VERSION APP VERSION DESCRIPTION
pega/pega 3.24.2 Pega installation on kubernetes
pega/addons 3.24.2 1.0 A Helm chart for Kubernetes
pega/backingservices 3.24.2 Helm Chart to provision the latest Search and R...
Download the Helm values files
Download the values files for pega/pega, pega/addons and pega/backingservices as needed.
These notes cover pega/pega and pega/backingservices.
Download pega/pega
root@k8s-master:/vm-server/dev/pega23/helm# helm inspect values pega/pega > pega.yaml
Download pega/backingservices
root@k8s-master:/vm-server/dev/pega23/helm# helm inspect values pega/backingservices > backingservices.yaml
List the files
root@k8s-master:/vm-server/dev/pega23/helm# ls -alh
total 52K
drwxr-xr-x 2 root root 4.0K Aug 28 17:09 .
drwxr-xr-x 5 root root 4.0K Aug 28 16:53 ..
-rw-r--r-- 1 root root 9.0K Aug 28 17:09 backingservices.yaml
-rw-r--r-- 1 root root 30K Aug 28 17:08 pega.yaml
Install the Pega database
Note: make sure the PostgreSQL database is available!
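A quick connectivity check from inside the cluster (a sketch; it assumes the pega23-install-svc database service and the postgres/postgres credentials configured in pega.yaml below):
kubectl run pg-check --rm -it --restart=Never --image=postgres:11 -n pega23-install \
  -- psql "postgresql://postgres:postgres@pega23-install-svc:5432/postgres" -c "\l"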
Update the pega.yaml values file. Only the parts that need to be changed are recorded here; everything else is left unchanged.
---
global:
# This values.yaml file is an example. For more information about
# each configuration option, see the project readme.
# Enter your Kubernetes provider.
provider: "k8s"
# Enter a name for the deployment if using multi-tenant services such as the Search and Reporting Service.
customerDeploymentId:
deployment:
# The name specified will be used to prefix all of the Pega pods (replacing "pega" with something like "app1-dev").
name: "pega"
# Deploy Pega nodes
actions:
execute: "install"
******
jdbc:
# url Valid values are:
#
# Oracle jdbc:oracle:thin:@//localhost:1521/dbName
# IBM DB/2 z / OS jdbc:db2://localhost:50000/dbName
# IBM DB/2 jdbc:db2://localhost:50000/dbName:fullyMaterializeLobData=true;fullyMaterializeInputStreams=true;
# progressiveStreaming=2;useJDBC4ColumnNameAndLabelSemantics=2;
# SQL Server jdbc:sqlserver://localhost:1433;databaseName=dbName;selectMethod=cursor;sendStringParametersAsUnicode=false
# PostgreSQL jdbc:postgresql://localhost:5432/dbName
url: "jdbc:postgresql://pega23-install-svc:5432/postgres"
# driverClass -- jdbc class. Valid values are:
#
# Oracle oracle.jdbc.OracleDriver
# IBM DB/2 com.ibm.db2.jcc.DB2Driver
# SQL Server com.microsoft.sqlserver.jdbc.SQLServerDriver
# PostgreSQL org.postgresql.Driver
driverClass: "org.postgresql.Driver"
# pega.database.type Valid values are: mssql, oracledate, udb, db2zos, postgres
dbType: "postgres"
# For databases that use multiple JDBC driver files (such as DB2), specify comma separated values for 'driverUri'
driverUri: "https://jdbc.postgresql.org/download/postgresql-42.7.3.jar"
username: "postgres"
password: "postgres"
# To avoid exposing username & password, leave the jdbc.password & jdbc.username parameters empty (no quotes),
# configure JDBC username & password parameters in the External Secrets Manager, and enter the external secret for the credentials
# make sure the keys in the secret should be DB_USERNAME and DB_PASSWORD respectively
external_secret_name: ""
# CUSTOM CONNECTION PROPERTIES
# Use the connectionProperties parameter to pass connection settings to your deployment
# by adding a list of semi-colon-delimited required connection setting. The list string must end with ";".
# For example, you can set a custom authentication using Azure Managed Identity and avoid using a password.
# To pass an Authentication method and a managed identity, MSI Client ID,
# set: connectionProperties: "Authentication=ActiveDirectoryMSI;msiClientId=<your Azure Managed Identity>;"
connectionProperties: ""
rulesSchema: "pega_rules"
dataSchema: "pega_data"
customerDataSchema: "cust_data"
******
docker:
# If using a custom Docker registry, supply the credentials here to pull Docker images.
registry:
url: "pega-docker.downloads.pega.com"
username: "pega_provide_UserID"
password: "pega_provide_APIKey"
# To avoid exposing Docker registry details, create secrets to manage your Docker registry credentials.
# Specify secret names as an array of comma-separated strings in double quotation marks using the imagePullSecretNames parameter. For example: ["secret1", "secret2"]
imagePullSecretNames: []
# Docker image information for the Pega docker image, containing the application server.
******
# Cassandra automatic deployment settings.
cassandra:
enabled: false
persistence:
enabled: true
resources:
requests:
memory: "4Gi"
cpu: 2
limits:
memory: "8Gi"
cpu: 4
******
# Pega Installer settings.
installer:
image: "pega-docker.downloads.pega.com/platform/installer:8.23.1"
# Set the initial administrator@pega.com password for your installation. This will need to be changed at first login.
# The adminPassword value cannot start with "@".
adminPassword: "install"
Note: if the Cassandra database is not needed, set the cassandra enabled parameter to false.
Start the installation
root@k8s-master:/vm-server/dev/pega23/helm# helm install mypega pega/pega --namespace pega23-install --values pega.yaml
NAME: mypega
LAST DEPLOYED: Sat Sep 21 13:00:32 2024
NAMESPACE: pega23-install
STATUS: deployed
REVISION: 1
TEST SUITE: None
Check the Helm release
root@k8s-master:/vm-server/dev/pega23/k8s_deploy# helm list -n pega23-install
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
mypega pega23-install 1 2024-09-21 13:00:32.365884441 +0800 HKT deployed pega-3.24.2
Check the pod status
root@k8s-master:/vm-server/dev/pega23/helm# kubectl get pod -n pega23-install
NAME READY STATUS RESTARTS AGE
pega-db-install-r4hrw 0/1 ContainerCreating 0 101s
Wait for the Pega images to be pulled and the Pega database to be installed. This takes some time; during the installation you can follow the log with the kubectl logs command.
root@k8s-master:/home/myserver# kubectl logs -f --tail 20 pega-db-install-r4hrw -n pega23-install
[java] 2.41 minutes, or 2.15% of the total time can be attributed to load 51349 records from IndexReference_20.jar
[java] 0.0 minutes, or 0.0% of the total time can be attributed to load 21 records from rule-html-section_pega-bix.zip
[java] 0.0 minutes, or 0.0% of the total time can be attributed to load 14 records from rule-obj-fieldvalue_pega-shareddata.zip
[java] 0.03 minutes, or 0.03% of the total time can be attributed to load 27 records from rule-obj-jsp.zip
[java] 0.0 minutes, or 0.0% of the total time can be attributed to load 22 records from rule-obj-fieldvalue_pega-lp-mobile.zip
[java] 0.04 minutes, or 0.04% of the total time can be attributed to load 372 records from rule-edit-validate.zip
[java] 0.01 minutes, or 0.01% of the total time can be attributed to load 1 records from rule-urlmappings.zip
[java]
[java] ************************************************************************
[java]
[copy] Copying 1 file to /opt/pega/kit/scripts/logs
[zip] Warning: skipping zip archive /opt/pega/kit/scripts/logs/Install Finalization-CollectedLogs_2024-09-21_07-36-20.zip because no files were included.
[echo] Cleaning up temp directory...
[delete] Deleting directory /opt/pega/temp/PegaInstallTemp-21-September-2024-07.36.20
Install Finalization:
[echo] PegaRULES Process Commander database load complete.
BUILD SUCCESSFUL
Total time: 156 minutes 0 seconds
(Installation on a mechanical hard drive is rather slow...)
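Instead of tailing the log, the installer job can also be waited on directly (a sketch; the job name pega-db-install is inferred from the pod name above):
kubectl get jobs -n pega23-install
kubectl wait --for=condition=complete job/pega-db-install -n pega23-install --timeout=4h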
Install Kafka as the external stream service for Pega
- Kafka installation reference: installation notes
The installation output is recorded below for later use when configuring the externalized stream service.
root@k8s-master:/vm-server/dev/pega23/kafka# helm install pega23-install-kafka bitnami/kafka -n pega23-install -f kafka.yaml
NAME: pega23-install-kafka
LAST DEPLOYED: Sat Sep 21 14:00:05 2024
NAMESPACE: pega23-install
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
CHART NAME: kafka
CHART VERSION: 30.1.3
APP VERSION: 3.8.0
** Please be patient while the chart is being deployed **
Kafka can be accessed by consumers via port 9092 on the following DNS name from within your cluster:
pega23-install-kafka.pega23-install.svc.cluster.local
Each Kafka broker can be accessed by producers via port 9092 on the following DNS name(s) from within your cluster:
pega23-install-kafka-controller-0.pega23-install-kafka-controller-headless.pega23-install.svc.cluster.local:9092
The CLIENT listener for Kafka client connections from within your cluster have been configured with the following security settings:
- SASL authentication
To connect a client to your Kafka, you need to create the 'client.properties' configuration files with the content below:
security.protocol=SASL_PLAINTEXT
sasl.mechanism=SCRAM-SHA-256
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
username="user1" \
password="$(kubectl get secret pega23-install-kafka-user-passwords --namespace pega23-install -o jsonpath='{.data.client-passwords}' | base64 -d | cut -d , -f 1)";
To create a pod that you can use as a Kafka client run the following commands:
kubectl run pega23-install-kafka-client --restart='Never' --image bitnami/kafka:3.7 --namespace pega23-install --command -- sleep infinity
kubectl cp --namespace pega23-install /path/to/client.properties pega23-install-kafka-client:/tmp/client.properties
kubectl exec --tty -i pega23-install-kafka-client --namespace pega23-install -- bash
PRODUCER:
kafka-console-producer.sh \
--producer.config /tmp/client.properties \
--broker-list pega23-install-kafka-controller-0.pega23-install-kafka-controller-headless.pega23-install.svc.cluster.local:9092 \
--topic test
CONSUMER:
kafka-console-consumer.sh \
--consumer.config /tmp/client.properties \
--bootstrap-server pega23-install-kafka.pega23-install.svc.cluster.local:9092 \
--topic test \
--from-beginning
WARNING: Rolling tag detected (bitnami/kafka:3.7), please note that it is strongly recommended to avoid using rolling tags in a production environment.
+info https://docs.vmware.com/en/VMware-Tanzu-Application-Catalog/services/tutorials/GUID-understand-rolling-tags-containers-index.html
WARNING: There are "resources" sections in the chart not set. Using "resourcesPreset" is not recommended for production. For production installations, please set the following values according to your workload needs:
- controller.resources
+info https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
⚠ SECURITY WARNING: Original containers have been substituted. This Helm chart was designed, tested, and validated on multiple platforms using a specific set of Bitnami and Tanzu Application Catalog containers. Substituting other containers is likely to cause degraded security and performance, broken chart features, and missing environment variables.
Substituted images detected:
- bitnami/kafka:%!s(float64=3.7)
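The generated SCRAM password for user1 is needed later for the jaasConfig value in pega.yaml; it can be read back from the secret referenced in the chart notes above (presumably the source of the password used in the stream section later in these notes):
kubectl get secret pega23-install-kafka-user-passwords -n pega23-install \
  -o jsonpath='{.data.client-passwords}' | base64 -d | cut -d , -f 1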
Deploy the Pega Backing Services
SRS (Search and Reporting Service)
Edit the backingservices.yaml values file
---
global:
imageCredentials:
registry: "pega-docker.downloads.pega.com"
username: "pega_provide_UserID"
password: "pega_provide_APIKey"
# Specify the value of your Kubernetes provider
k8sProvider: "k8s"
# Search and Reporting Service (SRS) Configuration
srs:
# Set srs.enabled=true to enable SRS
enabled: true
# specify unique name for the deployment based on org app and/or srs applicable environment name. eg: acme-demo-dev-srs
deploymentName: "pega23-install-srs"
# Configure the location of the busybox image that is used during the deployment process of
# the internal Elasticsearch cluster
busybox:
image: "alpine:3.18.3"
imagePullPolicy: "IfNotPresent"
srsRuntime:
# Number of pods to provision
replicaCount: 1
# docker image of the srs-service, platform-services/search-n-reporting-service:dockerTag
srsImage: "platform-services/search-n-reporting-service:1.28.1"
# To avoid exposing Docker credentials, optionally create a separate Docker config secret.
# Specify secret names as an array of comma-separated strings. For example: ["secret1", "secret2"]
imagePullSecretNames: []
env:
# AuthEnabled may be set to true when there is an authentication mechanism in place between SRS and Pega Infinity.
AuthEnabled: false
# When `AuthEnabled` is `true`, enter the appropriate public key URL. When `AuthEnabled` is `false`(default), leave this parameter empty.
OAuthPublicKeyURL: ""
******
# as per runtime and storage requirements.
elasticsearch:
# For internally provisioned Elasticsearch server, the imageTag parameter is set by default to 7.17.9, which is the
# recommended Elasticsearch server version for k8s version >= 1.25.
# Use this parameter to change it to 7.10.2 or 7.16.3 for k8s version < 1.25 and make sure to update the Elasticsearch helm chart version in requirements.yaml.
imageTag: 7.17.9
replicas: 1
# Permit co-located instances for solitary minikube virtual machines.
antiAffinity: "soft"
# Shrink default JVM heap.
esJavaOpts: "-Xmx1024m -Xms1024m"
******
# For deployments that use TLS-based authentication to an internal Elasticsearch service in the SRS cluster,
# uncomment and appropriately add below lines under esConfig.elasticsearch.yml.
# xpack.security.http.ssl.enabled: true
# xpack.security.http.ssl.truststore.path: /usr/share/elasticsearch/config/certs/elastic-certificates.p12
# xpack.security.http.ssl.keystore.path: /usr/share/elasticsearch/config/certs/elastic-certificates.p12
esConfig:
elasticsearch.yml: |
xpack.security.enabled: false
xpack.security.transport.ssl.enabled: false
# xpack.security.transport.ssl.verification_mode: certificate
# xpack.security.transport.ssl.keystore.path: /usr/share/elasticsearch/config/certs/elastic-certificates.p12
# xpack.security.transport.ssl.truststore.path: /usr/share/elasticsearch/config/certs/elastic-certificates.p12
# Uncomment the below lines if you want to deploy/upgrade Elasticsearch server version >= 8.x by adding below lines under esConfig.elasticsearch.yml.
# action.destructive_requires_name: false
# ingest.geoip.downloader.enabled: false
# Use this section to include additional, supported environmental variables for Elasticsearch basic authentication.
# The parameter values can be read from a specified secrets file.
#extraEnvs:
# - name: ELASTIC_PASSWORD
# valueFrom:
# secretKeyRef:
# name: srs-elastic-credentials
# key: password
# - name: ELASTIC_USERNAME
# valueFrom:
# secretKeyRef:
# name: srs-elastic-credentials
# key: username
resources:
requests:
cpu: "1000m"
memory: "2Gi"
limits:
cpu: "2000m"
memory: "3Gi"
#volumeClaimTemplate:
# accessModes: ["ReadWriteOnce"]
# resources:
# requests:
# configure volume size of the elasticsearch nodes based on search data storage requirements. The default storage size from elasticsearch is 30Gi.
# storage: 30Gi
******
Prepare the es-pv.yaml manifest
---
# pv
apiVersion: v1
kind: PersistentVolume
metadata:
name: pega23-install-es-pv-1
spec:
capacity:
storage: 1024Gi
accessModes:
- ReadWriteOnce
hostPath:
path: /vm-server/dev/pega23/data/srs/es/pv/pv1
Create the Elasticsearch PV
root@k8s-master:/vm-server/dev/pega23/helm/srs# kubectl apply -f es-pv.yaml
persistentvolume/pega23-install-es-pv-1 created
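After the SRS chart is installed below, the Elasticsearch volume claim created by its statefulset should bind to this volume; the binding can be checked with:
kubectl get pv pega23-install-es-pv-1
kubectl get pvc -n pega23-install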
Prepare the es-basic-auth-secret.yaml manifest
apiVersion: v1
kind: Secret
metadata:
name: elastic-certificates
namespace: pega23-install
type: kubernetes.io/basic-auth
stringData:
username: es-user # required field for the kubernetes.io/basic-auth type
password: es-password # required field for the kubernetes.io/basic-auth type
Create the Elasticsearch basic-auth secret
root@k8s-master:/vm-server/dev/pega23/helm/srs# kubectl apply -f es-basic-auth-secret.yaml
secret/elastic-certificates created
Deploy SRS
root@k8s-master:/vm-server/dev/pega23/helm/srs# helm install pega23-install-srs pega/backingservices -n pega23-install --values backingservices.yaml
NAME: pega23-install-srs
LAST DEPLOYED: Mon Sep 23 21:41:09 2024
NAMESPACE: pega23-install
STATUS: deployed
REVISION: 1
Check the deployment result
root@k8s-master:/vm-server# kubectl get pod -n pega23-install
NAME READY STATUS RESTARTS AGE
elasticsearch-master-0 1/1 Running 0 20m
pega-db-install-r4hrw 0/1 Completed 0 2d9h
pega23-install-db-5f846c6784-2xfwr 1/1 Running 0 2d10h
pega23-install-kafka-controller-0 1/1 Running 5 (6h5m ago) 2d4h
pega23-install-srs-568c47f469-4sp6p 1/1 Running 0 27m
Check the SRS startup log
root@k8s-master:/vm-server# kubectl logs -f pega23-install-srs-568c47f469-4sp6p --tail 10 -n pega23-install
Defaulted container "srs-service" out of: srs-service, wait-for-internal-es-cluster (init)
Micronaut (v3.9.4)
{"timestamp":"2024-09-23T13:55:20.350","message":"Established active environments: [k8s, cloud]","logger":"io.micronaut.context.DefaultApplicationContext$RuntimeConfiguredEnvironment","thread":"main","level":"INFO"}
{"timestamp":"2024-09-23T13:55:22.245","message":"Setting log level 'INFO' for logger: 'com.pega.fnx.search'","logger":"io.micronaut.logging.PropertiesLoggingLevelsConfigurer","thread":"main","level":"INFO"}
{"timestamp":"2024-09-23T13:55:23.167","message":"cache2k starting. version=1.6.0.Final","logger":"org.cache2k.core.Cache2kCoreProviderImpl","thread":"main","level":"INFO"}
{"timestamp":"2024-09-23T13:55:26.144","message":"Establishing connection to Elasticsearch cluster at 'http://elasticsearch-master.pega23-install.svc:9200'","logger":"com.pega.fnx.search.storage.es.ElasticsearchConnector","thread":"main","level":"INFO"}
{"timestamp":"2024-09-23T13:55:26.464","message":"Elasticsearch cluster 'elasticsearch' contacted successfully, server version: '7.17.9' (compatible), node number: '1', cluster health: 'GREEN'","logger":"com.pega.fnx.search.storage.es.ElasticsearchConnector","thread":"main","level":"INFO"}
{"timestamp":"2024-09-23T13:55:41.370","message":"Updated cluster settings","logger":"com.pega.fnx.search.startup.ESBootstrapTask","thread":"main","level":"INFO"}
{"timestamp":"2024-09-23T13:55:41.371","message":"Bootstrapped Elasticsearch","logger":"com.pega.fnx.search.startup.ESBootstrapTask","thread":"main","level":"INFO"}
{"timestamp":"2024-09-23T13:55:47.628","message":"Startup completed in 31089ms. Server Running: http://0.0.0.0:8080","logger":"io.micronaut.runtime.Micronaut","thread":"main","level":"INFO"}
Constellation App Static Content
Prepare the constellation-app.yaml values file
---
enabled: true
deployment:
name: "pega23-install-constellation"
# Cloud provider details. Accepted values are aws, gke and k8s
provider: "k8s"
# For aws cloud provider enter your acm certificate ARN here.
# awsCertificateArn: arn:aws:acm:us-west-2:xxxxx:certificate/xxxxxxx
# Customer assets must be stored on a persistent storage volume. Create a volume claim and provide the name.
customerAssetVolumeClaimName: pega23-install-constellation-app-pvc
# Docker repos and tag for image
docker:
# If using a custom Docker registry, supply the credentials here to pull Docker images.
registry:
url: pega-docker.downloads.pega.com
username: pega_provide_UserID
password: pega_provide_APIKey
# Provide pre-defined image pull secret names if desired
imagePullSecretNames: []
# Docker image information for the Pega docker image, containing the application server.
constellation:
image: constellation-appstatic-service/docker-image:1.7.0
imagePullPolicy: IfNotPresent
logLevel: info
urlPath: /c11n
# set memoryRequest & memoryLimit to Limit memory usage for container https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#meaning-of-memory
# resources:
# requests:
# memory: # ex: 128Mi for MB or 2Gi for GB
# limits:
# memory: # ex: 256Mi for MB or 4Gi for GB
securityContext:
seccompProfile:
# set seccompProfile to RuntimeDefault to not disable default seccomp profile https://kubernetes.io/docs/tutorials/security/seccomp/
type: Unconfined # RuntimeDefault
# DO NOT CHANGE readOnlyRootFilesystem VALUE to true, C11N SERVICE WON'T WORK AS EXPECTED
readOnlyRootFilesystem: false
# set allowPrivilegeEscalation to false to Restrict container from acquiring additional privileges https://kubernetes.io/docs/tasks/configure-pod-container/security-context/
allowPrivilegeEscalation: true # false
# Service
service:
port: 3000
targetPort: 3000
serviceType: NodePort
# Ingress
ingress:
enabled: false
domain: YOUR_CUSTOM_DOMAIN_NAME_HERE
ingressClassName:
# Additional annotations for the ingress can be specified here
annotations:
tls:
enabled: false
secretName:
# Deployment Spec
replicas: 1
livenessProbe:
initialDelaySeconds: 5
timeoutSeconds: 5
periodSeconds: 30
successThreshold: 1
failureThreshold: 3
readinessProbe:
initialDelaySeconds: 5
timeoutSeconds: 5
periodSeconds: 30
successThreshold: 1
failureThreshold: 3
Prepare the PV and PVC manifest pega23-install-constellation-app-pvc.yaml
---
# pv
apiVersion: v1
kind: PersistentVolume
metadata:
name: pega23-install-constellation-app-pv
spec:
capacity:
storage: 1024Gi
accessModes:
- ReadWriteOnce
hostPath:
path: /vm-server/dev/pega23/data/constellation/app/cust_asset
---
# pvc
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: pega23-install-constellation-app-pvc
namespace: pega23-install
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1024Gi
Create the PV and PVC
root@k8s-master:/vm-server/dev/pega23/helm/constellation-app# kubectl apply -f pega23-install-constellation-app-pvc.yaml
persistentvolume/pega23-install-constellation-app-pv created
persistentvolumeclaim/pega23-install-constellation-app-pvc created
Deploy the Constellation app static content service
root@k8s-master:/vm-server/dev/pega23/helm/constellation-app# helm install /vm-server/dev/pega23/helm/pega-helm-charts/charts/backingservices/charts/constellation --values constellation-app.yaml -n pega23-install --generate-name
NAME: constellation-1727156358
LAST DEPLOYED: Tue Sep 24 13:39:18 2024
NAMESPACE: pega23-install
STATUS: deployed
REVISION: 1
TEST SUITE: None
Check the deployment result
root@k8s-master:/vm-server/dev/pega23/helm/constellation-messaging# kubectl logs -f --tail 10 pega23-install-constellation-6855b95595-bvp7q -n pega23-install
GET 200 /c11n/buildInfo.json 3ms
GET 200 /c11n/buildInfo.json 3ms
INFO: 1727157351922 GET 10.244.1.115:3000 /c11n/buildInfo.json undefined
INFO: 1727157351923 GET 10.244.1.115:3000 /c11n/buildInfo.json undefined
GET 200 /c11n/buildInfo.json 5ms
GET 200 /c11n/buildInfo.json 6ms
INFO: 1727157381923 GET 10.244.1.115:3000 /c11n/buildInfo.json undefined
INFO: 1727157381924 GET 10.244.1.115:3000 /c11n/buildInfo.json undefined
GET 200 /c11n/buildInfo.json 4ms
GET 200 /c11n/buildInfo.json 5ms
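A quick external check of the static-content service through its NodePort (a sketch; 30517 is the NodePort shown in the service listing further below, and the node IP matches the one used for the Pega web URL at the end of these notes):
curl http://192.168.122.109:30517/c11n/buildInfo.json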
Constellation Messaging
Prepare the constellation-messaging.yaml values file
---
enabled: true
deployment:
name: "pega23-install-constellation-messaging"
# Cloud provider details
provider: "k8s"
# Docker repos and tag for image
docker:
# If using a custom Docker registry, supply the credentials here to pull Docker images.
registry:
url: pega-docker.downloads.pega.com
username: pega_provide_UserID
password: pega_provide_APIKey
# To avoid exposing Docker credentials, create a separate Docker config secret.
# Specify secret names as an array of comma-separated strings. For example: ["secret1", "secret2"]
imagePullSecretNames: []
# Docker image information for the Pega docker image, containing the application server.
messaging:
image: constellation-messaging/docker-image:5.4.0
imagePullPolicy: IfNotPresent
urlPath: /c11n-messaging
# set memoryRequest & memoryLimit to Limit memory usage for container https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#meaning-of-memory
# resources:
# requests:
# memory: # ex: 128Mi for MB or 2Gi for GB
# limits:
# memory: # ex: 256Mi for MB or 4Gi for GB
securityContext:
seccompProfile:
# set seccompProfile to RuntimeDefault to not disable default seccomp profile https://kubernetes.io/docs/tutorials/security/seccomp/
type: Unconfined # RuntimeDefault
# DO NOT CHANGE readOnlyRootFilesystem VALUE to true, C11N MESSAGING WON'T WORK AS EXPECTED
readOnlyRootFilesystem: false
# set allowPrivilegeEscalation to false to Restrict container from acquiring additional privileges https://kubernetes.io/docs/tasks/configure-pod-container/security-context/
allowPrivilegeEscalation: true # false
# Service
service:
port: 3000
targetPort: 3000
serviceType: NodePort
# An ingress will be provisioned if a hostname is defined, or omitted if the hostname is empty.
# ingressClassName and annotations are optional and will be included if defined.
# Due to the diverse requirements for ingresses and TLS configuration, it may be necessary to define the ingress separately from this chart.
ingress:
enabled: false
domain: YOUR_CUSTOM_DOMAIN_NAME_HERE
ingressClassName:
# Additional annotations for the ingress can be specified here
annotations:
tls:
enabled: false
secretName:
# Deployment Spec
replicas: 1
livenessProbe:
initialDelaySeconds: 5
timeoutSeconds: 5
periodSeconds: 30
successThreshold: 1
failureThreshold: 3
readinessProbe:
initialDelaySeconds: 5
timeoutSeconds: 5
periodSeconds: 30
successThreshold: 1
failureThreshold: 3
Deploy Constellation messaging
root@k8s-master:/vm-server/dev/pega23/helm/constellation-messaging# helm install /vm-server/dev/pega23/helm/pega-helm-charts/charts/backingservices/charts/constellation-messaging/ --values constellation-messaging.yaml --generate-name -n pega23-install
NAME: constellation-messaging-1727157357
LAST DEPLOYED: Tue Sep 24 13:55:57 2024
NAMESPACE: pega23-install
STATUS: deployed
REVISION: 1
TEST SUITE: None
Check the deployment result
root@k8s-master:/home/myserver# kubectl logs -f --tail 10 pega23-install-constellation-messaging-6b78d8dcc8-pg5mh -n pega23-install
INFO: 1727158956049 broadcastAlive(5000)
INFO: 1727158956050 broadcastAlive() ... done
INFO: 1727158961050 broadcastAlive(5000)
INFO: 1727158961050 broadcastAlive() ... done
INFO: 1727158966051 broadcastAlive(5000)
INFO: 1727158966051 broadcastAlive() ... done
INFO: 1727158971051 broadcastAlive(5000)
INFO: 1727158971051 broadcastAlive() ... done
INFO: 1727158976051 broadcastAlive(5000)
INFO: 1727158976052 broadcastAlive() ... done
GET 200 /c11n-messaging/ping 1ms
[22:0x7f5973d2a4a0] 1499289 ms: Scavenge 14.6 (15.4) -> 14.0 (15.4) MB, 1.47 / 0.01 ms (average mu = 0.995, current mu = 0.970) task;
GET 200 /c11n-messaging/ping
INFO: 1727158981052 broadcastAlive(5000)
INFO: 1727158981052 broadcastAlive() ... done
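The messaging service can be checked the same way through its NodePort (a sketch; 30980 is the NodePort shown in the service listing below):
curl http://192.168.122.109:30980/c11n-messaging/ping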
Deploy the Pega nodes
List the related services; their names and ports are used in the values file below.
root@k8s-master:/vm-server/dev/pega23/helm/pega# kubectl get svc -n pega23-install
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
elasticsearch-master ClusterIP 10.106.73.43 <none> 9200/TCP,9300/TCP 16h
elasticsearch-master-headless ClusterIP None <none> 9200/TCP,9300/TCP 16h
pega23-install-constellation NodePort 10.97.226.125 <none> 3000:30517/TCP 22m
pega23-install-constellation-messaging NodePort 10.110.151.123 <none> 3000:30980/TCP 6m12s
pega23-install-kafka ClusterIP 10.111.238.109 <none> 9092/TCP 3d
pega23-install-kafka-controller-headless ClusterIP None <none> 9094/TCP,9092/TCP,9093/TCP 3d
pega23-install-srs ClusterIP 10.103.14.212 <none> 8080/TCP,80/TCP 16h
pega23-install-svc NodePort 10.98.142.126 <none> 5432:30029/TCP 3d2h
Edit the pega.yaml values file
---
global:
# This values.yaml file is an example. For more information about
# each configuration option, see the project readme.
# Enter your Kubernetes provider.
provider: "k8s"
# Enter a name for the deployment if using multi-tenant services such as the Search and Reporting Service.
customerDeploymentId:
deployment:
# The name specified will be used to prefix all of the Pega pods (replacing "pega" with something like "app1-dev").
name: "pega"
# Deploy Pega nodes
actions:
execute: "deploy"
# Add custom certificates to be mounted to container
# to support custom certificates as plain text (less secure), pass them directly using the certificates parameter;
# to support multiple custom certificates as external secrets, specify each of your external secrets
# as an array of comma-separated strings using the certificatesSecrets parameter.
certificatesSecrets: []
certificates: {}
# Add krb5.conf file content here.
# Feature is used for Decisioning data flows to fetch data from Kafka or HBase streams
kerberos: {}
# If a storage class to be passed to the VolumeClaimTemplates in search and stream pods, it can be specified here:
storageClassName: ""
# Provide JDBC connection information to the Pega relational database
# If you are installing or upgrading on IBM DB2, update the udb.conf file in the /charts/pega/charts/installer/config/udb directory with any additional connection properties.
jdbc:
# url Valid values are:
#
# Oracle jdbc:oracle:thin:@//localhost:1521/dbName
# IBM DB/2 z / OS jdbc:db2://localhost:50000/dbName
# IBM DB/2 jdbc:db2://localhost:50000/dbName:fullyMaterializeLobData=true;fullyMaterializeInputStreams=true;
# progressiveStreaming=2;useJDBC4ColumnNameAndLabelSemantics=2;
# SQL Server jdbc:sqlserver://localhost:1433;databaseName=dbName;selectMethod=cursor;sendStringParametersAsUnicode=false
# PostgreSQL jdbc:postgresql://localhost:5432/dbName
url: "jdbc:postgresql://pega23-install-svc:5432/postgres"
# driverClass -- jdbc class. Valid values are:
#
# Oracle oracle.jdbc.OracleDriver
# IBM DB/2 com.ibm.db2.jcc.DB2Driver
# SQL Server com.microsoft.sqlserver.jdbc.SQLServerDriver
# PostgreSQL org.postgresql.Driver
driverClass: "org.postgresql.Driver"
# pega.database.type Valid values are: mssql, oracledate, udb, db2zos, postgres
dbType: "postgres"
# For databases that use multiple JDBC driver files (such as DB2), specify comma separated values for 'driverUri'
driverUri: "https://jdbc.postgresql.org/download/postgresql-42.7.3.jar"
username: "postgres"
password: "postgres"
# To avoid exposing username & password, leave the jdbc.password & jdbc.username parameters empty (no quotes),
# configure JDBC username & password parameters in the External Secrets Manager, and enter the external secret for the credentials
# make sure the keys in the secret should be DB_USERNAME and DB_PASSWORD respectively
external_secret_name: ""
# CUSTOM CONNECTION PROPERTIES
# Use the connectionProperties parameter to pass connection settings to your deployment
# by adding a list of semi-colon-delimited required connection setting. The list string must end with ";".
# For example, you can set a custom authentication using Azure Managed Identity and avoid using a password.
# To pass an Authentication method and a managed identity, MSI Client ID,
# set: connectionProperties: "Authentication=ActiveDirectoryMSI;msiClientId=<your Azure Managed Identity>;"
connectionProperties: ""
rulesSchema: "pega_rules"
dataSchema: "pega_data"
customerDataSchema: "cust_data"
customArtifactory:
# If you use a secured custom artifactory to manager your JDBC driver,
# provide the authentication details below by filling in the appropriate authentication section,
# either basic or apiKey.
authentication:
# Provide the basic authentication credentials or the API key authentication details to satisfy your custom artifactory authentication mechanism.
basic:
username: ""
password: ""
apiKey:
headerName: ""
value: ""
# To avoid exposing basic.username,basic.password,apiKey.headerName,apiKey.value parameters, configure the
# basic.username,basic.password,apiKey.headerName,apiKey.value parameters in External Secrets Manager, and enter the external secret for the credentials
# make sure the keys in the secret should be CUSTOM_ARTIFACTORY_USERNAME , CUSTOM_ARTIFACTORY_PASSWORD , CUSTOM_ARTIFACTORY_APIKEY_HEADER , CUSTOM_ARTIFACTORY_APIKEY
external_secret_name: ""
# Leave customArtifactory.enableSSLVerification enabled to ensure secure access to your custom artifactory;
# when customArtifactory.enableSSLVerification is false, SSL verification is skipped and establishes an insecure connection.
enableSSLVerification: true
# Provide a required domain certificate for your custom artifactory; if none is required, leave this field blank.
certificate:
docker:
# If using a custom Docker registry, supply the credentials here to pull Docker images.
registry:
url: "pega-docker.downloads.pega.com"
username: "pega_provide_UserID"
password: "pega_provide_APIKey"
# To avoid exposing Docker registry details, create secrets to manage your Docker registry credentials.
# Specify secret names as an array of comma-separated strings in double quotation marks using the imagePullSecretNames parameter. For example: ["secret1", "secret2"]
imagePullSecretNames: []
# Docker image information for the Pega docker image, containing the application server.
pega:
image: "platform/pega:8.23.1"
utilityImages:
busybox:
image: busybox:1.31.0
imagePullPolicy: IfNotPresent
k8s_wait_for:
image: pegasystems/k8s-wait-for
imagePullPolicy: "IfNotPresent"
# waitTimeSeconds: 2
# maxRetries: 1
# Upgrade specific properties
upgrade:
# Configure only for aks/pks
# Run "kubectl cluster-info" command to get the service host and https service port of kubernetes api server.
# Example - Kubernetes master is running at https://<service_host>:<https_service_port>
kube-apiserver:
serviceHost: "API_SERVICE_ADDRESS"
httpsServicePort: "SERVICE_PORT_HTTPS"
# Set the `compressedConfigurations` parameter to `true` when the configuration files under charts/pega/config/deploy are in compressed format.
# For more information, see the “Pega compressed configuration files” section in the Pega Helm chart documentation.
compressedConfigurations: false
pegaDiagnosticUser: ""
pegaDiagnosticPassword: ""
# Specify the Pega tiers to deploy
tier:
- name: "web"
# Create an interactive tier for web users. This tier uses
# the WebUser node type and will be exposed via a service to
# the load balancer.
nodeType: "WebUser"
# Pega requestor specific properties
requestor:
# Inactivity time after which requestor is passivated
passivationTimeSec: 900
service:
# For help configuring the service block, see the Helm chart documentation
# https://github.com/pegasystems/pega-helm-charts/blob/master/charts/pega/README.md#service
httpEnabled: true
port: 80
targetPort: 8080
serviceType: NodePort
# Use this parameter to deploy a specific type of service using the serviceType parameter and specify the type of service in double quotes.
# This is an optional value and should be used based on the use case.
# This should be set only in case of eks, gke and other cloud providers. This option should not be used for k8s and minikube.
# For example if you want to deploy a service of type LoadBalancer, uncomment the following line and specify serviceType: "LoadBalancer"
# serviceType: ""
# Specify the CIDR ranges to restrict the service access to the given CIDR range.
# Each new CIDR block should be added in a separate line.
# Should be used only when serviceType is set to LoadBalancer.
# Uncomment the following lines and replace the CIDR blocks with your configuration requirements.
# loadBalancerSourceRanges:
# - "123.123.123.0/24"
# - "128.128.128.64/32"
# Define custom ports for service here. If you want to use the custom ports for other services, please use the same configuration for those services.
# customServicePorts:
# - name: <name>
# port: <port>
# targetPort: <port>
# To configure TLS between the ingress/load balancer and the backend, set the following:
tls:
enabled: false
# To avoid entering the certificate values in plain text, configure the keystore, keystorepassword, cacertificate parameter
# values in the External Secrets Manager, and enter the external secret name below
# make sure the keys in the secret should be TOMCAT_KEYSTORE_CONTENT, TOMCAT_KEYSTORE_PASSWORD and ca.crt respectively
# In case of providing multiple secrets, please provide them in comma separated string format.
external_secret_names: []
# If using tools like cert-manager to generate certificates, please provide the keystore name that is autogenerated by the external tool.
# Default is TOMCAT_KEYSTORE_CONTENT
external_keystore_name: ""
# If using external secrets operator and not using standard Password Key, please provide the key for keystore password.
# Default is TOMCAT_KEYSTORE_PASSWORD
external_keystore_password: ""
keystore:
keystorepassword:
port: 443
targetPort: 8443
# set the value of CA certificate here in case of baremetal/openshift deployments - CA certificate should be in base64 format
# pass the certificateChainFile file if you are using certificateFile and certificateKeyFile
cacertificate:
# provide the SSL certificate and private key as a PEM format
certificateFile:
certificateKeyFile:
# if you will deploy traefik addon chart and enable traefik, set enabled=true; otherwise leave the default setting.
traefik:
enabled: false
# the SAN of the certificate present inside the container
serverName: ""
# set insecureSkipVerify=true, if the certificate verification has to be skipped
insecureSkipVerify: false
ingress:
enabled: false
# For help configuring the ingress block including TLS, see the Helm chart documentation
# https://github.com/pegasystems/pega-helm-charts/blob/master/charts/pega/README.md#ingress
# Enter the domain name to access web nodes via a load balancer.
# e.g. web.mypega.example.com
domain: "YOUR_WEB_NODE_DOMAIN"
# Configure custom path for given host along with pathType. Default pathType is ImplementationSpecific.
# path:
# pathType:
tls:
# Enable TLS encryption
enabled: true
# secretName:
# useManagedCertificate: false
# ssl_annotation:
# For Openshift, Pega deployments enable TLS to secure the connection
# from the browser to the router by creating the route using reencrypt termination policy.
# Add your certificate, the corresponding key using the appropriate .pem or .crt format and
# specify a CA certificate to validate the endpoint certificate.
certificate:
key:
cacertificate:
replicas: 1
javaOpts: ""
deploymentStrategy:
rollingUpdate:
maxSurge: 1
maxUnavailable: 0
type: RollingUpdate
livenessProbe:
port: 8081
# Optionally override the default or add additional resource specifications.
# initialHeap: "8192m"
# maxHeap: "8192m"
resources:
requests:
memory: "12Gi"
cpu: 3
limits:
memory: "12Gi"
cpu: 4
# To configure an alternative user for custom image, set value for runAsUser.
# To configure an alternative group for volume mounts, set value for fsGroup
# See, https://github.com/pegasystems/pega-helm-charts/blob/master/charts/pega/README.md#security-context
# securityContext:
# runAsUser: 9001
# fsGroup: 0
hpa:
enabled: true
# To configure behavior specifications for hpa, set the required scaleUp & scaleDown values.
# See, https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#stabilization-window
# behavior:
# scaleDown:
# stabilizationWindowSeconds: 600
# key/value pairs that are attached to the pods (https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/)
# podLabels:
# Topology spread constraints to control the placement of your pods across nodes, zones, regions, or other user-defined topology domains.
# For more information please refer https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/
# If you want to apply topology spread constraints in other tiers, please use the same configuration as described here.
# topologySpreadConstraints:
# - maxSkew: <integer>
# topologyKey: <string>
# whenUnsatisfiable: <string>
# labelSelector: <object>
# Tolerations allow the scheduler to schedule pods with matching taints.
# For more information please refer https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration
# If you want to apply tolerations to other tiers, please use the same configuration as described here.
# tolerations:
# - key: "key1"
# operator: "Equal"
# value: "value1"
# effect: "NoSchedule"
# Set enabled to true to include a Pod Disruption Budget for this tier.
# To enable this budget, specify either a pdb.minAvailable or pdb.maxUnavailable
# value and comment out the other parameter.
pdb:
enabled: false
minAvailable: 1
# maxUnavailable: "50%"
- name: "batch"
# Create a background tier for batch processing. This tier uses
# a collection of background node types and will not be exposed to
# the load balancer.
nodeType: "BackgroundProcessing,Search,Batch,RealTime,Custom1,Custom2,Custom3,Custom4,Custom5,BIX"
replicas: 1
javaOpts: ""
deploymentStrategy:
rollingUpdate:
maxSurge: 1
maxUnavailable: 0
type: RollingUpdate
livenessProbe:
port: 8081
# To configure an alternative user for your custom image, set value for runAsUser
# To configure an alternative group for volume mounts, set value for fsGroup
# See, https://github.com/pegasystems/pega-helm-charts/blob/master/charts/pega/README.md#security-context
# securityContext:
# runAsUser: 9001
# fsGroup: 0
hpa:
enabled: true
# Set enabled to true to include a Pod Disruption Budget for this tier.
# To enable this budget, specify either a pdb.minAvailable or pdb.maxUnavailable
# value and comment out the other parameter.
pdb:
enabled: false
minAvailable: 1
# maxUnavailable: "50%"
resources:
requests:
memory: "12Gi"
cpu: 3
limits:
memory: "12Gi"
cpu: 4
#- name: "stream"
# Create a stream tier for queue processing. This tier deploys
# as a stateful set to ensure durability of queued data. It may
# be optionally exposed to the load balancer.
# Note: Stream tier is deprecated, please enable externalized Kafka service configuration under External
******
#resources:
# requests:
# memory: "12Gi"
# cpu: 3
#limits:
# memory: "12Gi"
#cpu: 4
# External services
# Cassandra automatic deployment settings.
cassandra:
enabled: false
persistence:
enabled: true
resources:
requests:
memory: "4Gi"
cpu: 2
limits:
memory: "8Gi"
cpu: 4
# DDS (external Cassandra) connection settings.
# These settings should only be modified if you are using a custom Cassandra deployment.
# To deploy Pega without Cassandra, comment out or delete the following dds section and set
# the cassandra.enabled property above to false.
#dds:
# A comma separated list of hosts in the Cassandra cluster.
# externalNodes: ""
# TCP Port to connect to cassandra.
# port: "9042"
******
# default, after you enable this property, CSV files will be written to the Pega Platform work directory.
#csvMetricsEnabled: false
# Enable reporting of DDS SDK metrics to your Pega Platform logs.
#logMetricsEnabled: false
# Elasticsearch deployment settings.
# Note: This Elasticsearch deployment is used for Pega search, and is not the same Elasticsearch deployment used by the EFK stack.
# These search nodes will be deployed regardless of the Elasticsearch configuration above.
pegasearch:
image: ""
memLimit: "3Gi"
replicas: 1
# Set externalSearchService to true to use the Search and Reporting Service.
# Refer to the README document to configure SRS as a search functionality provider under this section.
externalSearchService: true
externalURL: http://pega23-install-srs
srsAuth:
enabled: false
url: ""
clientId: ""
authType: ""
privateKey: ""
external_secret_name: ""
******
# Hazelcast settings (applicable from Pega 8.6)
hazelcast:
# Hazelcast docker image for platform version 8.6 through 8.7.x
image: "platform/clustering-service:1.3.9"
# Hazelcast docker image for platform version 8.8 and later
clusteringServiceImage: "platform/clustering-service:1.3.9"
# Setting below to true will deploy Pega Platform using a client-server Hazelcast model for version 8.6 through 8.7.x.
# Note: Make sure to set this value as "false" in case of Pega Platform version before "8.6". If not set this will fail the installation.
enabled: true
# Setting below to true will deploy Pega Platform using a client-server Hazelcast model for version 8.8 and later.
clusteringServiceEnabled: true
# Setting related to Hazelcast migration.
migration:
# Set to `true` to initiate the migration job.
initiateMigration: false
# Reference the `platform/clustering-service-kubectl` Docker image to create the migration job.
migrationJobImage: "YOUR_MIGRATION_JOB_IMAGE:TAG"
# Set to `true` when migrating from embedded Hazelcast.
embeddedToCSMigration: false
# No. of initial members to join
replicas: 1
# UserName in the client-server Hazelcast model authentication. This setting is exposed and not secure.
username: "hazelcast_user"
# Password in the client-server Hazelcast model authentication. This setting is exposed and not secure.
password: "hazelcast_password"
# To avoid exposing username and password parameters, leave these parameters empty and configure
# these cluster settings using an External Secrets Manager. Use the following keys in the secret:
# HZ_CS_AUTH_USERNAME for username and HZ_CS_AUTH_PASSWORD for password.
# Enter the external secret for these credentials below.
external_secret_name: ""
# Stream (externalized Kafka service) settings.
stream:
# Beginning with Pega Platform '23, enabled by default; when disabled, your deployment does not use a "Kafka stream service" configuration.
enabled: true
# Provide externalized Kafka service broker urls.
bootstrapServer: "pega23-install-kafka-controller-0.pega23-install-kafka-controller-headless.pega23-install.svc.cluster.local:9092"
# Provide Security Protocol used to communicate with kafka brokers. Supported values are: PLAINTEXT, SSL, SASL_PLAINTEXT, SASL_SSL.
securityProtocol: SASL_PLAINTEXT
# If required, provide trustStore certificate file name
# When using a trustStore certificate, you must also include a Kubernetes secret name, that contains the trustStore certificate,
# in the global.certificatesSecrets parameter.
# Pega deployments only support trustStores using the Java Key Store (.jks) format.
trustStore: ""
# If required provide trustStorePassword value in plain text.
trustStorePassword: ""
# If required, provide keyStore certificate file name
# When using a keyStore certificate, you must also include a Kubernetes secret name, that contains the keyStore certificate,
# in the global.certificatesSecrets parameter.
# Pega deployments only support keyStores using the Java Key Store (.jks) format.
keyStore: ""
# If required, provide keyStore value in plain text.
keyStorePassword: ""
# If required, provide jaasConfig value in plain text.
jaasConfig: "org.apache.kafka.common.security.scram.ScramLoginModule required username='user1' password='WuS7z3Vp30';"
# If required, provide a SASL mechanism**. Supported values are: PLAIN, SCRAM-SHA-256, SCRAM-SHA-512.
saslMechanism: SCRAM-SHA-256
# By default, topics originating from Pega Platform have the pega- prefix,
# so that it is easy to distinguish them from topics created by other applications.
# Pega supports customizing the name pattern for your Externalized Kafka configuration for each deployment.
streamNamePattern: "pega-{stream.name}"
# Your replicationFactor value cannot be more than the number of Kafka brokers. Pega recommended value is 3.
replicationFactor: "1"
# To avoid exposing trustStorePassword, keyStorePassword, and jaasConfig parameters, leave the values empty and
# configure them using an External Secrets Manager, making sure you configure the keys in the secret in the order:
# STREAM_TRUSTSTORE_PASSWORD, STREAM_KEYSTORE_PASSWORD and STREAM_JAAS_CONFIG.
# Enter the external secret name below.
external_secret_name: ""
- The WebUser tier service is exposed externally through a NodePort.
- If no ingress is needed, set ingress enabled to false.
- The externalized stream configuration at the end of the file is used, so the embedded Pega stream tier configuration is commented out (it can also simply be deleted).
- If Cassandra is not needed, set cassandra enabled to false and comment out or delete the entire dds section.
- Pega 8.23 requires the external SRS configuration.
Deploy the Pega tier services
root@k8s-master:/vm-server/dev/pega23/helm/pega# helm upgrade mypega pega/pega -n pega23-install --values pega.yaml
Release "mypega" has been upgraded. Happy Helming!
NAME: mypega
LAST DEPLOYED: Tue Sep 24 14:26:46 2024
NAMESPACE: pega23-install
STATUS: deployed
REVISION: 2
TEST SUITE: None
If this is the first deployment, use the helm install command. Because helm install was already used once when installing the Pega database, this is the second deployment, so helm upgrade is used to update the release.
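To review the release after the upgrade (a minimal sketch):
helm history mypega -n pega23-install
helm status mypega -n pega23-install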
Check the deployment status
root@k8s-master:/vm-server/dev/pega23/helm/pega# kubectl get pod,svc -n pega23-install
NAME READY STATUS RESTARTS AGE
pod/clusteringservice-0 1/1 Running 0 67s
pod/elasticsearch-master-0 1/1 Running 0 17h
pod/pega-batch-5cbc5b86cc-gz8lk 0/1 Running 0 67s
pod/pega-hazelcast-0 1/1 Running 0 67s
pod/pega-web-574995449f-4lqtw 0/1 Running 0 67s
pod/pega23-install-constellation-6855b95595-bvp7q 1/1 Running 0 124m
pod/pega23-install-constellation-messaging-6b78d8dcc8-pg5mh 1/1 Running 0 107m
pod/pega23-install-db-5f846c6784-2xfwr 1/1 Running 0 3d4h
pod/pega23-install-kafka-controller-0 1/1 Running 7 (4h31m ago) 2d21h
pod/pega23-install-srs-568c47f469-4sp6p 1/1 Running 0 18h
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/clusteringservice-service ClusterIP None <none> 5701/TCP 68s
service/elasticsearch-master ClusterIP 10.106.73.43 <none> 9200/TCP,9300/TCP 18h
service/elasticsearch-master-headless ClusterIP None <none> 9200/TCP,9300/TCP 18h
service/pega-hazelcast-service ClusterIP None <none> 5701/TCP 68s
service/pega-web NodePort 10.97.177.117 <none> 80:32614/TCP 68s
service/pega23-install-constellation NodePort 10.97.226.125 <none> 3000:30517/TCP 124m
service/pega23-install-constellation-messaging NodePort 10.110.151.123 <none> 3000:30980/TCP 107m
service/pega23-install-kafka ClusterIP 10.111.238.109 <none> 9092/TCP 3d1h
service/pega23-install-kafka-controller-headless ClusterIP None <none> 9094/TCP,9092/TCP,9093/TCP 3d1h
service/pega23-install-srs ClusterIP 10.103.14.212 <none> 8080/TCP,80/TCP 18h
service/pega23-install-svc NodePort 10.98.142.126 <none> 5432:30029/TCP 3d4h
Open the Pega web URL in a browser, for example: http://192.168.122.109:32614/prweb (the port is the NodePort exposed by service/pega-web)
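The NodePort can also be read directly from the service (a sketch):
kubectl get svc pega-web -n pega23-install -o jsonpath='{.spec.ports[0].nodePort}'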
Log in with the account administrator@pega.com and the password install, then change the password.
At this point, the Pega Infinity 23 deployment is complete.