Henry
Published on 2025-02-21

PEGA - Containerized Deployment on Minikube

Introduction

This article walks through a containerized deployment of PEGA Infinity 23 on Minikube.

Prerequisites

  • A running Minikube cluster and kubectl access to it
  • Helm 3 installed on the host
  • An external PostgreSQL database reachable from the cluster (this walkthrough uses 192.168.49.1:30054)
  • An externalized Kafka service for the stream configuration (this walkthrough uses 192.168.49.1:30053)
  • Pega Docker registry credentials (user ID and API key) for pega-docker.downloads.pega.com

Detailed Steps

Step 1: Clone pega-helm-charts

myserver@pega-minikube-poc:~/pega$ git clone https://github.com/pegasystems/pega-helm-charts.git
Cloning into 'pega-helm-charts'...
remote: Enumerating objects: 7928, done.
remote: Counting objects: 100% (548/548), done.
remote: Compressing objects: 100% (344/344), done.
remote: Total 7928 (delta 412), reused 223 (delta 204), pack-reused 7380 (from 4)
Receiving objects: 100% (7928/7928), 48.48 MiB | 480.00 KiB/s, done.
Resolving deltas: 100% (4972/4972), done.

Step 2: Add the pega Helm repository

myserver@pega-minikube-poc:~/pega$ helm repo add pega https://pegasystems.github.io/pega-helm-charts 
"pega" has been added to your repositories

Check the repository information

myserver@pega-minikube-poc:~/pega$ helm search repo pega
NAME                    CHART VERSION   APP VERSION     DESCRIPTION                                       
pega/pega               3.26.1                          Pega installation on kubernetes                   
pega/addons             3.26.1          1.0             A Helm chart for Kubernetes                       
pega/backingservices    3.26.1                          Helm Chart to provision the latest Search and R...
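
If the chart versions shown here look stale, refresh the local repository index before installing (standard Helm command):

helm repo update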

Step 3: Create a namespace

myserver@pega-minikube-poc:~/pega$ minikube kubectl --  create namespace pega-poc
namespace/pega-poc created
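
Optionally, to avoid typing -n pega-poc on every subsequent command, you can make it the default namespace of the current kubectl context (the rest of this walkthrough still passes -n explicitly):

kubectl config set-context --current --namespace=pega-poc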

Step 4: Get the values-minimal.yaml configuration file

myserver@pega-minikube-poc:~/pega$ cp pega-helm-charts/charts/pega/values-minimal.yaml  ./
myserver@pega-minikube-poc:~/pega$ ls 
pega-helm-charts  values-minimal.yaml
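
After editing this copy in the next step, a plain diff against the original chart file makes it easy to review exactly which values were changed:

diff -u pega-helm-charts/charts/pega/values-minimal.yaml values-minimal.yaml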

Step 5: Edit values-minimal.yaml to install the PEGA database

---
global:
  # This values.yaml file is an example of a minimal Pega
  # deployment configuration.  For more information about
  # configuration options, see the project readme.

  # Enter your Kubernetes provider.
  provider: "k8s"

  # Enter a name for the deployment if using multi-tenant services such as the Search and Reporting Service.
  customerDeploymentId:

  # Deploy Pega nodes
  actions:
    execute: "install"
  # Add custom certificates to be mounted to container
  # to support custom certificates as plain text (less secure), pass them directly using the certificates parameter;
  # to support multiple custom certificates as external secrets, specify each of your external secrets
  # as an array of comma-separated strings using the certificatesSecrets parameter.
  certificatesSecrets: []
  certificates:

  # Add krb5.conf file content here.
  # Feature is used for Decisioning data flows to fetch data from Kafka or HBase streams
  kerberos: {}

  # Set to true to comply with NIST SP 800-53 and NIST SP 800-131.
  highlySecureCryptoModeEnabled: false

  # If a storage class to be passed to the VolumeClaimTemplates in search and stream pods, it can be specified here:
  storageClassName: ""
  # Provide JDBC connection information to the Pega relational database
  #   If you are installing or upgrading on IBM DB2, update the udb.conf file in the /charts/pega/charts/installer/config/udb directory with any additional connection properties.
  jdbc:
    #   url     Valid values are:
    #
    #   Oracle              jdbc:oracle:thin:@//localhost:1521/dbName
    #   IBM DB/2 z / OS         jdbc:db2://localhost:50000/dbName
    #   IBM DB/2            jdbc:db2://localhost:50000/dbName:fullyMaterializeLobData=true;fullyMaterializeInputStreams=true;
    #                       progressiveStreaming=2;useJDBC4ColumnNameAndLabelSemantics=2;
    #   SQL Server          jdbc:sqlserver://localhost:1433;databaseName=dbName;selectMethod=cursor;sendStringParametersAsUnicode=false
    #   PostgreSQL          jdbc:postgresql://localhost:5432/dbName
    url: "jdbc:postgresql://192.168.49.1:30054/postgres"
    #   driverClass     -- jdbc class.  Valid values are:
    #
    #   Oracle              oracle.jdbc.OracleDriver
    #   IBM DB/2            com.ibm.db2.jcc.DB2Driver
    #   SQL Server          com.microsoft.sqlserver.jdbc.SQLServerDriver
    #   PostgreSQL          org.postgresql.Driver
    driverClass: "org.postgresql.Driver"
    #   pega.database.type      Valid values are: mssql, oracledate, udb, db2zos, postgres
    dbType: "postgres"
    #   For databases that use multiple JDBC driver files (such as DB2), specify comma separated values for 'driverUri'
    driverUri: "https://jdbc.postgresql.org/download/postgresql-42.7.3.jar"
    username: "postgres"
    password: "postgres"
    # To avoid exposing username & password, leave the jdbc.password & jdbc.username parameters empty (no quotes),
    # configure JDBC username & password parameters in the External Secrets Manager, and enter the external secret for the credentials
    # make sure the keys in the secret should be DB_USERNAME and DB_PASSWORD respectively
    external_secret_name: ""
    # CUSTOM CONNECTION PROPERTIES
    # Add a list of ; delimited connections properties. The list must end with ;
    # For example: connectionProperties=user=usr;password=pwd;
    connectionProperties: ""
    rulesSchema: "pega_rules"
    dataSchema: "pega_data"
    customerDataSchema: "pega_cust_data"

  ******

  docker:
    # If using a custom Docker registry, supply the credentials here to pull Docker images.
    registry:
      url: "pega-docker.downloads.pega.com"
      username: "pega_provide_UserID"
      password: "pega_provide_APIKey"
    # To avoid exposing Docker registry details, create secrets to manage your Docker registry credentials.
    # Specify secret names as an array of comma-separated strings in double quotation marks using the imagePullSecretNames parameter. For example: ["secret1", "secret2"]
    imagePullSecretNames: []
    # Docker image information for the Pega docker image, containing the application server.
    pega:
      image: "pegasystems/pega"

 ******

# Pega Installer settings
installer:
  image: "pega-docker.downloads.pega.com/platform/installer:8.23.1"
  adminPassword: "install"

******
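
Before running the installer, it is worth confirming that the external PostgreSQL instance referenced in jdbc.url is reachable with the credentials above. A minimal connectivity check, assuming the psql client is installed on the host:

psql "host=192.168.49.1 port=30054 dbname=postgres user=postgres password=postgres" -c "\dn"

\dn lists the existing schemas; make sure pega_rules, pega_data, and pega_cust_data either already exist or can be created by the postgres user before the install job runs.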

Install the PEGA database

myserver@pega-minikube-poc:~/pega$ helm install pega-poc-install  pega/pega -n pega-poc --values ./values-minimal.yaml
NAME: pega-poc-install
LAST DEPLOYED: Wed Feb 19 17:42:53 2025
NAMESPACE: pega-poc
STATUS: deployed
REVISION: 1
TEST SUITE: None

Check the installation result

myserver@pega-minikube-poc:~/pega/install$ kubectl logs -f --tail 10 pega-db-install-ddhvt -n pega-poc
     [copy] Copying 1 file to /opt/pega/kit/scripts/logs
      [zip] Warning: skipping zip archive /opt/pega/kit/scripts/logs/Install Finalization-CollectedLogs_2025-02-20_05-03-53.zip because no files were included.
     [echo] Cleaning up temp directory...
   [delete] Deleting directory /opt/pega/temp/PegaInstallTemp-20-February-2025-05.03.54

Install Finalization:
     [echo] PegaRULES Process Commander database load complete.

BUILD SUCCESSFUL
Total time: 42 minutes 36 seconds
myserver@pega-minikube-poc:~/pega/srs$ kubectl get pod -A
NAMESPACE     NAME                               READY   STATUS      RESTARTS       AGE
kube-system   coredns-76fccbbb6b-48hjs           1/1     Running     1 (174m ago)   3h1m
kube-system   etcd-minikube                      1/1     Running     1 (175m ago)   3h1m
kube-system   kube-apiserver-minikube            1/1     Running     3 (161m ago)   3h1m
kube-system   kube-controller-manager-minikube   1/1     Running     1 (175m ago)   3h1m
kube-system   kube-proxy-vpn5c                   1/1     Running     1 (175m ago)   3h1m
kube-system   kube-scheduler-minikube            1/1     Running     1 (175m ago)   3h1m
kube-system   storage-provisioner                1/1     Running     10 (54m ago)   3h1m
pega-poc      pega-db-install-ddhvt              0/1     Completed   0              60m
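
The install run above took about 42 minutes. Instead of tailing logs, you can also block until the install job completes (a sketch; the job name pega-db-install is inferred from the pod name above and may differ in your release):

kubectl wait --for=condition=complete job/pega-db-install -n pega-poc --timeout=3600s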

Step 6: Configure and start the SRS service

  • Configure Elasticsearch data persistence (local paths are not recommended in real application environments). [reference link]

Configure the StorageClass

myserver@pega-minikube-poc:~/pega/srs$ cat pega-poc-es-storage-class.yaml 
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: pega-poc-es-local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
myserver@pega-minikube-poc:~/pega/srs$ kubectl apply -f pega-poc-es-storage-class.yaml 
storageclass.storage.k8s.io/pega-poc-es-local-storage created

Configure the PersistentVolume

myserver@pega-minikube-poc:~/pega/srs$ cat pega-poc-es-pv.yaml 
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pega-poc-es-pv
spec:
  capacity:
    storage: 30Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: pega-poc-es-local-storage
  local:
    path: /home/myserver/pega/srs/es/pv/pv1
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - minikube

Note: 1. minikube is the node name. 2. Make sure the local path /home/myserver/pega/srs/es/pv/pv1 exists inside the minikube node; if it does not, see Issue 1 below for the workaround.

myserver@pega-minikube-poc:~/pega/srs$ kubectl apply -f pega-poc-es-pv.yaml 
persistentvolume/pega-poc-es-pv created
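
A quick check that both storage objects exist and that the volume is still unbound before SRS is installed:

kubectl get storageclass pega-poc-es-local-storage
kubectl get pv pega-poc-es-pv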
  • Prepare es-basic-auth-secret.yaml to configure the Elasticsearch authentication credentials.
myserver@pega-minikube-poc:~/pega/srs$ cat es-basic-auth-secret.yaml 
apiVersion: v1
kind: Secret
metadata:
  name: elastic-certificates
  namespace: pega-poc
type: kubernetes.io/basic-auth
stringData:
  username: es-user      # required field for the kubernetes.io/basic-auth type
  password: es-password  # required field for the kubernetes.io/basic-auth type
myserver@pega-minikube-poc:~/pega/srs$ kubectl apply -f es-basic-auth-secret.yaml 
secret/elastic-certificates created
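
To confirm the secret was created with the expected keys without printing their values:

kubectl describe secret elastic-certificates -n pega-poc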
  • Get the backingservices.yaml configuration file
myserver@pega-minikube-poc:~/pega/srs$ helm inspect values pega/backingservices > backingservices.yaml
myserver@pega-minikube-poc:~/pega/srs$ ls
backingservices.yaml

Edit the configuration file

 myserver@pega-minikube-poc:~/pega/srs$ cat backingservices.yaml 
---
global:
  imageCredentials:
    registry: "pega-docker.downloads.pega.com"
    username: "pega_provide_UserID"
    password: "pega_provide_APIKey"
  # Specify the value of your Kubernetes provider
  k8sProvider: "k8s"

# Search and Reporting Service (SRS) Configuration
srs:
  # Set srs.enabled=true to enable SRS
  enabled: true

  # specify unique name for the deployment based on org app and/or srs applicable environment name. eg: acme-demo-dev-srs
  deploymentName: "pega-poc-srs"

  # Configure the location of the busybox image that is used during the deployment process of
  # the internal Elasticsearch cluster
  busybox:
    image: "alpine:3.20.2"
    imagePullPolicy: "IfNotPresent"

  srsRuntime:
    # Number of pods to provision
    replicaCount: 1

    # docker image of the srs-service, platform-services/search-n-reporting-service:dockerTag
    srsImage: "pega-docker.downloads.pega.com/platform-services/search-n-reporting-service:1.28.1"

    # To avoid exposing Docker credentials, optionally create a separate Docker config secret.
    # Specify secret names as an array of comma-separated strings. For example: ["secret1", "secret2"]
    imagePullSecretNames: []

    env:
      # AuthEnabled may be set to true when there is an authentication mechanism in place between SRS and Pega Infinity.
      AuthEnabled: false
      # When `AuthEnabled` is `true`, enter the appropriate public key URL. When `AuthEnabled` is `false`(default), leave this parameter empty.
      OAuthPublicKeyURL: ""

    # Use this parameter to configure values for Java options.
    javaOpts: ""

    # Set to true if you require a highly secured connection that complies with NIST SP 800-53 and NIST SP 800-131. Otherwise, set to false.
    enableSecureCryptoMode: false

    # Apply securityContext to SRS pods. Example:
    # securityContext:
    #   runAsUser: 9999
    #   fsGroup: 0

    # Apply securityContext to SRS containers. Example:
    # containerSecurityContext:
    #   allowPrivilegeEscalation: false
    #   capabilities:
    #     drop:
    #     - ALL
    #   runAsNonRoot: true
    

******

# This section specifies the configuration for deploying an internal elasticsearch cluster for use with SRS.
# The configuration for rest of the values defined under 'elasticsearch' are to define the elasticsearch cluster
# based on helm charts defined at https://github.com/elastic/helm-charts/tree/master/elasticsearch and may be modified
# as per runtime and storage requirements.
elasticsearch:
  # For internally provisioned Elasticsearch server, the imageTag parameter is set by default to 7.17.9, which is the
  # recommended Elasticsearch server version for k8s version >= 1.25.
  # Use this parameter to change it to 7.10.2 or 7.16.3 for k8s version < 1.25 and make sure to update the Elasticsearch helm chart version in requirements.yaml.
  imageTag: 7.17.9
  replicas: 1
  # Permit co-located instances for solitary minikube virtual machines.
  antiAffinity: "soft"
  # Shrink default JVM heap.
  esJavaOpts: "-Xmx1024m -Xms1024m"
  # Allocate smaller chunks of memory per pod.
  # This section specifies the elasticsearch cluster configuration for authentication and TLS.
  # If you previously set srs.srsStorage.tls.enabled: true, you must uncomment the line to use protocol: https parameter.
  # protocol: https

  # Uncomment the below lines if you want to deploy/upgrade Elasticsearch server version >= 8.x
  # createCert: false
  # secret:
  #   enabled: false
  # protocol: http

  # For deployments that use TLS-based authentication to an internal Elasticsearch service in the SRS cluster,
  # uncomment and appropriately add below lines under esConfig.elasticsearch.yml.
  # xpack.security.http.ssl.enabled: true
  # xpack.security.http.ssl.truststore.path: /usr/share/elasticsearch/config/certs/elastic-certificates.p12
  # xpack.security.http.ssl.keystore.path: /usr/share/elasticsearch/config/certs/elastic-certificates.p12

  esConfig:
    elasticsearch.yml: |
      xpack.security.enabled: false
      xpack.security.transport.ssl.enabled: false
    #  xpack.security.transport.ssl.verification_mode: certificate
    #  xpack.security.transport.ssl.keystore.path: /usr/share/elasticsearch/config/certs/elastic-certificates.p12
    #  xpack.security.transport.ssl.truststore.path: /usr/share/elasticsearch/config/certs/elastic-certificates.p12
    # Uncomment the below lines if you want to deploy/upgrade Elasticsearch server version >= 8.x by adding below lines under esConfig.elasticsearch.yml.
    # action.destructive_requires_name: false
    # ingest.geoip.downloader.enabled: false

  # Use this section to include additional, supported environmental variables for Elasticsearch basic authentication.
  # The parameter values can be read from a specified secrets file.
  extraEnvs:
    - name: ELASTIC_PASSWORD
      valueFrom:
        secretKeyRef:
          name: srs-elastic-credentials
          key: password
    - name: ELASTIC_USERNAME
      valueFrom:
        secretKeyRef:
          name: srs-elastic-credentials
          key: username

  resources:
    requests:
      cpu: "1000m"
      memory: "2Gi"
    limits:
      cpu: "2000m"
      memory: "3Gi"

  volumeClaimTemplate:
    accessModes: ["ReadWriteOnce"]
    storageClassName: "pega-poc-es-local-storage"
    resources:
      requests:
        # configure volume size of the elasticsearch nodes based on search data storage requirements. The default storage size from elasticsearch is 30Gi.
        storage: 30Gi

  # elasticsearch.secretMounts will help reading certificates from elastic-certificates secret.
  secretMounts:
    - name: elastic-certificates
      secretName: elastic-certificates
      path: /usr/share/elasticsearch/config/certs

# For Openshift deployments, you must enable the following custom values. For details
# refer to https://github.com/elastic/helm-charts/tree/master/elasticsearch/examples/openshift.
#  securityContext:
#    runAsUser: null
#  podSecurityContext:
#    fsGroup: null
#    runAsUser: null
#  sysctlInitContainer:
#    enabled: false
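
Before installing, the rendered manifests can be inspected with a client-side render of the chart against the edited values (standard helm template usage):

helm template pega-poc-srs pega/backingservices -n pega-poc --values backingservices.yaml | less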
  • Start the SRS service
myserver@pega-minikube-poc:~/pega/srs$ helm install pega-poc-srs pega/backingservices -n pega-poc --values backingservices.yaml
NAME: pega-poc-srs
LAST DEPLOYED: Thu Feb 20 14:14:45 2025
NAMESPACE: pega-poc
STATUS: deployed
REVISION: 1
myserver@pega-minikube-poc:~/pega$ kubectl get pod,pv,pvc -n pega-poc
NAME                                READY   STATUS    RESTARTS   AGE
pod/elasticsearch-master-0          1/1     Running   0          13m
pod/pega-poc-srs-794cdfb5db-s4hzn   1/1     Running   0          13m

NAME                              CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                                  STORAGECLASS                VOLUMEATTRIBUTESCLASS   REASON   AGE
persistentvolume/pega-poc-es-pv   30Gi       RWO            Retain           Bound    pega-poc/elasticsearch-master-elasticsearch-master-0   pega-poc-es-local-storage   <unset>                          13m

NAME                                                                STATUS   VOLUME           CAPACITY   ACCESS MODES   STORAGECLASS                VOLUMEATTRIBUTESCLASS   AGE
persistentvolumeclaim/elasticsearch-master-elasticsearch-master-0   Bound    pega-poc-es-pv   30Gi       RWO            pega-poc-es-local-storage   <unset>                 13m
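
To sanity-check the internal Elasticsearch cluster, port-forward the elasticsearch-master service and query the standard cluster health endpoint (security is disabled in esConfig above, so no credentials are needed):

kubectl port-forward svc/elasticsearch-master 9200:9200 -n pega-poc &
curl "http://localhost:9200/_cluster/health?pretty"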

Issue 1: The local path bound by the PersistentVolume does not exist. Create it inside the minikube node. Local paths are not recommended in real application environments.

  Normal   Scheduled    3m45s                default-scheduler  Successfully assigned pega-poc/elasticsearch-master-0 to minikube
  Warning  FailedMount  97s (x9 over 3m45s)  kubelet            MountVolume.NewMounter initialization failed for volume "pega-poc-es-pv" : path "/home/myserver/pega/srs/es/pv/pv1" does not exist

Solution: SSH into the minikube node and create the path manually.


myserver@pega-minikube-poc:~/pega/srs$ minikube ssh
docker@minikube:~$ sudo mkdir -p /home/myserver/pega/srs/es/pv/pv1
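
The same directory can also be created non-interactively by passing the command to minikube ssh:

minikube ssh "sudo mkdir -p /home/myserver/pega/srs/es/pv/pv1"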

This data lives inside minikube's Docker container; if the minikube container is removed, the data is lost. Use with caution.

myserver@pega-minikube-poc:~$ docker container ls -a | grep minikube
58f9f35ddc3f   kicbase/stable:v0.0.46          "/usr/local/bin/entr…"   4 hours ago    Up 3 hours    127.0.0.1:32818->22/tcp, 127.0.0.1:32819->2376/tcp, 127.0.0.1:32820->5000/tcp, 127.0.0.1:32821->8443/tcp, 127.0.0.1:32822->32443/tcp   minikube
myserver@pega-minikube-poc:~$ docker exec -it minikube bash
root@minikube:/# ls home/myserver/pega/srs/es/pv/pv1/nodes/0/
_state  indices  node.lock  snapshot_cache

Step 7: Edit values-minimal.yaml to deploy the PEGA nodes

 ---
global:
  # This values.yaml file is an example of a minimal Pega
  # deployment configuration.  For more information about
  # configuration options, see the project readme.

  # Enter your Kubernetes provider.
  provider: "k8s"

  # Enter a name for the deployment if using multi-tenant services such as the Search and Reporting Service.
  customerDeploymentId:

  # Deploy Pega nodes
  actions:
    execute: "deploy"
  # Add custom certificates to be mounted to container
  # to support custom certificates as plain text (less secure), pass them directly using the certificates parameter;
  # to support multiple custom certificates as external secrets, specify each of your external secrets
  # as an array of comma-separated strings using the certificatesSecrets parameter.
  certificatesSecrets: []
  certificates:

  # Add krb5.conf file content here.
  # Feature is used for Decisioning data flows to fetch data from Kafka or HBase streams
  kerberos: {}

  # Set to true to comply with NIST SP 800-53 and NIST SP 800-131.
  highlySecureCryptoModeEnabled: false

  # If a storage class to be passed to the VolumeClaimTemplates in search and stream pods, it can be specified here:
  storageClassName: ""
  # Provide JDBC connection information to the Pega relational database
  #   If you are installing or upgrading on IBM DB2, update the udb.conf file in the /charts/pega/charts/installer/config/udb directory with any additional connection properties.
  jdbc:
    #   url     Valid values are:
    #
    #   Oracle              jdbc:oracle:thin:@//localhost:1521/dbName
    #   IBM DB/2 z / OS         jdbc:db2://localhost:50000/dbName
    #   IBM DB/2            jdbc:db2://localhost:50000/dbName:fullyMaterializeLobData=true;fullyMaterializeInputStreams=true;
    #                       progressiveStreaming=2;useJDBC4ColumnNameAndLabelSemantics=2;
    #   SQL Server          jdbc:sqlserver://localhost:1433;databaseName=dbName;selectMethod=cursor;sendStringParametersAsUnicode=false
    #   PostgreSQL          jdbc:postgresql://localhost:5432/dbName
    url: "jdbc:postgresql://192.168.49.1:30054/postgres"
    #   driverClass     -- jdbc class.  Valid values are:
    #
    #   Oracle              oracle.jdbc.OracleDriver
    #   IBM DB/2            com.ibm.db2.jcc.DB2Driver
    #   SQL Server          com.microsoft.sqlserver.jdbc.SQLServerDriver
    #   PostgreSQL          org.postgresql.Driver
    driverClass: "org.postgresql.Driver"
    #   pega.database.type      Valid values are: mssql, oracledate, udb, db2zos, postgres
    dbType: "postgres"
    #   For databases that use multiple JDBC driver files (such as DB2), specify comma separated values for 'driverUri'
    driverUri: "https://jdbc.postgresql.org/download/postgresql-42.7.3.jar"
    username: "postgres"
    password: "postgres"
    # To avoid exposing username & password, leave the jdbc.password & jdbc.username parameters empty (no quotes),
    # configure JDBC username & password parameters in the External Secrets Manager, and enter the external secret for the credentials
    # make sure the keys in the secret should be DB_USERNAME and DB_PASSWORD respectively
    external_secret_name: ""
    # CUSTOM CONNECTION PROPERTIES
    # Add a list of ; delimited connections properties. The list must end with ;
    # For example: connectionProperties=user=usr;password=pwd;
    connectionProperties: ""
    rulesSchema: "pega_rules"
    dataSchema: "pega_data"
    customerDataSchema: "pega_cust_data"

  customArtifactory:
    # If you use a secured custom artifactory to manager your JDBC driver,
    # provide the authentication details below by filling in the appropriate authentication section,
    # either basic or apiKey.
    authentication:
      # Provide the basic authentication credentials or the API key authentication details to satisfy your custom artifactory authentication mechanism.
      basic:
        username: ""
        password: ""
      apiKey:
        headerName: ""
        value: ""
      # To avoid exposing basic.username,basic.password,apiKey.headerName,apiKey.value parameters, configure the
      # basic.username,basic.password,apiKey.headerName,apiKey.value parameters in External Secrets Manager, and enter the external secret for the credentials
      # make sure the keys in the secret should be CUSTOM_ARTIFACTORY_USERNAME , CUSTOM_ARTIFACTORY_PASSWORD , CUSTOM_ARTIFACTORY_APIKEY_HEADER , CUSTOM_ARTIFACTORY_APIKEY
      external_secret_name: ""
    # Leave customArtifactory.enableSSLVerification enabled to ensure secure access to your custom artifactory;
    # when customArtifactory.enableSSLVerification is false, SSL verification is skipped and establishes an insecure connection.
    enableSSLVerification: true
    # Provide a required domain certificate for your custom artifactory; if none is required, leave this field blank.
    certificate:

  docker:
    # If using a custom Docker registry, supply the credentials here to pull Docker images.
    registry:
      url: "pega-docker.downloads.pega.com"
      username: "pega_provide_UserID"
      password: "pega_provide_APIKey"
    # To avoid exposing Docker registry details, create secrets to manage your Docker registry credentials.
    # Specify secret names as an array of comma-separated strings in double quotation marks using the imagePullSecretNames parameter. For example: ["secret1", "secret2"]
    imagePullSecretNames: []
    # Docker image information for the Pega docker image, containing the application server.
    pega:
      image: "platform/pega:8.23.1"

  utilityImages:
    busybox:
      image: busybox:1.31.0
      imagePullPolicy: IfNotPresent
    k8s_wait_for:
      image: pegasystems/k8s-wait-for
      imagePullPolicy: "IfNotPresent"
      # waitTimeSeconds: 2
      # maxRetries: 1

  pegaDiagnosticUser: ""
  pegaDiagnosticPassword: ""

  # Specify the Pega tiers to deploy
  # For a minimal deployment, use a single tier to reduce resource consumption.
  # Note: The nodeType Stream is not supported, enable externalized Kafka service instead.
  # configuration under External Services
  tier:
    - name: "minikube"
      nodeType: "BackgroundProcessing,WebUser"

      service:
        httpEnabled: true
        port: 80
        targetPort: 8080
        # Without a load balancer, use a direct NodePort instead.
        serviceType: "NodePort"
        # To configure TLS between the ingress/load balancer and the backend, set the following:
        tls:
          enabled: false
          # To avoid entering the certificate values in plain text, configure the keystore, keystorepassword, cacertificate parameter
          # values in the External Secrets Manager, and enter the external secret name below
          # make sure the keys in the secret should be TOMCAT_KEYSTORE_CONTENT, TOMCAT_KEYSTORE_PASSWORD and ca.crt respectively
          external_secret_name: ""
          keystore:
          keystorepassword:
          port: 443
          targetPort: 8443
          # set the value of CA certificate here in case of baremetal/openshift deployments - CA certificate should be in base64 format
          # pass the certificateChainFile file if you are using certificateFile and certificateKeyFile
          cacertificate:
          # provide the SSL certificate and private key as a PEM format
          certificateFile:
          certificateKeyFile:
          # if you will deploy traefik addon chart and enable traefik, set enabled=true; otherwise leave the default setting.
          traefik:
            enabled: false
            # the SAN of the certificate present inside the container
            serverName: ""
            # set insecureSkipVerify=true, if the certificate verification has to be skipped
            insecureSkipVerify: false

      ingress:
        # Enter the domain name to access web nodes via a load balancer.
        #  e.g. web.mypega.example.com
        domain: "YOUR_MINIKUBE_NODE_DOMAIN"
        # Configure custom path for given host along with pathType. Default pathType is ImplementationSpecific.
        # path:
        # pathType:
        tls:
          # Enable TLS encryption
          enabled: false
          # For Openshift, Pega deployments enable TLS to secure the connection
          # from the browser to the router by creating the route using reencrypt termination policy.
          # Add your certificate, the corresponding key using the appropriate .pem or .crt format and
          # specify a CA certificate to validate the endpoint certificate.
          certificate:
          key:
          cacertificate:

      # Set resource consumption to minimal levels
      replicas: 1
      javaOpts: ""
      initialHeap: "4096m"
      maxHeap: "4096m"
      resources:
        requests:
          memory: "6Gi"
          cpu: 200m
        limits:
          memory: "6Gi"
          cpu: 2
      volumeClaimTemplate:
        resources:
          requests:
            storage: 5Gi

# External services

# Cassandra automatic deployment settings.
# Disabled by default for minimal deployments.
cassandra:
  enabled: false

# DDS (external Cassandra) connection settings.
# These settings should only be modified if you are using a custom Cassandra deployment.
#dds:
  # A comma separated list of hosts in the Cassandra cluster.
#  externalNodes: ""
  # TCP Port to connect to cassandra.
#  port: "9042"
  # The username for authentication with the Cassandra cluster.
#  username: "dnode_ext"
  # The password for authentication with the Cassandra cluster.
#  password: "dnode_ext"
  # To avoid exposing username,password,trustStorePassword,keyStorePassword parameters, configure the
  # username,password,trustStorePassword,keyStorePassword parameters in External Secrets Manager, and enter the external secret for the credentials
  # make sure the keys in the secret should be CASSANDRA_USERNAME, CASSANDRA_PASSWORD , CASSANDRA_TRUSTSTORE_PASSWORD , CASSANDRA_KEYSTORE_PASSWORD
#  external_secret_name: ""

# Elasticsearch deployment settings.
# Note: This Elasticsearch deployment is used for Pega search, and is not the same Elasticsearch deployment used by the EFK stack.
# These search nodes will be deployed regardless of the Elasticsearch configuration above.
pegasearch:
  image: "pegasystems/search"
  memLimit: "3Gi"

  # Set externalSearchService to true to use the Search and Reporting Service.
  # Refer to the README document to configure SRS as a search functionality provider under this section.
  externalSearchService: true
  externalURL: "pega-poc-srs.pega-poc"
  srsAuth:
    enabled: false
    url: ""
    clientId: ""
    authType: ""
    privateKey: ""
    external_secret_name: ""

# Pega Installer settings
installer:
  image: "platform/installer:8.23.1"
  adminPassword: "install"

# Hazelcast settings (applicable from Pega 8.6)
hazelcast:
  # Hazelcast docker image for platform version 8.6 through 8.7.x
  image: "YOUR_HAZELCAST_IMAGE:TAG"
  # Hazelcast docker image for platform version 8.8 and later
  clusteringServiceImage: "platform/clustering-service:1.3.9"

  # Setting below to true will deploy Pega Platform using a client-server Hazelcast model for version 8.6 through 8.7.x.
  # Note: Make sure to set this value as "false" in case of Pega Platform version before "8.6". If not set this will fail the installation.
  enabled: false

  # Setting below to true will deploy Pega Platform using a client-server Hazelcast model for version 8.8 and later.
  clusteringServiceEnabled: true
  # Set to true to enforce SSL communication between the Clustering Service and Pega Platform.
  encryption:
    enabled: false
  # Setting related to Hazelcast migration.
  migration:
    # Set to `true` to initiate the migration job.
    initiateMigration: false
    # Reference the `platform/clustering-service-kubectl` Docker image to create the migration job.
    migrationJobImage: "YOUR_MIGRATION_JOB_IMAGE:TAG"
    # Set to `true` when migrating from embedded Hazelcast.
    embeddedToCSMigration: false

  # No. of initial members to join
  replicas: 1
  # UserName in the client-server Hazelcast model authentication. This setting is exposed and not secure.
  username: "hazelcast_user"
  # Password in the client-server Hazelcast model authentication. This setting is exposed and not secure.
  password: "hazelcast_password"
  # To avoid exposing username and password parameters, leave these parameters empty and configure
  # these cluster settings using an External Secrets Manager. Use the following keys in the secret:
  # HZ_CS_AUTH_USERNAME for username and HZ_CS_AUTH_PASSWORD for password.
  # Enter the external secret for these credentials below.
  external_secret_name: ""

# Stream (externalized Kafka service) settings.
stream:
  # Beginning with Pega Platform '23, enabled by default; when disabled, your deployment does not use a "Kafka stream service" configuration.
  enabled: true
  # Provide externalized Kafka service broker urls.
  bootstrapServer: "http://192.168.49.1:30053"
  # Provide Security Protocol used to communicate with kafka brokers. Supported values are: PLAINTEXT, SSL, SASL_PLAINTEXT, SASL_SSL.
  securityProtocol: PLAINTEXT
  # If required, provide trustStore certificate file name
  # When using a trustStore certificate, you must also include a Kubernetes secret name, that contains the trustStore certificate,
  # in the global.certificatesSecrets parameter.
  # Pega deployments only support trustStores using the Java Key Store (.jks) format.
  trustStore: ""
  # If required provide trustStorePassword value in plain text.
  trustStorePassword: ""
  # If required, provide keyStore certificate file name
  # When using a keyStore certificate, you must also include a Kubernetes secret name, that contains the keyStore certificate,
  # in the global.certificatesSecrets parameter.
  # Pega deployments only support keyStores using the Java Key Store (.jks) format.
  keyStore: ""
  # If required, provide keyStore value in plain text.
  keyStorePassword: ""
  # If required, provide jaasConfig value in plain text.
  jaasConfig: ""
  # If required, provide a SASL mechanism**. Supported values are: PLAIN, SCRAM-SHA-256, SCRAM-SHA-512.
  saslMechanism: PLAIN
  # By default, topics originating from Pega Platform have the pega- prefix,
  # so that it is easy to distinguish them from topics created by other applications.
  # Pega supports customizing the name pattern for your Externalized Kafka configuration for each deployment.
  streamNamePattern: "pega-{stream.name}"
  # Your replicationFactor value cannot be more than the number of Kafka brokers. The Pega recommended value is 3.
  replicationFactor: "1"
  # To avoid exposing trustStorePassword, keyStorePassword, and jaasConfig parameters, leave the values empty and
  # configure them using an External Secrets Manager, making sure you configure the keys in the secret in the order:
  # STREAM_TRUSTSTORE_PASSWORD, STREAM_KEYSTORE_PASSWORD and STREAM_JAAS_CONFIG.
  # Enter the external secret name below.
  external_secret_name: ""
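
With actions.execute now set to "deploy", a dry run can catch rendering or validation errors before anything is created in the cluster (standard Helm flag):

helm install pega-poc-web pega/pega -n pega-poc --values values-minimal.yaml --dry-run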

Deploy the PEGA web tier and supporting services

myserver@pega-minikube-poc:~/pega/web$ helm install pega-poc-web  pega/pega -n pega-poc --values values-minimal.yaml 
NAME: pega-poc-web
LAST DEPLOYED: Thu Feb 20 15:05:03 2025
NAMESPACE: pega-poc
STATUS: deployed
REVISION: 1
TEST SUITE: None

Check the deployment result

 myserver@pega-minikube-poc:~/pega/web$ kubectl get pod,pv,pvc,svc -n pega-poc
NAME                                READY   STATUS    RESTARTS      AGE
pod/clusteringservice-0             1/1     Running   0             15m
pod/elasticsearch-master-0          1/1     Running   0             32m
pod/pega-minikube-0                 1/1     Running   0             15m
pod/pega-poc-srs-6c5c6c5fcc-w5g77   1/1     Running   0             32m

NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                                  STORAGECLASS                VOLUMEATTRIBUTESCLASS   REASON   AGE
persistentvolume/pega-poc-es-pv                             30Gi       RWO            Retain           Bound    pega-poc/elasticsearch-master-elasticsearch-master-0   pega-poc-es-local-storage   <unset>                          36m
persistentvolume/pvc-2ea9c225-ebf6-457f-a605-394d1455debc   5Gi        RWO            Delete           Bound    pega-poc/pega-minikube-pega-minikube-0                 standard                    <unset>                          29m

NAME                                                                STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS                VOLUMEATTRIBUTESCLASS   AGE
persistentvolumeclaim/elasticsearch-master-elasticsearch-master-0   Bound    pega-poc-es-pv                             30Gi       RWO            pega-poc-es-local-storage   <unset>                 35m
persistentvolumeclaim/pega-minikube-pega-minikube-0                 Bound    pvc-2ea9c225-ebf6-457f-a605-394d1455debc   5Gi        RWO            standard                    <unset>                 29m

NAME                                    TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)             AGE
service/clusteringservice-service       ClusterIP   None             <none>        5701/TCP            15m
service/elasticsearch-master            ClusterIP   10.97.107.212    <none>        9200/TCP,9300/TCP   32m
service/elasticsearch-master-headless   ClusterIP   None             <none>        9200/TCP,9300/TCP   32m
service/pega-minikube                   NodePort    10.107.117.217   <none>        80:32743/TCP        15m
service/pega-poc-srs                    ClusterIP   10.99.127.129    <none>        8080/TCP,80/TCP     32m

Step 8: Log in to PEGA via the port exposed by the pega-minikube service

  • Get the minikube IP
myserver@pega-minikube-poc:~/pega/web$ minikube ip
192.168.49.2
  • Access the login page at http://192.168.49.2:32743/prweb (the port is exposed inside the minikube container, so the minikube IP address must be used; see the snippet below for assembling this URL automatically).
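
The full URL can be assembled from live cluster values instead of reading the NodePort by hand (service name as in the listing above):

PEGA_URL="http://$(minikube ip):$(kubectl get svc pega-minikube -n pega-poc -o jsonpath='{.spec.ports[0].nodePort}')/prweb"
echo "$PEGA_URL"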

Log in with the administrator password set during installation; you will be prompted to change it.

After changing the password you land on the home page, which by default offers to create a new Application.

You can also switch directly to Dev Studio to view other information, such as the current PEGA version.


That is all for this article. Thanks for reading.


