We understand that different enterprises have different policies for their Kubernetes clusters, so we provide a rich set of customizations for the Deepfactor Kubernetes portal installation. You can supply an override.yaml file while deploying our Helm charts in your cluster. Some common customization scenarios are described below.
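For example, the override file can be passed to Helm with the -f flag. The release name, chart reference, and namespace below are placeholders for illustration:

helm upgrade --install <release-name> <deepfactor-chart> \
  --namespace <namespace> \
  -f override.yaml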
Change storage class
The default value for the storage class is empty, which means Kubernetes will use the cluster's default StorageClass to provision the volumes. You can use a different storage class by setting the storageClassName value in override.yaml as shown below.
storage:
  storageClassName: <storage-class-name>
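To list the storage classes available in your cluster (and see which one is marked as the default), you can run:

kubectl get storageclass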
Update volume size
We create four volumes for our StatefulSets; their default sizes are as follows:
- Postgres – 100Gi
- Clickhouse – 300Gi
- ArchiveStore – 100Gi
- SymbolService – 5Gi
You can change the volume sizes by setting the following values in override.yaml:
postgres:
  storage:
    requests: <volume-size-for-postgres>
clickhouse:
  storage:
    requests: <volume-size-for-clickhouse>
archivestore:
  storage:
    requests: <volume-size-for-archivestore>
symbolsvc:
  storage:
    requests: <volume-size-for-symbolsvc>
If you want to update the size of a particular volume, add only the section for that volume. For example, if you want to update the volume size of Postgres to 10Gi, you can set the following values in override.yaml.
postgres:
  storage:
    requests: 10Gi
Use a private image registry, image, tag and pull policy
If you intend to pull Deepfactor container images from your private registry, you can update the image registry, name, tag, and pull policy for each service in override.yaml as shown below.
# Make sure that the registry name ends with a '/'.
# For example: quay.io/ is a correct value here and quay.io is incorrect.
deepfactorImageRegistry: <registry_name_with_a_slash_at_end>
apisvc:
  image:
    tag: <image_tag>
    repository: <image_name_with_repository>
    pullPolicy: <image_pull_policy>
alertsvc:
  image:
    tag: <image_tag>
    repository: <image_name_with_repository>
    pullPolicy: <image_pull_policy>
authsvc:
  image:
    tag: <image_tag>
    repository: <image_name_with_repository>
    pullPolicy: <image_pull_policy>
cvesvc:
  image:
    tag: <image_tag>
    repository: <image_name_with_repository>
    pullPolicy: <image_pull_policy>
depchecksvc:
  image:
    tag: <image_tag>
    repository: <image_name_with_repository>
    pullPolicy: <image_pull_policy>
eventsvc:
  image:
    tag: <image_tag>
    repository: <image_name_with_repository>
    pullPolicy: <image_pull_policy>
notificationsvc:
  image:
    tag: <image_tag>
    repository: <image_name_with_repository>
    pullPolicy: <image_pull_policy>
proxysvc:
  image:
    tag: <image_tag>
    repository: <image_name_with_repository>
    pullPolicy: <image_pull_policy>
symbolsvc:
  image:
    tag: <image_tag>
    repository: <image_name_with_repository>
    pullPolicy: <image_pull_policy>
statussvc:
  image:
    tag: <image_tag>
    repository: <image_name_with_repository>
    pullPolicy: <image_pull_policy>
nginx:
  image:
    tag: <image_tag>
    repository: <image_name_with_repository>
    pullPolicy: <image_pull_policy>
dfwebscan:
  webScanDriverImageName: "<image_name_with_repository>:<image_tag>"
  webScanInitImageName: "<image_name_with_repository>:<image_tag>"
  image:
    tag: <image_tag>
    repository: <image_name_with_repository>
    pullPolicy: <image_pull_policy>
archivestore:
  image:
    tag: <image_tag>
    repository: <image_name_with_repository>
    pullPolicy: <image_pull_policy>
clickhouse:
  image:
    tag: <image_tag>
    repository: <image_name_with_repository>
    pullPolicy: <image_pull_policy>
postgres:
  image:
    tag: <image_tag>
    repository: <image_name_with_repository>
    pullPolicy: <image_pull_policy>
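As with volume sizes, you only need to include the services whose images you want to override. For example, a minimal override.yaml that changes only the registry and the API service image could look like this:

deepfactorImageRegistry: <registry_name_with_a_slash_at_end>
apisvc:
  image:
    tag: <image_tag>
    repository: <image_name_with_repository>
    pullPolicy: <image_pull_policy>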
Disable ingress deployment
If you already have an ingress controller that you would like to use to expose the Deepfactor services, you can disable the ingress deployment bundled with Deepfactor. Please refer to the following document for detailed steps:
Deepfactor Portal Installation with Existing Ingress Controller
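As a rough sketch only: bundled subcharts are commonly toggled with an enabled flag in override.yaml, but the exact key for this chart is an assumption here, so follow the linked document for the supported steps.

# Hypothetical example; confirm the exact flag in the document linked above.
ingress-nginx:
  enabled: false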
Change ingress from Deployment to DaemonSet
If you would like to run the bundled ingress-nginx controller as a DaemonSet instead of a Deployment, set the following values in override.yaml:
ingress-nginx:
  controller:
    kind: DaemonSet
Use external IP instead of LoadBalancer
If your environment does not support LoadBalancer services, you can specify an external IP to access the Deepfactor portal. Please refer to the following document for detailed steps:
Deploying Deepfactor Portal using external IP
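As a hypothetical sketch only (the service to modify and the exact keys are assumptions; the linked document describes the supported procedure), exposing the portal's front-end service on an external IP could look roughly like this in override.yaml:

# Hypothetical sketch; confirm the service name and keys in the document linked above.
nginx:
  service:
    type: ClusterIP
    externalIPs:
      - <external-ip-address>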
nodeSelector for pods
If you would like to schedule Deepfactor pods on specific nodes, you can set the nodeSelector parameter for each Deepfactor deployment. For example, here we are scheduling Deepfactor's API service on nodes that have the label "disktype=ssd":
apisvc:
  nodeSelector:
    disktype: ssd
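If your nodes do not yet carry the label, you can add it with kubectl; the node name below is a placeholder:

kubectl label nodes <node-name> disktype=ssd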
For each of the Deepfactor microservices, you can override quite a few parameters. The default values for one of the deployments (apisvc) are shown below for your reference.
apisvc:
  replicas: 1
  name: apisvc
  initContainer:
    name: init-dfapisvc
    image:
      registry: ""
      repository: public.ecr.aws/deepfactor/alpine
      pullPolicy: IfNotPresent
      tag: 3.13
  image:
    repository: df/apisvc
    pullPolicy: IfNotPresent
    # Overrides the image tag whose default is the chart appVersion.
    tag: 2.0-966
  service:
    annotations: {}
    labels: {}
    clusterIP: ""
    enableMetrics: true
    externalIPs: []
    loadBalancerIP: ""
    loadBalancerSourceRanges: []
    servicePort: 8081
    grpcPort: 8082
    sessionAffinity: None
    type: ClusterIP
  ingress:
    annotations:
      kubernetes.io/ingress.class: nginx
      nginx.ingress.kubernetes.io/backend-protocol: GRPC
      nginx.ingress.kubernetes.io/client-body-buffer-size: 1M
    extraLabels: {}
  annotations: {}
  podAnnotations: {}
  podLabels: {}
  nodeSelector: {}
  tolerations: []
  resources: {}
  securityContext: {}