[cozystack-scheduler] Publish SchedulingClass demo #2252

Open
lllamnyp wants to merge 1 commit into main from docs/schedulingclass-demo
Conversation

@lllamnyp
Member

@lllamnyp lllamnyp commented Mar 20, 2026

What this PR does

Publish the scripts used for running the Cozystack scheduler demo at the community meetup on 19.03.2026.

Release note

[cozystack-scheduler] Add demo scripts to the cozystack-scheduler system
packages.

Summary by CodeRabbit

  • New Features

    • Added comprehensive interactive demo scripts showcasing scheduling class capabilities, including colocating workloads on single nodes, spreading pods evenly across nodes, enforcing one-pod-per-node policies, and pinning pods to specific nodes.
  • Documentation

    • Introduced a shared helper script with logging utilities and Kubernetes polling functions to support the demo workflow and interactive user guidance throughout each scheduling demonstration.

@dosubot dosubot bot added size:XL This PR changes 500-999 lines, ignoring generated files. enhancement New feature or request labels Mar 20, 2026
@gemini-code-assist
Contributor

Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request publishes a series of executable demo scripts designed to illustrate the capabilities of the Cozystack scheduler's SchedulingClass feature. These scripts provide practical examples of how to configure and observe different pod scheduling behaviors, such as colocation, anti-affinity, even spreading, and node pinning, using custom resources within a Kubernetes environment. The addition of these demos aims to provide clear, runnable examples for understanding and utilizing advanced scheduling policies.

Highlights

  • SchedulingClass Demos: Introduced a comprehensive set of demo scripts to showcase various functionalities of Cozystack's SchedulingClass feature, including node affinity, anti-affinity, and topology spread constraints.
  • Helper Functions: Added a utility script (00-helpers.sh) providing standardized logging, waiting, and display functions for Kubernetes resources, enhancing the readability and reusability of the demo scripts.
  • Tenant and Redis Demonstrations: Included scripts to create tenants linked to specific scheduling classes and deploy Redis instances to illustrate how different SchedulingClass configurations affect pod placement and behavior.
  • Cleanup Scripts: Provided dedicated cleanup scripts for each demo scenario, as well as a comprehensive teardown script, ensuring easy management and removal of demo resources.


@coderabbitai
Contributor

coderabbitai bot commented Mar 20, 2026

📝 Walkthrough

Added a complete interactive demo workflow for Kubernetes SchedulingClass patterns: a shared helper library with logging and kubectl utilities, setup scripts for scheduling classes and tenants, sequential demonstrations of four scheduling strategies (colocate, one-per-node, spread-evenly, pin-to-node2) using Redis deployments, plus cleanup and teardown scripts — 14 Bash scripts in all.

Changes

Cohort / File(s) — Summary

Shared Demo Infrastructure
packages/system/cozystack-scheduler/examples/demo/00-helpers.sh
New helper script with ANSI color variables, logging functions (log_info, log_success, log_error, etc.), and Kubernetes utilities (wait_for_pods, wait_for_redis_ready, show_pods).

Demo Initialization
packages/system/cozystack-scheduler/examples/demo/01-scheduling-classes.sh, 02-create-tenants.sh
Setup scripts creating four SchedulingClass resources (pin-to-node2, one-per-node, spread-evenly, colocate) and four corresponding Tenant resources tied to each scheduling class.

Colocate Demo
packages/system/cozystack-scheduler/examples/demo/03-colocate-demo.sh, 04-colocate-second-redis.sh, 05-colocate-cleanup.sh
Sequential demo scripts demonstrating pod colocation via affinity: creates Redis instances, scales replicas, shows pod placement on a single node, and cleans up resources.

One-Per-Node Demo
packages/system/cozystack-scheduler/examples/demo/06-onepernode-demo.sh, 07-onepernode-second-redis.sh, 08-onepernode-cleanup.sh
Demo scripts showcasing anti-affinity constraints across nodes: creates Redis instances with sentinel patching, scales to trigger pending pods due to constraints, verifies placement, and cleans up.

Spread Demo
packages/system/cozystack-scheduler/examples/demo/09-spread-demo.sh, 10-spread-cleanup.sh
Demo of topology spread constraints: creates a single Redis, scales to 9 replicas with a per-node count summary, and verifies even distribution across nodes.

Pin-to-Node2 Demo
packages/system/cozystack-scheduler/examples/demo/11-node2-demo.sh, 12-node2-cleanup.sh
Demo of node selector constraints: creates two Redis instances pinned to node2 via nodeSelector, verifies placement, and cleans up.

Teardown
packages/system/cozystack-scheduler/examples/demo/13-teardown.sh
Final cleanup script deleting all demo Tenant and SchedulingClass resources.
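The logging-plus-polling structure described above can be sketched roughly as follows. This is a hypothetical minimal reimplementation, not the actual 00-helpers.sh; function and parameter names (`poll_until`, `condition`, `timeout`) are illustrative, and the real helpers would wrap kubectl queries in a loop like this one:

```shell
#!/bin/sh
# Hypothetical sketch of the demo helper pattern, not the real 00-helpers.sh.
log_info()    { printf '\033[0;34m[INFO]\033[0m %s\n' "$1"; }
log_success() { printf '\033[0;32m[ OK ]\033[0m %s\n' "$1"; }
log_error()   { printf '\033[0;31m[FAIL]\033[0m %s\n' "$1" >&2; }

# Poll a shell condition until it succeeds or a timeout (seconds) elapses.
poll_until() {
    condition="$1"
    timeout="${2:-60}"
    waited=0
    until eval "$condition"; do
        if [ "$waited" -ge "$timeout" ]; then
            log_error "timed out after ${timeout}s waiting for: $condition"
            return 1
        fi
        sleep 2
        waited=$((waited + 2))
    done
    log_success "condition met: $condition"
}
```

A call such as `poll_until 'kubectl get rf redis-demo -n demo >/dev/null 2>&1' 120` (resource names made up here) replaces a fixed `sleep` with bounded polling.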

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

Poem

🐰 Hop-ping through clusters with scheduling flair,
Demos that dance where the pods declare—
Colocate, spread, and anti-affinity schemes,
Thirteen scripts bringing scheduler dreams!
With helpers so handy and steps crystal clear,
The demo's a winner— let's give a cheer!

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

Check name Status Explanation Resolution
Docstring Coverage ⚠️ Warning Docstring coverage is 38.46% which is insufficient. The required threshold is 80.00%. Write docstrings for the functions missing them to satisfy the coverage threshold.
✅ Passed checks (2 passed)
Check name Status Explanation
Description Check ✅ Passed Check skipped - CodeRabbit’s high-level summary is enabled.
Title check ✅ Passed The title accurately summarizes the main change: publishing a demo for SchedulingClass, which matches the addition of 13 demo scripts for the cozystack-scheduler.


Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces a comprehensive set of demo scripts for the Cozystack scheduler. The scripts are well-structured, numbered for sequential execution, and make good use of a shared helper script for logging and common functions, which enhances readability and maintainability. The interactive nature of the scripts, with pauses and clear step-by-step logging, is excellent for a demonstration.

My review focuses on improving the robustness of the scripts and fixing a minor bug. Specifically, I've suggested replacing fixed-time sleep calls with polling loops to make the scripts more reliable on clusters with varying performance. I also found a discrepancy between a logged command and the one being executed, which could cause confusion during the demo.


log_step "Patching RedisFailover to use 2 sentinels instead of 3"
log_info "With 3 nodes and anti-affinity, 1 storage + 3 sentinels = 4 pods won't fit."
log_command "kubectl patch rf $REDIS_NAME -n $NAMESPACE --type=merge -p '{\"spec\":{\"sentinel\":{\"replicas\":2}}}'"

high

The logged command is different from the command that is actually executed on line 51. The logged command uses rf $REDIS_NAME, while the executed command uses rf redis-"$REDIS_NAME". This can be confusing for someone running the demo. The logged command should accurately reflect what is being executed.

Suggested change
-log_command "kubectl patch rf $REDIS_NAME -n $NAMESPACE --type=merge -p '{\"spec\":{\"sentinel\":{\"replicas\":2}}}'"
+log_command "kubectl patch rf redis-$REDIS_NAME -n $NAMESPACE --type=merge -p '{\"spec\":{\"sentinel\":{\"replicas\":2}}}'"

Comment on lines +72 to +76
if [[ -n "$label" ]]; then
count=$(kubectl get pods -n "$namespace" -l "$label" --no-headers 2>/dev/null | wc -l)
else
count=$(kubectl get pods -n "$namespace" --no-headers 2>/dev/null | wc -l)
fi

medium

The kubectl get pods command is repeated within the if/else block, with the only difference being the -l "$label" argument. This can be refactored to avoid code duplication and improve maintainability by constructing the arguments array dynamically. For example:

local args=("-n" "$namespace" "--no-headers")
if [[ -n "$label" ]]; then
    args+=("-l" "$label")
fi
count=$(kubectl get pods "${args[@]}" 2>/dev/null | wc -l)

This would replace the entire if/else block.
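For scripts that must stay POSIX-portable (no Bash arrays), the same conditional-argument pattern can be expressed with the positional parameters. In this sketch `list_pods` is a hypothetical helper and printf stands in for kubectl:

```shell
#!/bin/sh
# POSIX-portable variant of the conditional-argument pattern.
# printf stands in for kubectl; the function name is illustrative.
list_pods() {
    namespace="$1"
    label="$2"
    # Rebuild the argument list, appending -l only when a label is given.
    set -- -n "$namespace" --no-headers
    if [ -n "$label" ]; then
        set -- "$@" -l "$label"
    fi
    printf '%s ' "$@"
}

list_pods demo app=redis   # prints the args: -n demo --no-headers -l app=redis
```

`set --` replaces the function's positional parameters, so `"$@"` carries the optionally extended argument list with word splitting handled correctly.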

log_success "Redis $REDIS_NAME created"

log_info "Waiting for RedisFailover to appear..."
sleep 10

medium

Using a fixed sleep 10 to wait for the RedisFailover resource to be created is brittle and can lead to script failures on slower clusters. It's better to use a polling loop to wait for the resource to become available. For example:

until kubectl get rf "redis-$REDIS_NAME" -n "$NAMESPACE" &> /dev/null; do
    log_info "Waiting for RedisFailover to appear..."
    sleep 2
done

log_success "Redis $REDIS_NAME created"

log_info "Waiting for RedisFailover to appear..."
sleep 10

medium

Using a fixed sleep 10 to wait for the RedisFailover resource to be created is brittle and can lead to script failures on slower clusters. It's better to use a polling loop to wait for the resource to become available. For example:

until kubectl get rf "redis-$REDIS_NAME" -n "$NAMESPACE" &> /dev/null; do
    log_info "Waiting for RedisFailover to appear..."
    sleep 2
done

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (4)
packages/system/cozystack-scheduler/examples/demo/00-helpers.sh (3)

91-97: Unusual stderr redirection in show_pods.

Line 95 uses 2>&1 >&2. Redirections are processed left to right against the current file descriptors: 2>&1 first points stderr at the original stdout, and >&2 then points stdout at that same target. The intent seems to be sending all output to stderr, but the net effect is that both streams land on the original stdout (in this demo context both usually reach the same terminal, so it works by coincidence).

A clearer approach:

♻️ Proposed simplification
 show_pods() {
     local namespace="$1"
     echo "" >&2
-    kubectl get pods -n "$namespace" -o wide 2>&1 >&2
+    kubectl get pods -n "$namespace" -o wide >&2 2>&1
     echo "" >&2
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/system/cozystack-scheduler/examples/demo/00-helpers.sh` around lines
91 - 97, The show_pods function uses the confusing redirection "2>&1 >&2" for
the kubectl get pods call; replace that with a single clear redirect so both
stdout and stderr go to stderr (e.g., change the kubectl get pods -n
"$namespace" -o wide 2>&1 >&2 invocation to a single redirection such as kubectl
get pods -n "$namespace" -o wide >&2) to ensure all output is sent to stderr;
update the kubectl line inside the show_pods function accordingly.

62-89: Consider checking pod readiness, not just existence.

wait_for_pods only waits for pods to appear (count > 0), but doesn't verify they're actually running or ready. For demo purposes this may be intentional (you want to show the scheduling quickly), but the function name and comment suggest it waits for pods to "stabilize."

If stabilization is desired, consider adding a readiness check:

♻️ Optional enhancement for readiness check
     while true; do
         local count
         if [[ -n "$label" ]]; then
-            count=$(kubectl get pods -n "$namespace" -l "$label" --no-headers 2>/dev/null | wc -l)
+            count=$(kubectl get pods -n "$namespace" -l "$label" --field-selector=status.phase=Running --no-headers 2>/dev/null | wc -l)
         else
-            count=$(kubectl get pods -n "$namespace" --no-headers 2>/dev/null | wc -l)
+            count=$(kubectl get pods -n "$namespace" --field-selector=status.phase=Running --no-headers 2>/dev/null | wc -l)
         fi
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/system/cozystack-scheduler/examples/demo/00-helpers.sh` around lines
62 - 89, wait_for_pods currently only checks pod existence (count > 0) and not
readiness; change it to wait for pods to be Ready/stable by using kubectl wait
--for=condition=ready (or polling kubectl get pods -n "$namespace" -l "$label"
-o jsonpath='{range .items[*]}{.metadata.name}:{.status.phase}:{range
.status.conditions[*]}{.type}={.status};{end}{"\n"}{end}') and ensure all
matching pods report Ready=true (or .status.phase==Running) before breaking;
keep the same parameters (namespace, label, timeout) and update log messages
(log_info/log_warning) and return codes so the function returns success only
when pods are ready or returns 1 on timeout.

99-121: Use condition type filter instead of array index to avoid fragile status checks.

The jsonpath .status.conditions[0].status assumes the first condition is always the "Ready" condition. Kubernetes resources typically have multiple conditions (e.g., Initialized, Ready, ContainersReady), and order is not guaranteed. Filter by condition type instead:

♻️ Proposed fix to query by condition type
     while true; do
         local phase
-        phase=$(kubectl get redis "$name" -n "$namespace" -o jsonpath='{.status.conditions[0].status}' 2>/dev/null || echo "")
+        phase=$(kubectl get redis "$name" -n "$namespace" -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}' 2>/dev/null || echo "")
         if [[ "$phase" == "True" ]]; then
             log_success "Redis $name is ready"
             break
         fi

This pattern aligns with how conditions are queried throughout the codebase (dashboard, packagesource types, and other operators).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/system/cozystack-scheduler/examples/demo/00-helpers.sh` around lines
99 - 121, The wait_for_redis_ready function uses the fragile jsonpath
.status.conditions[0].status; change the kubectl jsonpath to select the
condition by type (e.g. .status.conditions[?(@.type=="Ready")].status) so you
explicitly read the Ready condition, keep the same variable name phase and the
same truth check, and preserve the timeout/sleep logic and fallback to empty
string on kubectl errors.
packages/system/cozystack-scheduler/examples/demo/09-spread-demo.sh (1)

67-69: Consider: Awk column extraction assumes fixed kubectl output format.

The awk '{print $7}' assumes the NODE column is always the 7th field in kubectl get pods -o wide output. While this is standard, using -o custom-columns or -o jsonpath would be more robust.

For a demo script, this is acceptable, but for production tooling, consider:

kubectl get pods -n "$NAMESPACE" -o jsonpath='{range .items[*]}{.spec.nodeName}{"\n"}{end}' | sort | uniq -c | sort -rn
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/system/cozystack-scheduler/examples/demo/09-spread-demo.sh` around
lines 67 - 69, Replace the brittle awk column extraction used in the "kubectl
get pods -n \"$NAMESPACE\" -o wide" pipeline (the awk '{print $7}' reference)
with a robust jsonpath-based extraction of the pod's node name (use kubectl -o
jsonpath to read .spec.nodeName for each item), then pipe the results to sort |
uniq -c | sort -rn so the log_step "Pod count per node:" output no longer
depends on column positions.
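The counting pipeline in that suggestion can be exercised without a cluster; the node names below are made up:

```shell
#!/bin/sh
# Stand-in for the jsonpath output: one node name per pod.
nodes="node1
node2
node1
node3
node1"

# Same aggregation as the suggested command: pods per node, busiest first.
counts=$(printf '%s\n' "$nodes" | sort | uniq -c | sort -rn)
echo "$counts"
```

The first output line is the most loaded node with its pod count; ties among less loaded nodes may appear in either order.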

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: e0252ae8-7187-43ea-aaa5-f586d23a6fd3

📥 Commits

Reviewing files that changed from the base of the PR and between 4bdc724 and 482255a.

📒 Files selected for processing (14)
  • packages/system/cozystack-scheduler/examples/demo/00-helpers.sh
  • packages/system/cozystack-scheduler/examples/demo/01-scheduling-classes.sh
  • packages/system/cozystack-scheduler/examples/demo/02-create-tenants.sh
  • packages/system/cozystack-scheduler/examples/demo/03-colocate-demo.sh
  • packages/system/cozystack-scheduler/examples/demo/04-colocate-second-redis.sh
  • packages/system/cozystack-scheduler/examples/demo/05-colocate-cleanup.sh
  • packages/system/cozystack-scheduler/examples/demo/06-onepernode-demo.sh
  • packages/system/cozystack-scheduler/examples/demo/07-onepernode-second-redis.sh
  • packages/system/cozystack-scheduler/examples/demo/08-onepernode-cleanup.sh
  • packages/system/cozystack-scheduler/examples/demo/09-spread-demo.sh
  • packages/system/cozystack-scheduler/examples/demo/10-spread-cleanup.sh
  • packages/system/cozystack-scheduler/examples/demo/11-node2-demo.sh
  • packages/system/cozystack-scheduler/examples/demo/12-node2-cleanup.sh
  • packages/system/cozystack-scheduler/examples/demo/13-teardown.sh

Comment on lines +49 to +51
log_command "kubectl patch rf $REDIS_NAME -n $NAMESPACE --type=merge -p '{\"spec\":{\"sentinel\":{\"replicas\":2}}}'"

kubectl patch rf redis-"$REDIS_NAME" -n "$NAMESPACE" --type=merge -p '{"spec":{"sentinel":{"replicas":2}}}'

⚠️ Potential issue | 🟡 Minor

Log message doesn't match actual command executed.

The log_command displays kubectl patch rf $REDIS_NAME (which would show redis-demo), but the actual command patches rf redis-"$REDIS_NAME" (which is redis-redis-demo). This creates a misleading demo output.

🔧 Proposed fix to align logged and executed commands
-log_command "kubectl patch rf $REDIS_NAME -n $NAMESPACE --type=merge -p '{\"spec\":{\"sentinel\":{\"replicas\":2}}}'"
+log_command "kubectl patch rf redis-$REDIS_NAME -n $NAMESPACE --type=merge -p '{\"spec\":{\"sentinel\":{\"replicas\":2}}}'"
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/system/cozystack-scheduler/examples/demo/06-onepernode-demo.sh`
around lines 49 - 51, The log message does not match the executed command:
update the call to log_command so it logs the exact command being run (the one
using redis-"$REDIS_NAME") or alternatively change the executed command to match
the logged form; specifically adjust the invocation around the REDIS_NAME usage
so the logged string and the actual kubectl invocation are identical (look for
log_command and the subsequent kubectl patch rf redis-"$REDIS_NAME" line and
make both use the same resource name form).

@kvaps
Member

kvaps commented Mar 20, 2026

Hey @lllamnyp could you fix DCO please?
