[cozystack-scheduler] Publish SchedulingClass demo #2252
Conversation
## What this PR does

Publish the scripts used for running the Cozystack scheduler demo at the community meetup on 19.03.2026.

### Release note

```release-note
[cozystack-scheduler] Add demo scripts to the cozystack-scheduler system packages.
```
Summary of Changes (Gemini Code Assist)

This pull request publishes a series of executable demo scripts designed to illustrate the capabilities of the Cozystack scheduler's SchedulingClass feature.
📝 Walkthrough

Added a complete interactive demo workflow with 13 Bash scripts for Kubernetes SchedulingClass patterns. The demo includes a shared helper library with logging and kubectl utilities, setup scripts for scheduling classes and tenants, and sequential demonstrations of four scheduling strategies (colocate, one-per-node, spread-evenly, pin-to-node2) using Redis deployments, plus cleanup and teardown scripts.
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
Code Review
This pull request introduces a comprehensive set of demo scripts for the Cozystack scheduler. The scripts are well-structured, numbered for sequential execution, and make good use of a shared helper script for logging and common functions, which enhances readability and maintainability. The interactive nature of the scripts, with pauses and clear step-by-step logging, is excellent for a demonstration.
My review focuses on improving the robustness of the scripts and fixing a minor bug. Specifically, I've suggested replacing fixed-time sleep calls with polling loops to make the scripts more reliable on clusters with varying performance. I also found a discrepancy between a logged command and the one being executed, which could cause confusion during the demo.
```sh
log_step "Patching RedisFailover to use 2 sentinels instead of 3"
log_info "With 3 nodes and anti-affinity, 1 storage + 3 sentinels = 4 pods won't fit."
log_command "kubectl patch rf $REDIS_NAME -n $NAMESPACE --type=merge -p '{\"spec\":{\"sentinel\":{\"replicas\":2}}}'"
```
The logged command is different from the command that is actually executed on line 51. The logged command uses `rf $REDIS_NAME`, while the executed command uses `rf redis-"$REDIS_NAME"`. This can be confusing for someone running the demo. The logged command should accurately reflect what is being executed.
```diff
-log_command "kubectl patch rf $REDIS_NAME -n $NAMESPACE --type=merge -p '{\"spec\":{\"sentinel\":{\"replicas\":2}}}'"
+log_command "kubectl patch rf redis-$REDIS_NAME -n $NAMESPACE --type=merge -p '{\"spec\":{\"sentinel\":{\"replicas\":2}}}'"
```
```sh
if [[ -n "$label" ]]; then
  count=$(kubectl get pods -n "$namespace" -l "$label" --no-headers 2>/dev/null | wc -l)
else
  count=$(kubectl get pods -n "$namespace" --no-headers 2>/dev/null | wc -l)
fi
```
The kubectl get pods command is repeated within the if/else block, with the only difference being the -l "$label" argument. This can be refactored to avoid code duplication and improve maintainability by constructing the arguments array dynamically. For example:
```sh
local args=("-n" "$namespace" "--no-headers")
if [[ -n "$label" ]]; then
  args+=("-l" "$label")
fi
count=$(kubectl get pods "${args[@]}" 2>/dev/null | wc -l)
```

This would replace the entire if/else block.
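The args-array pattern above can be exercised without a cluster. In this sketch `printf` stands in for `kubectl get pods`, and the `list_pod_args` function name is invented for illustration:

```shell
#!/usr/bin/env bash
# Hypothetical helper showing the conditional-args pattern: build the
# argument list once, appending the label selector only when set.
list_pod_args() {
  local namespace="$1" label="$2"
  local args=("-n" "$namespace" "--no-headers")
  if [[ -n "$label" ]]; then
    args+=("-l" "$label")
  fi
  # The real helper would run: kubectl get pods "${args[@]}"
  printf '%s\n' "${args[@]}"
}

list_pod_args tenant-a app=redis   # prints 5 args, one per line, ending "-l" "app=redis"
list_pod_args tenant-a ""          # prints only "-n" "tenant-a" "--no-headers"
```

Quoting `"${args[@]}"` keeps each element a separate word even when a label contains spaces, which is the main reason to prefer an array over string concatenation here.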
```sh
log_success "Redis $REDIS_NAME created"

log_info "Waiting for RedisFailover to appear..."
sleep 10
```
Using a fixed `sleep 10` to wait for the RedisFailover resource to be created is brittle and can lead to script failures on slower clusters. It's better to use a polling loop to wait for the resource to become available. For example:

```sh
until kubectl get rf "redis-$REDIS_NAME" -n "$NAMESPACE" &> /dev/null; do
  log_info "Waiting for RedisFailover to appear..."
  sleep 2
done
```
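The `until` loop still spins forever if the resource never appears. A bounded variant can be sketched with a generic probe command in place of the `kubectl get rf` call (the `poll` helper name is invented, not from the PR):

```shell
#!/usr/bin/env bash
# poll TIMEOUT INTERVAL CMD...: rerun the probe command every INTERVAL
# seconds until it succeeds, giving up after roughly TIMEOUT seconds.
poll() {
  local timeout="$1" interval="$2"; shift 2
  local waited=0
  until "$@" &>/dev/null; do
    if (( waited >= timeout )); then
      return 1  # deadline exceeded
    fi
    sleep "$interval"
    waited=$(( waited + interval ))
  done
}

# In the demo this would be used roughly as:
#   poll 60 2 kubectl get rf "redis-$REDIS_NAME" -n "$NAMESPACE" \
#     || log_warning "RedisFailover never appeared"
```

Returning a non-zero status on timeout lets the demo script decide whether to abort or merely warn, instead of hanging silently.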
Actionable comments posted: 1
🧹 Nitpick comments (4)
packages/system/cozystack-scheduler/examples/demo/00-helpers.sh (3)
91-97: Unusual stderr redirection in show_pods.Line 95 uses
2>&1 >&2which redirects stderr to the original stdout, then stdout to stderr. The intent seems to be sending all output to stderr, but2>&1captures stderr before stdout is redirected. This effectively sends stdout to stderr and stderr to the original stdout (likely also stderr in this context, so it works by coincidence).A clearer approach:
♻️ Proposed simplification

```diff
 show_pods() {
   local namespace="$1"
   echo "" >&2
-  kubectl get pods -n "$namespace" -o wide 2>&1 >&2
+  kubectl get pods -n "$namespace" -o wide >&2 2>&1
   echo "" >&2
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/system/cozystack-scheduler/examples/demo/00-helpers.sh` around lines 91 - 97, The show_pods function uses the confusing redirection "2>&1 >&2" for the kubectl get pods call; replace that with a single clear redirect so both stdout and stderr go to stderr (e.g., change the kubectl get pods -n "$namespace" -o wide 2>&1 >&2 invocation to a single redirection such as kubectl get pods -n "$namespace" -o wide >&2) to ensure all output is sent to stderr; update the kubectl line inside the show_pods function accordingly.
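The left-to-right duplication semantics behind that nitpick can be checked without kubectl; this sketch captures stdout of a command group under both redirection orders:

```shell
#!/usr/bin/env bash
# Redirections apply left to right, each one duplicating the CURRENT
# target of the named descriptor.

# '2>&1 >&2': fd2 := current fd1 (the capture pipe), then fd1 := current
# fd2 (also the capture pipe) -- so BOTH streams are captured here.
both=$( { echo out; echo err >&2; } 2>&1 >&2 )

# '>&2 2>&1': fd1 := stderr first, then fd2 := fd1 -- nothing reaches
# the capture pipe. The outer subshell only silences the demo's noise.
none=$( ( { echo out; echo err >&2; } >&2 2>&1 ) 2>/dev/null )
```

After running this, `$both` holds both lines while `$none` is empty, which is exactly the behavioral difference the review comment describes.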
62-89: Consider checking pod readiness, not just existence.

`wait_for_pods` only waits for pods to appear (count > 0), but doesn't verify they're actually running or ready. For demo purposes this may be intentional (you want to show the scheduling quickly), but the function name and comment suggest it waits for pods to "stabilize". If stabilization is desired, consider adding a readiness check:
♻️ Optional enhancement for readiness check

```diff
 while true; do
   local count
   if [[ -n "$label" ]]; then
-    count=$(kubectl get pods -n "$namespace" -l "$label" --no-headers 2>/dev/null | wc -l)
+    count=$(kubectl get pods -n "$namespace" -l "$label" --field-selector=status.phase=Running --no-headers 2>/dev/null | wc -l)
   else
-    count=$(kubectl get pods -n "$namespace" --no-headers 2>/dev/null | wc -l)
+    count=$(kubectl get pods -n "$namespace" --field-selector=status.phase=Running --no-headers 2>/dev/null | wc -l)
   fi
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/system/cozystack-scheduler/examples/demo/00-helpers.sh` around lines 62 - 89, wait_for_pods currently only checks pod existence (count > 0) and not readiness; change it to wait for pods to be Ready/stable by using kubectl wait --for=condition=ready (or polling kubectl get pods -n "$namespace" -l "$label" -o jsonpath='{range .items[*]}{.metadata.name}:{.status.phase}:{range .status.conditions[*]}{.type}={.status};{end}{"\n"}{end}') and ensure all matching pods report Ready=true (or .status.phase==Running) before breaking; keep the same parameters (namespace, label, timeout) and update log messages (log_info/log_warning) and return codes so the function returns success only when pods are ready or returns 1 on timeout.
99-121: Use condition type filter instead of array index to avoid fragile status checks.

The jsonpath `.status.conditions[0].status` assumes the first condition is always the "Ready" condition. Kubernetes resources typically have multiple conditions (e.g., `Initialized`, `Ready`, `ContainersReady`), and order is not guaranteed. Filter by condition type instead:

♻️ Proposed fix to query by condition type

```diff
 while true; do
   local phase
-  phase=$(kubectl get redis "$name" -n "$namespace" -o jsonpath='{.status.conditions[0].status}' 2>/dev/null || echo "")
+  phase=$(kubectl get redis "$name" -n "$namespace" -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}' 2>/dev/null || echo "")
   if [[ "$phase" == "True" ]]; then
     log_success "Redis $name is ready"
     break
   fi
```

This pattern aligns with how conditions are queried throughout the codebase (dashboard, packagesource types, and other operators).
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/system/cozystack-scheduler/examples/demo/00-helpers.sh` around lines 99 - 121, The wait_for_redis_ready function uses the fragile jsonpath .status.conditions[0].status; change the kubectl jsonpath to select the condition by type (e.g. .status.conditions[?(@.type=="Ready")].status) so you explicitly read the Ready condition, keep the same variable name phase and the same truth check, and preserve the timeout/sleep logic and fallback to empty string on kubectl errors.

packages/system/cozystack-scheduler/examples/demo/09-spread-demo.sh (1)
67-69: Consider: Awk column extraction assumes fixed `kubectl` output format.

The `awk '{print $7}'` assumes the NODE column is always the 7th field in `kubectl get pods -o wide` output. While this is standard, using `-o custom-columns` or `-o jsonpath` would be more robust. For a demo script, this is acceptable, but for production tooling, consider:

```sh
kubectl get pods -n "$NAMESPACE" -o jsonpath='{range .items[*]}{.spec.nodeName}{"\n"}{end}' | sort | uniq -c | sort -rn
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/system/cozystack-scheduler/examples/demo/09-spread-demo.sh` around lines 67 - 69, Replace the brittle awk column extraction used in the "kubectl get pods -n \"$NAMESPACE\" -o wide" pipeline (the awk '{print $7}' reference) with a robust jsonpath-based extraction of the pod's node name (use kubectl -o jsonpath to read .spec.nodeName for each item), then pipe the results to sort | uniq -c | sort -rn so the log_step "Pod count per node:" output no longer depends on column positions.
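The `sort | uniq -c | sort -rn` tail of that pipeline can be sanity-checked against a canned node list (the node names below are made up, replacing the live kubectl output):

```shell
#!/usr/bin/env bash
# Count pods per node the same way the demo pipeline does after the
# jsonpath extraction; a literal list stands in for the kubectl output.
nodes=$'node2\nnode1\nnode2\nnode3\nnode2'
counts=$(printf '%s\n' "$nodes" | sort | uniq -c | sort -rn)

# Most loaded node comes first after the reverse numeric sort.
top=$(echo "$counts" | head -1 | awk '{print $1, $2}')
echo "$top"   # → 3 node2
```

Because the counting works on one node name per line, it is indifferent to how wide the original `kubectl get pods -o wide` table happens to be, which is the point of switching away from `awk '{print $7}'`.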
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: e0252ae8-7187-43ea-aaa5-f586d23a6fd3
📒 Files selected for processing (14)
packages/system/cozystack-scheduler/examples/demo/00-helpers.sh
packages/system/cozystack-scheduler/examples/demo/01-scheduling-classes.sh
packages/system/cozystack-scheduler/examples/demo/02-create-tenants.sh
packages/system/cozystack-scheduler/examples/demo/03-colocate-demo.sh
packages/system/cozystack-scheduler/examples/demo/04-colocate-second-redis.sh
packages/system/cozystack-scheduler/examples/demo/05-colocate-cleanup.sh
packages/system/cozystack-scheduler/examples/demo/06-onepernode-demo.sh
packages/system/cozystack-scheduler/examples/demo/07-onepernode-second-redis.sh
packages/system/cozystack-scheduler/examples/demo/08-onepernode-cleanup.sh
packages/system/cozystack-scheduler/examples/demo/09-spread-demo.sh
packages/system/cozystack-scheduler/examples/demo/10-spread-cleanup.sh
packages/system/cozystack-scheduler/examples/demo/11-node2-demo.sh
packages/system/cozystack-scheduler/examples/demo/12-node2-cleanup.sh
packages/system/cozystack-scheduler/examples/demo/13-teardown.sh
```sh
log_command "kubectl patch rf $REDIS_NAME -n $NAMESPACE --type=merge -p '{\"spec\":{\"sentinel\":{\"replicas\":2}}}'"

kubectl patch rf redis-"$REDIS_NAME" -n "$NAMESPACE" --type=merge -p '{"spec":{"sentinel":{"replicas":2}}}'
```
Log message doesn't match actual command executed.
The `log_command` displays `kubectl patch rf $REDIS_NAME` (which would show `redis-demo`), but the actual command patches `rf redis-"$REDIS_NAME"` (which is `redis-redis-demo`). This creates a misleading demo output.
🔧 Proposed fix to align logged and executed commands

```diff
-log_command "kubectl patch rf $REDIS_NAME -n $NAMESPACE --type=merge -p '{\"spec\":{\"sentinel\":{\"replicas\":2}}}'"
+log_command "kubectl patch rf redis-$REDIS_NAME -n $NAMESPACE --type=merge -p '{\"spec\":{\"sentinel\":{\"replicas\":2}}}'"
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```suggestion
log_command "kubectl patch rf redis-$REDIS_NAME -n $NAMESPACE --type=merge -p '{\"spec\":{\"sentinel\":{\"replicas\":2}}}'"
kubectl patch rf redis-"$REDIS_NAME" -n "$NAMESPACE" --type=merge -p '{"spec":{"sentinel":{"replicas":2}}}'
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/system/cozystack-scheduler/examples/demo/06-onepernode-demo.sh`
around lines 49 - 51, The log message does not match the executed command:
update the call to log_command so it logs the exact command being run (the one
using redis-"$REDIS_NAME") or alternatively change the executed command to match
the logged form; specifically adjust the invocation around the REDIS_NAME usage
so the logged string and the actual kubectl invocation are identical (look for
log_command and the subsequent kubectl patch rf redis-"$REDIS_NAME" line and
make both use the same resource name form).
Hey @lllamnyp could you fix DCO please?