[WIP] EC-1710: Acceptance test profiling instrumentation #3229
st3penta wants to merge 2 commits into conforma:main
Conversation
Adds a profile package to the acceptance tests that tracks per-scenario wall-clock timing, container startup timing by type (registry, git, wiremock), Kind cluster creation sub-phases, and peak container concurrency. The profiling report is printed to stderr at the end of the test run, making it visible in GitHub Actions logs. This is temporary instrumentation for the EC-1710 performance spike. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Review Summary by Qodo: Add acceptance test profiling instrumentation for EC-1710
Walkthrough

Description:

- Add profiling instrumentation to acceptance tests for performance analysis
- Track per-scenario wall-clock timing to identify the slowest test scenarios
- Instrument container startup statistics (registry, git, wiremock) with count, total, avg, min, max metrics
- Break down Kind cluster creation into sub-phases (node creation, config, CLI image, task bundle)
- Monitor peak container concurrency and output the profiling report to stderr

Diagram:

```mermaid
flowchart LR
    Init["profile.Init()"]
    ScenarioStart["ScenarioStart()"]
    ContainerTiming["BeginContainer()"]
    PhaseTiming["Begin()"]
    ScenarioEnd["ScenarioEnd()"]
    Report["Report()"]
    Init --> ScenarioStart
    ScenarioStart --> ContainerTiming
    ScenarioStart --> PhaseTiming
    ContainerTiming --> ScenarioEnd
    PhaseTiming --> ScenarioEnd
    ScenarioEnd --> Report
    Report -->|Output| Stderr["stderr report"]
    Report -->|Output| JsonL["events.jsonl"]
    Report -->|Output| ReportTxt["report.txt"]
```
File Changes:

1. acceptance/profile/profile.go
Code Review by Qodo
```go
// BeginContainer starts timing a container creation. Call the returned function when ready.
func BeginContainer(containerType string) func() {
	if !enabled {
		return func() {}
	}
	start := time.Now()
	return func() {
		dur := time.Since(start)
		mu.Lock()
		stats, ok := containerTypes[containerType]
		if !ok {
			stats = &containerStats{Min: dur}
			containerTypes[containerType] = stats
		}
		stats.Count++
		stats.Total += dur
		if dur < stats.Min {
			stats.Min = dur
		}
		if dur > stats.Max {
			stats.Max = dur
		}
		mu.Unlock()

		current := activeContainers.Add(1)
		mu.Lock()
		if current > peakContainers {
			peakContainers = current
		}
		mu.Unlock()

		logEvent(map[string]any{
			"event": "container", "type": containerType,
			"duration_ms": dur.Milliseconds(), "ts": time.Now().Format(time.RFC3339),
		})
	}
}
```
1. Container concurrency miscount 🐞 Bug ≡ Correctness
profile.BeginContainer increments activeContainers but nothing ever decrements it, so the report’s “Peak concurrent containers”/“Final active containers” reflect cumulative startup completions (and can include failed startups) rather than true active container concurrency.
Agent Prompt
## Issue description
`BeginContainer` increments `activeContainers` but the value is never decremented, yet the report labels it as “active”/“concurrent” containers. Error paths also call `endContainer()` which increments the counters even when `GenericContainer` fails.
## Issue Context
The current code only has a “startup finished” callback and does not have any hook for container stop/teardown, so it cannot accurately represent *active lifetime* container concurrency unless you either (a) add a decrement hook on teardown, or (b) rename/report a different metric (e.g., “containers started” or “concurrent container startups”).
## Fix Focus Areas
- acceptance/profile/profile.go[124-160]
- acceptance/profile/profile.go[225-257]
- acceptance/git/git.go[183-197]
- acceptance/registry/registry.go[112-126]
- acceptance/wiremock/wiremock.go[221-235]
## Suggested fix approach
- Stop counting failed startups: only record container stats and increment any counters after a successful startup.
- Either:
- **Option A (recommended for correctness):** change the API to support both success/failure and teardown:
- `BeginContainer(type) (finish func(success bool), stopped func())`
- `finish(true)` records duration and increments an `activeContainers` gauge; `stopped()` decrements.
- **Option B:** if no teardown hook exists, rename the metric to match behavior (e.g., `containersStarted`) and remove “active/concurrent” wording from the report.
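Option A from the suggested fix could look like the following self-contained sketch. The split into a `finish(success bool)` callback and a separate `stopped()` teardown hook is the reviewer's proposal; the variable names and the simplified stats are illustrative, not the PR's code:

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
	"time"
)

var (
	mu               sync.Mutex
	activeContainers atomic.Int64 // gauge: currently running containers
	peakContainers   int64        // high-water mark of the gauge
	startedByType    = map[string]int{}
)

// BeginContainer times a container startup. finish(success) records the
// startup only when it succeeded and increments the active gauge;
// stopped() decrements the gauge on container teardown.
func BeginContainer(containerType string) (finish func(success bool), stopped func()) {
	start := time.Now()
	finish = func(success bool) {
		if !success {
			return // failed startups are not counted
		}
		dur := time.Since(start)
		_ = dur // a real implementation would record duration stats here
		mu.Lock()
		startedByType[containerType]++
		mu.Unlock()
		current := activeContainers.Add(1)
		mu.Lock()
		if current > peakContainers {
			peakContainers = current
		}
		mu.Unlock()
	}
	stopped = func() {
		activeContainers.Add(-1)
	}
	return finish, stopped
}

func main() {
	// Two containers overlap, then both stop: peak is 2, final active is 0.
	f1, s1 := BeginContainer("registry")
	f1(true)
	f2, s2 := BeginContainer("git")
	f2(true)
	s1()
	s2()
	fmt.Println("peak:", peakContainers, "active:", activeContainers.Load())
}
```

With the decrement hook wired into each container's terminate/cleanup path, "peak concurrent containers" becomes a true concurrency high-water mark instead of a cumulative startup count.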
Codecov Report: ✅ All modified and coverable lines are covered by tests.
The profiling report written to stderr by go test was not captured in GitHub Actions job logs. Add a step to print the report file and a step to upload the profiling directory as an artifact so the data is always accessible after CI runs. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
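The CI change described above could look something like the following workflow fragment. The step names, report path, and artifact name are assumptions for illustration; the actual workflow file in the repo may differ (`actions/upload-artifact` is the standard GitHub action for this):

```yaml
# Hypothetical steps; paths and names are illustrative.
- name: Print profiling report
  if: always()
  run: cat acceptance/profiling/report.txt || true

- name: Upload profiling data
  if: always()
  uses: actions/upload-artifact@v4
  with:
    name: acceptance-profiling
    path: acceptance/profiling/
```

Running both steps with `if: always()` ensures the profiling data survives even when the acceptance tests fail, which is when it is most useful.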
Summary
What this does
Adds a `profile` package to the acceptance tests that instruments the test suite to collect per-scenario wall-clock timings, container startup statistics, Kind cluster creation sub-phase timings, and peak container concurrency. The profiling report is printed to stderr at the end of the test run, so it will be visible in the GitHub Actions job logs for the acceptance test runs.
Files changed
- acceptance/profile/profile.go
- acceptance/acceptance_test.go
- acceptance/kubernetes/kind/kind.go
- acceptance/registry/registry.go
- acceptance/git/git.go
- acceptance/wiremock/wiremock.go

Test plan
🤖 Generated with Claude Code