SCA Auto-Resolve Latency & SCA Scan Processing Delays

Minor incident · Affected components: API (EU Environment), Application/UI (EU Environment), API (US Environment), Application/UI (US Environment)
2025-12-08 23:43 IST · 21 hours, 28 minutes

Updates

Post-mortem

Summary
Between December 6 and December 10, 2025, customers experienced delays in SCA scan processing and failures in the Auto-Resolve feature. This resulted in longer scan times and in violations remaining Open longer than expected for some users. The issue began on December 6, 2025, at 09:00 IST, and was caused by a combination of increased system load and delayed processing. The situation was resolved after isolating high-load customer traffic and scaling system resources. Normal service was restored, and follow-up improvements are underway.

Key Timeline (IST)
December 6, 2025, 09:00 IST – Initial slowdown in SCA scan performance observed
December 6, 2025, 15:00 IST – Significant processing delays detected
December 6, 2025, 15:30 IST – System resources increased to address delays
December 6, 2025, 16:00–18:30 IST – Customers initiate additional scans, increasing load
December 6, 2025, 19:00 IST – Non-essential automated processes disabled to reduce load
December 6, 2025, 21:00 IST – Further scaling of system resources
December 9, 2025 – Additional investigation identifies a high-impact configuration
December 10, 2025 – High-impact configuration disabled; system performance returns to normal

Root Cause
The incident was triggered by a combination of increased demand from a specific customer configuration and delayed processing in the update workflow. This created a feedback loop: as delays became visible, customers initiated additional scans, which further increased system load and deepened the backlog.
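
The feedback loop can be illustrated with a toy model: once queue lag becomes visible, customers submit extra rescans, and those rescans deepen the very backlog that made the lag visible. This is a hypothetical sketch; all numbers and variable names below are arbitrary assumptions, not measurements from the incident.

```python
# Toy model of the feedback loop (all numbers are arbitrary assumptions).
capacity_per_hour = 100      # jobs the pipeline can finish per hour
baseline_demand = 90         # normal hourly scan volume
backlog = 400                # extra load attributed to the high-impact configuration

for hour in range(1, 7):
    visible_delay = backlog / capacity_per_hour     # hours of queue lag customers can see
    rescans = 30 if visible_delay > 1 else 0        # customers retry once the lag is noticeable
    backlog += baseline_demand + rescans - capacity_per_hour
    print(f"hour {hour}: backlog={backlog} jobs, lag≈{visible_delay:.1f}h")

# Without the retry-driven rescans the backlog would shrink by 10 jobs per hour;
# with them it keeps growing, which is the feedback loop described above.
```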

Actions Taken

  1. Disabled non-essential automated processes to reduce system load
  2. Increased system resource allocation
  3. Isolated high-load customer traffic to prevent further impact (see the sketch after this list)
  4. Updated operational procedures to improve response to similar issues
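
As a rough illustration of action 3, the sketch below routes incoming jobs by customer/tenant ID so that a flagged high-load configuration drains from its own queue instead of sitting in front of everyone else's work. All names here (route_job, HIGH_LOAD_TENANTS, the job fields) are illustrative assumptions, not Cycode internals.

```python
from collections import deque

# Hypothetical: tenants/configurations flagged as high-load during the incident.
HIGH_LOAD_TENANTS = {"tenant-42"}

shared_queue: deque = deque()    # normal traffic
isolated_queue: deque = deque()  # dedicated queue for isolated tenants


def route_job(job: dict) -> None:
    """Send jobs from high-load tenants to a dedicated queue so their
    backlog cannot delay other customers' scans and auto-resolve events."""
    if job["tenant_id"] in HIGH_LOAD_TENANTS:
        isolated_queue.append(job)
    else:
        shared_queue.append(job)


# A burst from the isolated tenant no longer queues ahead of other customers.
route_job({"tenant_id": "tenant-42", "type": "sca_scan"})
route_job({"tenant_id": "tenant-7", "type": "auto_resolve"})
print(len(shared_queue), len(isolated_queue))  # -> 1 1
```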

Action Items

  1. Improve monitoring to detect similar delays earlier
  2. Add safeguards to prevent disproportionate resource usage by individual configurations (see the sketch after this list)
  3. Enhance system stability under high load conditions
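
One possible shape for the second action item is a per-configuration token bucket, so no single configuration can consume a disproportionate share of scan capacity. This is a sketch under assumed names and limits (TokenBucket, admit_scan, the rate and capacity values), not the actual safeguard.

```python
import time
from dataclasses import dataclass, field


@dataclass
class TokenBucket:
    """Per-configuration rate limiter: `rate` scans/second, bursting up to `capacity`."""
    rate: float = 0.5                      # assumed refill rate
    capacity: float = 10.0                 # assumed burst size
    tokens: float = 10.0
    last_refill: float = field(default_factory=time.monotonic)

    def allow(self) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last_refill) * self.rate)
        self.last_refill = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False


buckets: dict[str, TokenBucket] = {}       # one bucket per customer configuration

def admit_scan(config_id: str) -> bool:
    """Return True if the configuration is within its budget; otherwise defer the scan."""
    return buckets.setdefault(config_id, TokenBucket()).allow()
```
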
January 8, 2026 · 14:05 IST
Resolved

This issue is resolved. The queue has cleared, and both SCA scans and auto-resolve processing times have returned to normal and are significantly faster than during the incident. We will share an RCA and next action items in the coming days as an attachment to this incident.

December 9, 2025 · 21:10 IST
Monitoring

We’ve identified the root cause of the SCA auto-resolve and scan latency. Queue lag is steadily decreasing and processing times are improving. We are continuing to monitor closely and will provide further updates as needed.

December 9, 2025 · 13:42 IST
Issue

Summary
We are currently experiencing increased latency in two SCA areas:

  • Auto-resolve processing after customers push a fix for a CVE/violation.
  • Manual SCA scans, which may also take longer than expected to complete and reflect updated violation status.

As a result, customers may continue seeing violations as Open for a period of time even after the fixing commit is pushed and/or a rescan is triggered.

Customer Impact

  • Violations that should transition to Resolved after a fix may remain Open temporarily.
  • Triggering a new scan may not immediately update the violation state.
  • No data loss is expected; this is a delay in processing and state updates.

What’s likely happening (flow)

The typical flow during this incident looks like:

  1. A developer pushes a fix for a CVE (dependency update / patch commit).
  2. The developer still sees the violation as Open in Cycode.
  3. They reach out for confirmation.
  4. Cycode users check and confirm the fix was applied, but the violation still shows Open.
  5. Auto-resolve logic is triggered by the push event, but it is processed with delay.
  6. The developer then triggers another SCA scan to force resolution.
  7. That scan also queues behind the same pipeline, so results and state updates still take time.

Root Cause (preliminary)

Auto-resolve events and SCA scans are currently being processed through the same execution queue/pipeline, which is experiencing higher-than-normal load.
This shared queue is creating a backlog and delaying processing for both:

  • Auto-resolve-from-push events
  • On-demand SCA scans
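
To make this concrete: when both event types are drained from one FIFO queue by the same worker pool, a burst of on-demand scans delays auto-resolve events even though the resolution work itself is cheap. The snippet below is a deliberately simplified, assumed model of that shared pipeline, not the actual service.

```python
import queue

# A single FIFO shared by both workloads -- the preliminary root cause.
pipeline: queue.Queue = queue.Queue()

# A burst of on-demand SCA scans arrives first...
for i in range(1000):
    pipeline.put(("sca_scan", f"repo-{i}"))

# ...so an auto-resolve event from a fixing commit queues behind all of them.
pipeline.put(("auto_resolve", "violation fixed by push"))

# A FIFO worker must drain every queued scan before the resolution event,
# which is why violations can stay Open long after the fix is pushed.
while not pipeline.empty():
    kind, payload = pipeline.get()
print(kind, payload)  # the auto-resolve event is processed last
```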

Mitigation / Current Actions

  • Our SCA team is actively working to reduce queue latency and restore normal processing times.
  • We are monitoring backlog depth and scan/auto-resolve throughput.
  • Prioritization is being applied to speed up resolution events.
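
One plausible shape for that prioritization is a priority queue in which resolution events jump ahead of queued scans while the backlog drains. The priorities, names, and structure below are assumptions for illustration only.

```python
import heapq
import itertools

PRIORITY = {"auto_resolve": 0, "sca_scan": 1}   # lower number = served first (assumed values)

_counter = itertools.count()   # tie-breaker preserves FIFO order within a priority level
backlog: list = []


def enqueue(kind: str, payload: str) -> None:
    heapq.heappush(backlog, (PRIORITY[kind], next(_counter), kind, payload))


def dequeue() -> tuple[str, str]:
    _, _, kind, payload = heapq.heappop(backlog)
    return kind, payload


# Even with scans already queued, the resolution event is processed next.
enqueue("sca_scan", "repo-1")
enqueue("sca_scan", "repo-2")
enqueue("auto_resolve", "violation fixed by push")
print(dequeue())  # -> ('auto_resolve', 'violation fixed by push')
```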

Workaround (if needed)

  • No action is required for correctness; the system will eventually reflect the resolved state.
  • If you re-scan, please expect delays while the backlog clears.

Next Update

We’ll provide another update once latency returns to normal or if timelines change.

December 8, 2025 · 23:43 IST
