Problem
Scaled from 1 → 3 replicas. Started getting 3 duplicate polling messages every 30 seconds.
Root cause
time.Ticker fires relative to each pod’s start time, not a shared wall clock. With staggered rollout, Pod 1 fires at :26.544, Pod 2 at :29.429, Pod 3 at :32.391 — three different timestamps → three different Redis keys → all three SETNX calls succeed.
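For reference, a minimal sketch of the failing pattern (names like `taskType` are hypothetical, not the actual service code): each replica keys off its own tick time, so no two replicas ever compute the same key and every pod's SETNX succeeds.

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	taskType := "poll-orders" // hypothetical task name

	ticker := time.NewTicker(30 * time.Second)
	defer ticker.Stop()

	for t := range ticker.C {
		// Ticks are offset by this pod's start time, so the raw-timestamp
		// key is unique per pod and never collides across replicas.
		key := fmt.Sprintf("gocpi:publish:%s:%d", taskType, t.UnixMilli())
		fmt.Println("would SETNX on", key)
	}
}
```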
Fix: align timestamp to interval boundary
```go
alignedTimestamp := (timestamp / intervalMs) * intervalMs
key := fmt.Sprintf("gocpi:publish:%s:%d", taskType, alignedTimestamp)
```

With a 30s interval, all three pods produce the same key regardless of when they fire within the window:
```
Pod 1: 1768965726544 → 1768965720000
Pod 2: 1768965729429 → 1768965720000 ← same
Pod 3: 1768965732391 → 1768965720000 ← same
```
The first pod to SETNX wins and publishes. The others see that the key already exists and skip.
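Putting it together, a sketch of the whole gate using go-redis v9's `SetNX` (the client setup, sentinel value, and `tryAcquire` helper are assumptions for illustration, not the post's actual code):

```go
package main

import (
	"context"
	"fmt"
	"time"

	"github.com/redis/go-redis/v9"
)

const intervalMs = 30_000 // 30s polling interval

// tryAcquire aligns the fire time to the interval boundary and races SETNX.
// Only the pod whose SETNX succeeds should publish. (Hypothetical helper.)
func tryAcquire(ctx context.Context, rdb *redis.Client, taskType string, now time.Time) (bool, error) {
	aligned := (now.UnixMilli() / intervalMs) * intervalMs
	key := fmt.Sprintf("gocpi:publish:%s:%d", taskType, aligned)
	// TTL = 2x interval so the key outlives clock skew, then self-deletes.
	return rdb.SetNX(ctx, key, "1", 2*intervalMs*time.Millisecond).Result()
}

func main() {
	rdb := redis.NewClient(&redis.Options{Addr: "localhost:6379"})
	won, err := tryAcquire(context.Background(), rdb, "poll-orders", time.Now())
	if err != nil {
		panic(err)
	}
	if won {
		fmt.Println("won the window: publish")
	} else {
		fmt.Println("another pod already published this window: skip")
	}
}
```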
TTL = 2× interval
```go
keyTTL = intervalMs * 2 // 30s interval → 60s TTL
```

Safety buffer for clock skew between pods. Auto-expires, no manual cleanup.
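A small worked example of the expiry window (constant names are illustrative), reusing the aligned boundary from the example above:

```go
package main

import (
	"fmt"
	"time"
)

const (
	pollInterval = 30 * time.Second
	keyTTL       = 2 * pollInterval // 60s
)

func main() {
	// A key set at a window boundary lives through that window plus one
	// more, so a pod skewed by up to a full interval still finds it set.
	windowStart := time.UnixMilli(1768965720000) // aligned key from above
	fmt.Println("key set:    ", windowStart.UTC())
	fmt.Println("key expires:", windowStart.Add(keyTTL).UTC())
}
```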
See also
- context-independent-cleanup — use context.Background() so unlock ops survive request cancellation
- parallel-polling-per-entity — publishing one message per entity after dedup