Le360 is one of Morocco's most-read digital news platforms. When we were brought in as freelance engineers to modernize their React Native mobile infrastructure, the challenge was clear: the app had to handle unpredictable traffic spikes (breaking news can 10x traffic in under a minute), serve high-resolution editorial photography efficiently, and maintain a readable experience even on 3G connections in rural areas.
“A news app lives and dies by the first scroll. Every millisecond of latency between open and readable headline is a reader you will not get back.”
The first architectural decision was aggressive caching at multiple layers. API responses for the article feed were cached client-side using a stale-while-revalidate pattern — the app always shows something immediately, then quietly updates. We implemented an offline reading queue using AsyncStorage with a content size limit and a FIFO eviction policy. Readers could save articles on Wi-Fi and read them on the train with zero connectivity.
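The FIFO eviction policy can be sketched roughly like this, using a plain in-memory store in place of AsyncStorage so the logic stands on its own; the `OfflineQueue` class and its field names are illustrative, not Le360's actual code:

```typescript
interface SavedArticle {
  id: string;
  html: string; // serialized article body
}

// Size-bounded offline queue with FIFO eviction. A Map preserves
// insertion order, so the first entry is always the oldest.
class OfflineQueue {
  private articles = new Map<string, SavedArticle>();
  private totalBytes = 0;

  constructor(private readonly maxBytes: number) {}

  save(article: SavedArticle): void {
    const size = article.html.length;
    // Evict the oldest saved articles until the new one fits.
    for (const [id, oldest] of this.articles) {
      if (this.totalBytes + size <= this.maxBytes) break;
      this.articles.delete(id);
      this.totalBytes -= oldest.html.length;
    }
    this.articles.set(article.id, article);
    this.totalBytes += size;
  }

  get(id: string): SavedArticle | undefined {
    return this.articles.get(id);
  }

  get size(): number {
    return this.totalBytes;
  }
}
```

In the real app the store would persist through AsyncStorage rather than a Map, but the eviction decision (oldest-first, bounded by total content size) is the part that matters.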
Image delivery was the biggest performance bottleneck. Editorial photography at Le360 averages 1.2 MB per image. Multiplied by 20 articles in the feed view, that is 24 MB on first render — fatal on a slow connection. We integrated a CDN-backed responsive image pipeline: small thumbnails (40 KB) loaded immediately for the list view, full images loaded only when an article was opened. Perceived feed load time dropped by 70%.
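The thumbnail/full-size split comes down to a small URL builder that asks the CDN for the right variant; the `w` and `q` query parameters below are assumptions for illustration, not the actual CDN contract:

```typescript
type ImageVariant = 'thumbnail' | 'full';

// Builds a CDN image URL for the requested variant. Width (`w`) and
// quality (`q`) values here are illustrative placeholders.
function imageUrl(baseUrl: string, variant: ImageVariant): string {
  const params =
    variant === 'thumbnail'
      ? { w: 160, q: 60 } // small, heavily compressed, for the feed list
      : { w: 1080, q: 85 }; // full size, for the opened article
  const sep = baseUrl.includes('?') ? '&' : '?';
  return `${baseUrl}${sep}w=${params.w}&q=${params.q}`;
}
```

The key design point is that the feed view never requests anything but the thumbnail variant, so the 24 MB worst case simply cannot occur on first render.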
Breaking news required a different pattern entirely. A webhook from the CMS triggered a push notification via Firebase, but also invalidated the client-side cache for the top stories feed. We implemented a lightweight long-polling fallback for devices where push notifications were disabled — checking every 90 seconds while the app was in the foreground. During a major political story, the app served 340,000 concurrent readers with no degradation in feed response time.
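The foreground-only polling fallback can be sketched as a small wrapper around a timer; the class name, the injected `pollFn`, and the app-state handling are illustrative, not the production code:

```typescript
type AppState = 'foreground' | 'background';

// Polls for fresh top stories only while the app is foregrounded.
// The actual fetch is injected so the scheduling logic stays isolated.
class TopStoriesPoller {
  private timer: ReturnType<typeof setInterval> | null = null;

  constructor(
    private readonly pollFn: () => Promise<void>,
    private readonly intervalMs = 90_000, // 90 s between checks
  ) {}

  onAppStateChange(state: AppState): void {
    if (state === 'foreground' && this.timer === null) {
      this.timer = setInterval(() => void this.pollFn(), this.intervalMs);
    } else if (state === 'background' && this.timer !== null) {
      clearInterval(this.timer); // never poll in the background
      this.timer = null;
    }
  }

  get isPolling(): boolean {
    return this.timer !== null;
  }
}
```

In React Native this would be wired to the `AppState` change listener; stopping the timer on background is what keeps the fallback from draining battery or hammering the API.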
The final piece was analytics without performance cost. The existing implementation fired a network request on every article impression — creating hundreds of requests per session. We batched impression events in a local queue, flushed every 30 seconds or when the app backgrounded, and used a compressed binary format for the payload. Analytics data completeness actually improved (fewer dropped events) while network usage fell by 90%.
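The batching approach can be sketched as follows, with the transport injected so the queue logic stays testable; the names are illustrative and the binary compression step is omitted:

```typescript
interface Impression {
  articleId: string;
  at: number; // epoch ms
}

// Accumulates impression events locally and ships them in one batch.
// flush() is invoked by a 30 s timer and by the app-background handler.
class ImpressionBatcher {
  private queue: Impression[] = [];

  constructor(private readonly send: (batch: Impression[]) => void) {}

  record(articleId: string): void {
    this.queue.push({ articleId, at: Date.now() });
  }

  // Returns how many events were flushed; skips the network entirely
  // when there is nothing to send.
  flush(): number {
    if (this.queue.length === 0) return 0;
    const batch = this.queue;
    this.queue = [];
    this.send(batch); // one request instead of hundreds
    return batch.length;
  }
}
```

Flushing on background is the important edge case: without it, the last batch of a session would be lost, which is exactly the dropped-events problem the batching fixed.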