
Feature Flags and Remote Configuration for Mobile Applications

Explore how to design and implement mobile feature flags and remote configurations to safely control app behavior and releases. Understand server and client SDK roles, delivery methods like polling and push, and how to ensure offline resilience and consistent user experience across app versions. This lesson prepares you to architect systems that decouple deployment from feature activation while maintaining low latency and handling mobile-specific constraints.

Mobile software distribution differs from web deployment because binary releases cannot be rolled back. A bug in a distributed version persists until the user installs an update. Feature flags (server-controlled switches, boolean or multivariate, that govern which code paths execute at runtime) and remote configuration (key-value pairs that update app content, UI styles, or behavior, such as a banner color or a threshold value, without redeploying the app) mitigate this risk by decoupling code deployment from feature activation. This allows teams to ship code in a dormant state and enable it via server-side configuration without an app store submission.

This lesson covers the architecture of a high-scale feature flag service, focusing on SDK integration, delivery protocols, and mobile-specific constraints like intermittent connectivity and version fragmentation.

System design for a feature flag service

A production feature flag system is composed of three cooperating layers, each with a distinct responsibility in the request life cycle.

  • Management dashboard: This is where product managers and engineers define flags, configure targeting rules such as user segments or geographic regions, and adjust rollout percentages using slider controls. It writes flag definitions and rules into the persistent data store.

  • Configuration server cluster: A set of stateless API nodes sits behind a load balancer. Stateless means each client request is treated as an independent, isolated transaction; the server stores no client-specific context or session data between requests. When a mobile client requests its flag set, the server reads the user context from the request, evaluates all targeting rules against a fast data store like Redis or DynamoDB, and returns the resolved configuration as a JSON payload. Because these nodes are stateless, horizontal scaling is straightforward.

  • Client SDK: A lightweight library embedded in the mobile app handles fetching, caching, and exposing flag values to application code. It is the component that application developers interact with directly.

SDK life cycle and evaluation models

The client SDK follows a well-defined life cycle. It initializes with an API key and a user context object containing attributes like user ID, app version, OS, and locale. On initialization, it performs an initial fetch of the full flag set from the configuration server. The response is persisted to a local cache, using SharedPreferences on Android or UserDefaults on iOS, so that subsequent reads are synchronous and do not block the UI thread. After the initial fetch, the SDK either polls the server on a fixed interval or listens for updates via server-sent events.
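This life cycle can be sketched in a few lines. The following is a minimal illustration, not a real SDK: the class name `FlagClient`, the injected `fetch_fn`, and the dictionary standing in for SharedPreferences/UserDefaults are all hypothetical.

```python
class FlagClient:
    """Minimal sketch of a client SDK: fetch once, persist, read synchronously."""

    def __init__(self, api_key, user_context, fetch_fn, cache):
        self.api_key = api_key
        self.user_context = user_context  # user ID, app version, OS, locale, ...
        self._fetch_fn = fetch_fn         # network call, injected for testability
        self._cache = cache               # stands in for SharedPreferences/UserDefaults
        self._flags = dict(cache)         # warm start from previously persisted values

    def refresh(self):
        """Fetch the full flag set; on failure, keep serving cached values."""
        try:
            fresh = self._fetch_fn(self.api_key, self.user_context)
            self._flags = fresh
            self._cache.clear()
            self._cache.update(fresh)     # persist for the next launch
        except OSError:
            pass                          # offline: the cache remains authoritative

    def get(self, key, default=False):
        """Synchronous read; never touches the network or blocks the UI thread."""
        return self._flags.get(key, default)
```

Note that `refresh()` swallows network errors by design: a failed poll must never surface as an application error, because the cached configuration is still valid.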

Two evaluation models determine where flag resolution happens. In server-side evaluation, the configuration server resolves all flag values for a given user context and returns only the final computed results, keeping targeting rules hidden from the client: the SDK sends the user context to the server and receives pre-resolved values. This keeps targeting logic secret and simplifies the SDK. In client-side evaluation, the SDK downloads all rules and evaluates them locally, which reduces latency and works fully offline but exposes targeting logic to anyone who inspects the app binary.

Practical tip: For most mobile applications, server-side evaluation is the safer default. It prevents reverse engineering of rollout rules and reduces SDK complexity at the cost of requiring network access for fresh evaluations.

Flag evaluation must be completed in under 10 milliseconds because it sits in the critical rendering path of mobile screens. The configuration server must also handle millions of concurrent clients with bursty traffic patterns, especially during peak hours when users simultaneously launch the app.

The following diagram illustrates how these three layers interact during a typical flag resolution request.

[Diagram: feature flag system architecture with management dashboard, configuration server cluster, and mobile client SDK]

With the overall architecture established, the next critical decision is how configuration updates travel from the server to millions of mobile devices.

Real-time vs. cached configuration delivery

Mobile networks are unreliable and metered. A user might be on a fast Wi-Fi connection one moment and in a subway tunnel the next. The delivery mechanism for configuration updates must balance freshness against bandwidth consumption, battery drain, and offline resilience.

Three strategies exist, each with increasing real-time capability.

  • Periodic polling: The SDK fetches the full configuration on a fixed interval, such as every 15 minutes. This approach is simple to implement and produces a predictable load on the server. However, it introduces staleness windows. If a critical kill switch is activated, up to 15 minutes may pass before all clients receive the update.

  • Server-sent events (SSE): A persistent HTTP connection streams incremental updates from the server to the client in near real time. Propagation latency drops to sub-second, but maintaining millions of open connections demands significant infrastructure. A pub/sub layer (a messaging pattern in which publishers broadcast to a channel and all subscribed consumers receive the message, enabling efficient fan-out to many clients) built on Redis Pub/Sub or Kafka fans out changes to the edge nodes that hold client connections.

  • Silent push notifications: A silent push sent through APNs or FCM tells the SDK to fetch a fresh configuration on demand. This leverages existing push infrastructure and avoids persistent connections, but delivery is best effort. The OS may throttle or delay silent pushes to conserve battery.

Attention: Relying solely on SSE for kill switch propagation is risky on mobile. iOS and Android aggressively terminate background connections to conserve battery, meaning your persistent connection may not be alive when you need it most.

A production-grade system combines these strategies. Polling serves as the reliable baseline, with silent push notifications or an SSE layer on top for critical flag changes like kill switches. The SDK always reads from its local cache first, guaranteeing zero-latency flag resolution at the application layer regardless of network state.

The following table summarizes the trade-offs across these delivery strategies.

Comparison of real-time data delivery strategies

| Delivery strategy | Freshness latency | Infrastructure complexity | Battery/bandwidth impact | Best use case |
| --- | --- | --- | --- | --- |
| Periodic polling | Seconds to minutes (depending on interval) | Low (stateless HTTP) | Moderate (repeated full fetches) | General config updates with tolerance for staleness |
| Server-sent events (SSE) | Sub-second | High (persistent connections, pub/sub fan-out) | Higher (open connection, keep-alives) | Real-time kill switches and instant rollouts |
| Silent push notification | Seconds (best-effort) | Medium (relies on APNs/FCM) | Low (on-demand fetch only) | Triggering urgent config refreshes |
| Hybrid (polling + push) | Seconds for critical, minutes for routine | Medium-high | Balanced | Production-grade mobile feature flag systems |

With a delivery mechanism in place, the next question is how to control which users see a new feature and how quickly.

Designing safe rollout mechanisms

Exposing a feature to 100% of mobile users simultaneously is a high-risk operation. Mobile telemetry has higher latency than server-side metrics, and sampling gaps mean crash spikes may not surface for minutes. Gradual rollout mechanisms reduce this risk by controlling exposure incrementally.

Percentage-based rollouts and targeting

In a percentage-based rollout, the configuration server hashes the user ID combined with the flag key to produce a deterministic bucket between 0 and 99; because the hash is stable, the same user always lands in the same bucket across evaluations. If the user's bucket falls below the current rollout percentage, they receive the new feature. A user assigned to bucket 15 during a 10% rollout will still see the feature when the rollout expands to 50%, because 15 remains below 50. This deterministic assignment prevents flickering, where a user sees a feature in one session and loses it in the next.
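The bucketing described above fits in a few lines. This sketch uses SHA-256 for illustration; the specific hash function and the `flag_key:user_id` seed format are assumptions, but any stable hash with uniform output works the same way.

```python
import hashlib

def bucket_for(user_id: str, flag_key: str) -> int:
    """Deterministic bucket in 0-99 from hashing user ID + flag key.

    Including the flag key in the seed decorrelates buckets across flags,
    so a user in bucket 3 for one flag is not in bucket 3 for every flag.
    """
    digest = hashlib.sha256(f"{flag_key}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % 100

def is_enabled(user_id: str, flag_key: str, rollout_percent: int) -> bool:
    """Bucket below the rollout percentage means the user gets the feature."""
    return bucket_for(user_id, flag_key) < rollout_percent
```

Because the bucket never changes, expanding the rollout from 10% to 50% only adds users (buckets 10-49); everyone already exposed stays exposed.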

Targeting rules add another dimension. Flags can be scoped by attributes such as app version, OS version, locale, subscription tier, or custom cohorts. A canary release might target only internal testers or beta users before any public exposure. The server evaluates these rules in priority order, and the first matching rule determines the flag value.
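First-match rule evaluation can be sketched as a simple loop over a priority-ordered list. The rule shape (`match` conditions, `value`) and the cohort names below are hypothetical; real systems support richer operators than the exact-match shown here.

```python
def evaluate_rules(rules, context, default):
    """Rules are pre-sorted by priority; the first matching rule wins."""
    for rule in rules:
        conditions = rule["match"]
        # Exact-match only, for illustration; production systems also
        # support operators like >=, in-list, and percentage rollouts.
        if all(context.get(attr) == value for attr, value in conditions.items()):
            return rule["value"]
    return default

# Hypothetical rule set: internal testers first, then a premium US cohort.
RULES = [
    {"match": {"cohort": "internal"}, "value": True},
    {"match": {"locale": "en_US", "tier": "premium"}, "value": True},
]
```

Ordering matters: placing the canary rule (internal testers) first guarantees it is honored even if a broader rule would also match.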

Kill switches and experimentation

A kill switch is a flag override that forces a feature off for all users, regardless of any targeting rules. When activated, it takes the highest priority in the evaluation chain. The latency requirement for kill switch propagation, typically under 10 seconds, justifies the hybrid delivery model discussed earlier.
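In code, the override is a guard clause ahead of the normal rule evaluation. The flag-definition shape (`kill_switch`, `rules`, `default` keys) is a hypothetical sketch of that priority ordering.

```python
def resolve_flag(flag, context):
    """Kill switch sits at the top of the evaluation chain."""
    if flag.get("kill_switch"):
        return False  # overrides every targeting rule, for every user
    for rule in flag.get("rules", []):
        if all(context.get(a) == v for a, v in rule["match"].items()):
            return rule["value"]
    return flag.get("default", False)
```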

Feature flags also extend naturally into A/B testing. The same bucketing mechanism assigns users to control and variant groups. The SDK logs exposure events (telemetry records indicating that a specific user was shown a particular variant of a feature) each time a flag is evaluated, and these events feed into an analytics pipeline for statistical analysis of the experiment's impact.

Note: Deterministic bucketing is what makes gradual rollouts safe. Without it, a user could oscillate between the old and new experience across sessions, corrupting both the user experience and your experiment data.

The cache-first SDK pattern ensures the app never blocks on a network call to resolve a flag. But what happens when the network is completely unavailable?

Offline scenarios and version consistency

When a user opens the app in airplane mode or deep in a subway tunnel, the SDK must serve a valid configuration from its local cache without blocking the UI. The cache-first pattern handles this for returning users who have previously fetched a configuration. The harder problem is the cold-start scenario, where the app launches for the very first time after installation and has no cache at all.

To solve cold starts, the SDK ships with a bundled default configuration compiled directly into the app binary. This static JSON file contains safe default values for every known flag, ensuring the app renders a coherent experience even if the first network fetch fails entirely.
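The merge order on startup is: bundled defaults first, then any cached values on top. This sketch shows that precedence; the flag names and the `load_initial_flags` helper are hypothetical.

```python
# Bundled defaults compiled into the app binary (hypothetical flag names).
BUNDLED_DEFAULTS = {"new_checkout": False, "banner_color": "#0055AA"}

def load_initial_flags(cached_flags):
    """Cold start: begin from bundled defaults, overlay any cached values.

    First launch -> empty cache -> pure defaults; returning user ->
    the last fetched configuration wins over the compiled-in defaults.
    """
    flags = dict(BUNDLED_DEFAULTS)
    flags.update(cached_flags)
    return flags
```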

Version skew introduces another layer of complexity. At any given moment, millions of devices run different app versions. A flag targeting “app version >= 5.2” must evaluate correctly even when version 5.0 users receive the same configuration payload. Server-side evaluation handles this cleanly because the server inspects the app version from the user context before resolving flags. If client-side evaluation is used instead, the SDK must filter flags locally based on its own version.
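One subtle pitfall in version-aware targeting is comparing version strings lexicographically, which ranks "5.10" below "5.2". A sketch of a correct numeric comparison, assuming simple dotted numeric versions (no pre-release suffixes):

```python
def version_tuple(version: str):
    """'5.10.1' -> (5, 10, 1); tuple comparison is numeric, not lexicographic."""
    return tuple(int(part) for part in version.split("."))

def meets_min_version(context_version: str, min_version: str) -> bool:
    """Evaluate a rule like 'app version >= 5.2' against the user context."""
    return version_tuple(context_version) >= version_tuple(min_version)
```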

Feature flags are eventually consistent by nature. A user might see a feature in one session and not in the next if a rollout percentage decreases between fetches. The system minimizes this flickering by persisting the user’s assigned bucket locally and only re-evaluating when the server signals an explicit flag change, not on every polling cycle.

Practical tip: Use ETags or last-modified timestamps in your config fetch requests. The server responds with a 304 Not Modified when nothing has changed, avoiding redundant downloads and saving bandwidth on metered mobile connections.
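The conditional-fetch flow from the tip above can be sketched as follows. The `http_get` callable returning `(status, headers, body)` is an injected stand-in for the platform's HTTP client, not a real library API.

```python
import json

def fetch_config(http_get, url, cached):
    """Conditional fetch: send the cached ETag; a 304 means reuse the cache.

    `cached` is None on first fetch, else {"etag": ..., "config": ...}.
    """
    headers = {}
    if cached:
        headers["If-None-Match"] = cached["etag"]
    status, resp_headers, body = http_get(url, headers)
    if status == 304:
        return cached  # nothing changed; no payload was downloaded
    return {"etag": resp_headers["ETag"], "config": json.loads(body)}
```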


With the technical building blocks in place, let’s see how they combine into a complete system.

Putting it all together

A production-grade mobile feature flag system combines:

  • A management dashboard for flag definition and targeting.

  • A horizontally scalable configuration server with sub-10ms evaluation, backed by a fast cache layer.

  • A lightweight client SDK that reads from a local persistent cache first and refreshes via a hybrid polling-plus-push delivery model.

  • Deterministic bucketing for consistent gradual rollouts, with kill switch overrides.

What distinguishes this from server-side feature flagging is the set of mobile-specific constraints that permeate every architectural decision. Offline resilience demands bundled defaults and persistent caching. Battery-conscious delivery strategies rule out naive persistent connections. Version-aware targeting must account for a fragmented device ecosystem where dozens of app versions coexist simultaneously. Flag evaluation latency directly impacts app startup and screen rendering times, making sub-10ms resolution a hard requirement rather than a nice-to-have.

Conclusion

Designing a mobile feature flag service requires balancing data consistency with high availability. Because mobile clients operate in unreliable network conditions, the architecture must prioritize local persistence and synchronous flag resolution.

Deterministic hashing ensures users remain in their assigned segments throughout a rollout, while hybrid delivery combines polling for reliability with push-based triggers for urgent state changes like kill switches. By centralizing complex targeting logic through server-side evaluation and ensuring bundled defaults, mobile teams can achieve the same deployment safety found in server-side environments.